2. Installation
Stork can be installed from pre-built packages or from sources. The following sections describe both methods. Unless there’s a good reason to compile from sources, installing from native deb or RPM packages is easier and faster.
2.1. Supported Systems
Stork is tested on the following systems:
Ubuntu 18.04 and 20.04
Fedora 31 and 32
CentOS 8
MacOS 11.3*
Note that MacOS is not and will not be officially supported. Many developers on ISC’s team use Macs, so the goal is to keep Stork buildable on this platform.
The Stork Server and agents are written in the Go language; the server uses a PostgreSQL database. In principle, the software can be run on any POSIX system that has a Go compiler and PostgreSQL. It is likely the software can also be built on other modern systems, but for the time being ISC’s testing capabilities are modest. We encourage users to try running Stork on other OSes not on this list and report their findings to ISC.
2.2. Installation Prerequisites
The Stork Agent does not require any specific dependencies to run. It can be run immediately after installation.
Stork uses the status-get command to communicate with Kea, and therefore only works with a version of Kea that supports status-get, which was introduced in Kea 1.7.3 and backported to 1.6.3.
Stork requires the premium Host Commands (host_cmds)
hooks library to be loaded by the Kea instance to retrieve host
reservations stored in an external database. Stork does work without the Host Commands hooks library, but will not be able to display
host reservations. Stork can retrieve host reservations stored locally in the Kea configuration without any additional hooks
libraries.
Stork requires the open source Stat Commands (stat_cmds)
hooks library to be loaded by the Kea instance to retrieve lease
statistics. Stork does work without the Stat Commands hooks library, but will not be able to show pool utilization and other
statistics.
Stork uses the Go implementation for handling TLS connections, certificates, and keys. The secrets are stored in the PostgreSQL database, in the secret table.
For the Stork Server, a PostgreSQL database (https://www.postgresql.org/) version 10
or later is required. Stork will attempt to run with older versions but may not work
correctly. The general installation procedure for PostgreSQL is OS-specific and is not included
here. However, please note that Stork uses the pgcrypto extension, which often comes in a separate package. For example, the postgresql-crypto package is required on Fedora, and postgresql12-contrib is needed on RHEL and CentOS.
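For example, on systems where the extension ships separately, it can be installed with the distribution's package manager (the package names below follow the examples above and vary with the PostgreSQL version):

```shell
$ sudo dnf install postgresql-crypto        # Fedora
$ sudo dnf install postgresql12-contrib     # RHEL/CentOS 8, PostgreSQL 12
```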
These instructions prepare a database for use with the Stork Server, with the stork database user and stork password. Next, a database called stork is created and the pgcrypto extension is enabled in the database.
First, connect to PostgreSQL using psql and the postgres administration user. Depending on the system’s configuration, this may require switching to the user postgres first, using the su postgres command.
$ psql postgres
psql (11.5)
Type "help" for help.
postgres=#
Then, prepare the database:
postgres=# CREATE USER stork WITH PASSWORD 'stork';
CREATE ROLE
postgres=# CREATE DATABASE stork;
CREATE DATABASE
postgres=# GRANT ALL PRIVILEGES ON DATABASE stork TO stork;
GRANT
postgres=# \c stork
You are now connected to database "stork" as user "postgres".
stork=# create extension pgcrypto;
CREATE EXTENSION
Note
Make sure the actual password is stronger than ‘stork’, which is trivial to guess. Using default passwords is a security risk. Stork puts no restrictions on the characters used in the database passwords nor on their length. In particular, it accepts passwords containing spaces, quotes, double quotes, and other special characters.
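For example, a stronger password can be set at any time from the same psql session; the password shown is only a placeholder:

```sql
postgres=# ALTER USER stork WITH PASSWORD 'cH4n9e-Me-t0-s0meth1ng-str0ng';
ALTER ROLE
```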
2.3. Installing from Packages
Stork packages are stored in repositories located on the Cloudsmith service: https://cloudsmith.io/~isc/repos/stork/packages/. Both Debian/Ubuntu and RPM packages may be found there.
Detailed instructions for setting up the operating system to use this repository are available under the Set Me Up button on the Cloudsmith repository page.
It is possible to install both the Stork Agent and the Stork Server on the same machine. This is useful in small deployments with a single monitored machine, to avoid setting up a dedicated system for the Stork Server. In such cases, however, an operator must consider the possible impact of the Stork Server service on the other services running on the same machine.
2.3.1. Installing the Stork Server
2.3.1.1. Installing on Debian/Ubuntu
The first step for both Debian and Ubuntu is:
$ curl -1sLf 'https://dl.cloudsmith.io/public/isc/stork/cfg/setup/bash.deb.sh' | sudo bash
Next, install the Stork Server package:
$ sudo apt install isc-stork-server
2.3.1.2. Installing on CentOS/RHEL/Fedora
The first step for RPM-based distributions is:
$ curl -1sLf 'https://dl.cloudsmith.io/public/isc/stork/cfg/setup/bash.rpm.sh' | sudo bash
Next, install the Stork Server package:
$ sudo dnf install isc-stork-server
If dnf is not available, yum can be used instead:
$ sudo yum install isc-stork-server
2.3.1.3. Setup
The following steps are common for Debian-based and RPM-based distributions using systemd.
Configure the Stork Server settings in /etc/stork/server.env. The following settings are required for the database connection (they share the common STORK_DATABASE_ prefix):
STORK_DATABASE_HOST - the address of a PostgreSQL database; default is localhost
STORK_DATABASE_PORT - the port of a PostgreSQL database; default is 5432
STORK_DATABASE_NAME - the name of a database; default is stork
STORK_DATABASE_USER_NAME - the username for connecting to the database; default is stork
STORK_DATABASE_PASSWORD - the password for the username connecting to the database
Note
All of the database connection settings have default values, but we strongly recommend protecting the database with a non-default and hard-to-guess password in the production environment. The STORK_DATABASE_PASSWORD setting must be adjusted accordingly.
The next settings pertain to the server's REST API configuration (the STORK_REST_ prefix):
STORK_REST_HOST - IP address on which the server listens
STORK_REST_PORT - port number on which the server listens; default is 8080
STORK_REST_TLS_CERTIFICATE - a file with a certificate to use for secure connections
STORK_REST_TLS_PRIVATE_KEY - a file with a private key to use for secure connections
STORK_REST_TLS_CA_CERTIFICATE - a certificate authority file used for mutual TLS authentication
STORK_REST_STATIC_FILES_DIR - a directory with static files served in the UI
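Putting the above together, a minimal /etc/stork/server.env might look as follows; all values here are examples to be adapted to the deployment:

```ini
### database connection
STORK_DATABASE_HOST=localhost
STORK_DATABASE_PORT=5432
STORK_DATABASE_NAME=stork
STORK_DATABASE_USER_NAME=stork
STORK_DATABASE_PASSWORD=example-strong-password

### REST API
STORK_REST_HOST=0.0.0.0
STORK_REST_PORT=8080
```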
The final setting pertains to the server's Prometheus /metrics endpoint configuration (the STORK_SERVER_ prefix is used for general-purpose settings):
STORK_SERVER_ENABLE_METRICS - enable the Prometheus metrics collector and the /metrics HTTP endpoint
Warning
The Prometheus /metrics endpoint does not require authentication. Securing this endpoint against external access is therefore highly recommended, to prevent unauthorized parties from gathering the server's metrics. One way to restrict access is to use an appropriate HTTP proxy configuration that allows connections only from localhost or from the Prometheus host. Please consult the NGINX example configuration file shipped with Stork.
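As an illustration of the proxy-based approach, an NGINX fragment like the following could limit /metrics to localhost and a single Prometheus host. The addresses and the upstream port are examples; the example file shipped with Stork remains the authoritative reference:

```nginx
location /metrics {
    allow 127.0.0.1;      # local access
    allow 192.0.2.10;     # example Prometheus host address
    deny all;
    proxy_pass http://127.0.0.1:8080;
}
```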
With the settings in place, the Stork Server service can now be enabled and started:
$ sudo systemctl enable isc-stork-server
$ sudo systemctl start isc-stork-server
To check the status:
$ sudo systemctl status isc-stork-server
Note
By default, the Stork Server web service is exposed on port 8080 and can be tested using a web browser at http://localhost:8080. To use a different IP address or port, set the STORK_REST_HOST and STORK_REST_PORT variables in the /etc/stork/server.env file.
The Stork Server can be configured to run behind an HTTP reverse proxy using Nginx or Apache. The Stork Server package contains an example configuration file for Nginx, in /usr/share/stork/examples/nginx-stork.conf.
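A minimal reverse-proxy sketch, assuming the Stork Server listens on localhost port 8080 and the hostname is an example (see the shipped nginx-stork.conf for a complete configuration):

```nginx
server {
    listen 80;
    server_name stork.example.org;

    location / {
        # Forward requests to the local Stork Server REST port.
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```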
2.3.1.4. Securing Database Connection
The PostgreSQL server can be configured to encrypt communication between the clients and the server. Detailed information on how to enable encryption on the database server, and how to create the suitable certificate and key files, is available in the PostgreSQL documentation.
The Stork Server supports secure communication with the database. The following configuration settings in the server.env file enable and configure communication encryption with the database server. They correspond to the SSL settings provided by libpq, the native PostgreSQL client library written in C:
STORK_DATABASE_SSLMODE - the SSL mode for connecting to the database (i.e., disable, require, verify-ca or verify-full); default is disable
STORK_DATABASE_SSLCERT - the location of the SSL certificate used by the server to connect to the database
STORK_DATABASE_SSLKEY - the location of the SSL key used by the server to connect to the database
STORK_DATABASE_SSLROOTCERT - the location of the root certificate file used to verify the database server’s certificate
The default SSL mode setting, disable, configures the server to use unencrypted communication with the database. The other settings have the following meanings:
require - use secure communication, but do not verify the server's identity unless the root certificate location is specified and that certificate exists. If the root certificate exists, the behavior is the same as in verify-ca mode.
verify-ca - use secure communication and verify the server's identity by checking it against the root certificate stored on the Stork Server machine.
verify-full - use secure communication and verify the server's identity against the root certificate. In addition, check that the server hostname matches the name stored in the certificate.
Specifying the SSL certificate and key location is optional. If they are not specified, the Stork Server will use the ones from the current user’s home directory: ~/.postgresql/postgresql.crt and ~/.postgresql/postgresql.key. If they are not present, Stork will try to find suitable keys in common system locations.
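For example, to require full verification of the database server's identity, server.env might contain the following; the certificate and key paths are illustrative:

```ini
STORK_DATABASE_SSLMODE=verify-full
STORK_DATABASE_SSLCERT=/etc/stork/certs/stork-db-client.crt
STORK_DATABASE_SSLKEY=/etc/stork/certs/stork-db-client.key
STORK_DATABASE_SSLROOTCERT=/etc/stork/certs/db-root.crt
```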
Please consult the libpq documentation for details of the corresponding libpq configuration.
2.3.2. Installing the Stork Agent
There are two ways to install a packaged Stork Agent on a monitored machine. The first method is to use the Cloudsmith repository, as in the case of the Stork Server installation. The second method is to use an installation script provided by the Stork Server, which downloads the agent packages embedded in the server package. The second installation method has been supported since the Stork 0.15.0 release. The preferred installation method depends on the selected agent registration type. The supported registration methods are described in Securing Connections Between Stork Server and Stork Agents.
2.3.2.1. Agent Configuration Settings
The following are the Stork Agent configuration settings, available in /etc/stork/agent.env after installing the package. All these settings use the STORK_AGENT_ prefix to indicate that they configure the Stork Agent.
The general settings:
STORK_AGENT_HOST - the IP address of the network interface or DNS name which Stork Agent should use to receive the connections from the server; default is 0.0.0.0 (i.e. listen on all interfaces)
STORK_AGENT_PORT - the port number the agent should use to receive the connections from the server; default is 8080
STORK_AGENT_LISTEN_STORK_ONLY - enable Stork functionality only, i.e. disable Prometheus exporters; default is false
STORK_AGENT_LISTEN_PROMETHEUS_ONLY - enable Prometheus exporters only, i.e. disable Stork functionality; default is false
STORK_AGENT_SKIP_TLS_CERT_VERIFICATION - skip TLS certificate verification when the Stork Agent connects to Kea over TLS and Kea uses self-signed certificates; default is false
The following settings are specific to the Prometheus exporters:
STORK_AGENT_PROMETHEUS_KEA_EXPORTER_ADDRESS - the IP address or hostname the agent should use to receive the connections from Prometheus fetching Kea statistics; default is 0.0.0.0
STORK_AGENT_PROMETHEUS_KEA_EXPORTER_PORT - the port the agent should use to receive the connections from Prometheus fetching Kea statistics; default is 9547
STORK_AGENT_PROMETHEUS_KEA_EXPORTER_INTERVAL - specifies how often the agent collects stats from Kea, in seconds; default is 10
STORK_AGENT_PROMETHEUS_BIND9_EXPORTER_ADDRESS - the IP address or hostname the agent should use to receive the connections from Prometheus fetching BIND9 statistics; default is 0.0.0.0
STORK_AGENT_PROMETHEUS_BIND9_EXPORTER_PORT - the port the agent should use to receive the connections from Prometheus fetching BIND9 statistics; default is 9119
STORK_AGENT_PROMETHEUS_BIND9_EXPORTER_INTERVAL - specifies how often the agent collects stats from BIND9, in seconds; default is 10
The last setting is used only when Stork Agents register in the Stork Server using an agent token:
STORK_AGENT_SERVER_URL - Stork Server URL used by the agent to send REST commands to the server during agent registration
Warning
The Stork Server currently does not support communication with the Stork Agents via an IPv6 link-local address with a zone ID (e.g., fe80::%eth0). This means that the STORK_AGENT_HOST variable must be set to a DNS name, an IPv4 address, or a non-link-local IPv6 address.
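Putting the settings together, an example /etc/stork/agent.env might read as follows; all values are illustrative and should be adapted to the deployment:

```ini
STORK_AGENT_HOST=stork-agent.example.org
STORK_AGENT_PORT=8080
STORK_AGENT_PROMETHEUS_KEA_EXPORTER_PORT=9547
STORK_AGENT_PROMETHEUS_BIND9_EXPORTER_PORT=9119
### used only for registration with an agent token
STORK_AGENT_SERVER_URL=http://stork-server.example.org:8080
```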
2.3.2.2. Securing Connections Between Stork Server and Stork Agents
Connections between the server and the agents are secured using standard cryptography solutions, i.e. PKI and TLS.
The server generates the required keys and certificates during its first startup. They are used to establish safe, encrypted connections between the server and the agents, with authentication of both ends of these connections. The agents use the keys and certificates generated by the server to create agent-side keys and certificates during the agents' registration procedure described in the next sections. Each agent generates a private key and a certificate signing request (CSR); the certificate signed by the server, together with the agent's private key, is then used for authentication and connection encryption.
An agent can be registered in the server using one of the two supported methods:
using an agent token,
using a server token.
In the first case, an agent generates a token and passes it to the server when requesting registration. The server associates the token with the particular agent. A Stork super admin must approve the registration request in the web UI, ensuring that the token displayed in the UI matches the agent's token in the logs. The Stork Agent is typically installed from the Cloudsmith repository when this registration method is used.
In the second registration method, the server generates a token common to all new registrations. The super admin must copy the token from the UI and paste it into the agent's terminal during the interactive agent registration procedure. This registration method does not require any additional approval of the agent's registration request in the web UI. If the pasted server token is correct, the agent should be authorized in the UI when the interactive registration completes. The Stork Agent is typically installed using a script that downloads the agent packages embedded in the server when this registration method is used.
The applicability of the two methods is described in Registration Methods Summary.
The installation and registration processes for both methods are described in the subsequent sections.
2.3.2.3. Securing Connections Between Stork Agent and Kea Control Agent
The Kea Control Agent may be configured to accept connections only over TLS. This requires specifying the trust-anchor, cert-file, and key-file values in kea-ctrl-agent.conf. For details, see the Kea Administrator Reference Manual.
The Stork Agent can communicate with Kea over TLS, using the same certificates that it uses in communication with the Stork Server.
By default, the Stork Agent requires that the Kea Control Agent provide a trusted TLS certificate. If Kea uses a self-signed certificate, the Stork Agent can be launched with the --skip-tls-cert-verification flag, or with the STORK_AGENT_SKIP_TLS_CERT_VERIFICATION environment variable set to 1, to disable Kea certificate verification.
When the cert-required parameter is set to true in the Kea CA configuration file, the Kea CA accepts only requests signed with a trusted certificate. In this case, the Stork Agent must use valid certificates; it cannot use the self-signed certificates created during Stork Agent registration.
Kea 1.9.0 added support for basic HTTP authentication to control access for incoming REST commands over HTTP. If the Kea CA is configured to use Basic Auth, valid credentials must be provided in the Stork Agent's credentials file: /etc/stork/agent-credentials.json.
By default, this file does not exist, but /etc/stork/agent-credentials.json.template is installed with example data. Rename the template file by removing the .template suffix, then edit it to provide valid credentials. Also use the chown and chmod commands to set proper permissions; this file contains secrets and should be readable and writable only by the user running the Stork Agent and by administrators.
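The preparation steps can be sketched as follows, assuming a package installation where the agent runs as the stork-agent system user; here the template is copied rather than renamed, so the example data remains available for reference:

```shell
$ sudo cp /etc/stork/agent-credentials.json.template /etc/stork/agent-credentials.json
$ sudo chown stork-agent: /etc/stork/agent-credentials.json
$ sudo chmod 600 /etc/stork/agent-credentials.json
```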
Warning
Basic HTTP authentication is weak on its own, as there are known dictionary attacks; however, those attacks require a man-in-the-middle with access to the HTTP traffic. That risk can be eliminated by using basic HTTP authentication exclusively over TLS. In fact, where possible, using client certificates for TLS is better than using basic HTTP authentication.
For example:
{
"basic_auth": [
{
"ip": "127.0.0.1",
"port": 8000,
"user": "foo",
"password": "bar"
}
]
}
The file contains a single object with a single "basic_auth" key. Its value is a list of Basic Auth credentials. Each credential must contain values for 4 keys:
ip - the IPv4 or IPv6 address of the Kea CA. IPv6 abbreviations are supported (e.g. "FF:0000::" is the same as "ff::").
port - the Kea Control Agent port number.
user - the Basic Auth user-id to use in connections to the specific Kea CA.
password - the Basic Auth password to use in connections to the specific Kea CA.
To apply changes in the credentials file, restart the Stork Agent daemon.
If the credentials file is invalid, the Stork Agent runs as usual but without Basic Auth support; this is indicated with a specific message in the log.
2.3.2.4. Installation from Cloudsmith and Registration with an Agent Token
This section describes installing an agent from the Cloudsmith repository and performing the agent’s registration using an agent token.
The Stork Agent installation steps are similar to the Stork Server installation steps described in Installing on Debian/Ubuntu and Installing on CentOS/RHEL/Fedora. Use one of the following commands, depending on your Linux distribution:
$ sudo apt install isc-stork-agent
$ sudo dnf install isc-stork-agent
in place of the commands installing the server.
Next, specify the required settings in the /etc/stork/agent.env file. The STORK_AGENT_SERVER_URL should be the URL on which the server receives the REST connections, e.g. http://stork-server.example.org:8080. The STORK_AGENT_HOST should point to the agent's address (or name), e.g. stork-agent.example.org. Finally, a non-default agent port can be specified with STORK_AGENT_PORT.
Note
Even though the examples provided in this documentation use the http scheme, we highly recommend using secure protocols in production environments. We use http in the examples because it usually makes it easier to start testing the software and to eliminate all issues unrelated to the use of https before it is enabled.
Start the agent service:
$ sudo systemctl enable isc-stork-agent
$ sudo systemctl start isc-stork-agent
To check the status:
$ sudo systemctl status isc-stork-agent
You should expect the following log messages when the agent successfully sends the registration request to the server:
machine registered
stored agent signed cert and CA cert
registration completed successfully
A server administrator must approve the registration request via the web UI before the machine can be monitored. Visit the Services -> Machines page. Click the Unauthorized button located above the list of machines on the right side; this list contains all machines pending registration approval. Before authorizing a machine, ensure that the agent token displayed on this list is the same as the agent token in the agent's logs or in the /var/lib/stork-agent/tokens/agent-token.txt file. If they match, click the Action button and select Authorize. The machine should now be visible on the list of authorized machines.
2.3.2.5. Installation with a Script and Registration with a Server Token
This section describes installing an agent using a script and packages downloaded from the Stork Server, and performing the agent's registration using a server token.
Open Stork in the web browser and log in as a user from the super admin group. Select Services and then Machines from the menu. Click the How to Install Agent on New Machine button to display the agent installation instructions. Copy and paste the commands from the displayed window into the terminal on the machine where the agent is to be installed. These commands are also provided here for convenience:
$ wget http://stork.example.org:8080/stork-install-agent.sh
$ chmod a+x stork-install-agent.sh
$ sudo ./stork-install-agent.sh
Please note that this document provides an example URL of the Stork Server; it must be replaced with the server URL used in the particular deployment.
The script downloads an OS-specific agent package from the Stork Server (deb or RPM), installs the package, and starts the agent's registration procedure.
In the agent machine’s terminal, a prompt for a server token is presented:
>>>> Server access token (optional):
The server token is available to a super admin user after clicking the How to Install Agent on New Machine button on the Services -> Machines page.
Copy the server token from the dialog box and paste it in the prompt
displayed on the agent machine.
The following prompt appears next:
>>>> IP address or FQDN of the host with Stork Agent (the Stork Server will use it to connect to the Stork Agent):
Specify an IP address or FQDN which the server should use to reach out to an agent via the secure gRPC channel.
When asked for the port:
>>>> Port number that Stork Agent will use to listen on [8080]:
specify the port number for the gRPC connections, or hit Enter if the default port 8080 matches your settings.
If the registration is successful, the following messages are displayed:
machine ping over TLS: OK
registration completed successfully
Unlike in Installation from Cloudsmith and Registration with an Agent Token, this registration method does not require approval via the web UI. The machine should already be listed among the authorized machines.
2.3.2.6. Installation with a Script and Registration with an Agent Token
This section describes installing an agent using a script and packages downloaded from the Stork Server, and performing the agent's registration using an agent token. It is an interactive alternative to the procedure described in Installation from Cloudsmith and Registration with an Agent Token.
Start the interactive registration procedure following the steps in the Installation with a Script and Registration with a Server Token.
In the agent machine’s terminal, a prompt for a server token is presented:
>>>> Server access token (optional):
Because this registration method does not use the server token, do not type anything in this prompt. Hit Enter to move on.
The following prompt appears next:
>>>> IP address or FQDN of the host with Stork Agent (the Stork Server will use it to connect to the Stork Agent):
Specify an IP address or FQDN which the server should use to reach out to an agent via the secure gRPC channel.
When asked for the port:
>>>> Port number that Stork Agent will use to listen on [8080]:
specify the port number for the gRPC connections, or hit Enter if the default port 8080 matches your settings.
You should expect the following log messages when the agent successfully sends the registration request to the server:
machine registered
stored agent signed cert and CA cert
registration completed successfully
As in Installation from Cloudsmith and Registration with an Agent Token, the agent's registration request must be approved in the web UI to start monitoring the newly registered machine.
2.3.2.7. Installation from Cloudsmith and Registration with a Server Token
This section describes installing an agent from the Cloudsmith repository and performing the agent’s registration using a server token. It is an alternative to the procedure described in Installation with a Script and Registration with a Server Token.
The Stork Agent installation steps are similar to the Stork Server installation steps described in Installing on Debian/Ubuntu and Installing on CentOS/RHEL/Fedora. Use one of the following commands, depending on your Linux distribution:
$ sudo apt install isc-stork-agent
$ sudo dnf install isc-stork-agent
in place of the commands installing the server.
Start the agent service:
$ sudo systemctl enable isc-stork-agent
$ sudo systemctl start isc-stork-agent
To check the status:
$ sudo systemctl status isc-stork-agent
Start the interactive registration procedure with the following command:
$ su stork-agent -s /bin/sh -c 'stork-agent register -u http://stork.example.org:8080'
where the last parameter should be the appropriate Stork Server’s URL.
Follow the same registration steps as described in the Installation with a Script and Registration with a Server Token.
2.3.2.8. Registration Methods Summary
Stork supports the two agent registration methods described above. They can be used interchangeably, and it is often a matter of preference which one the administrator selects. However, it is worth mentioning that agent token registration may be more suitable in some situations. This method requires a server URL, agent address (or name), and agent port as registration settings. If they are known upfront, it is possible to prepare a system (or container) image with the agent offline. After the image starts, the agent sends the registration request to the server and awaits authorization in the web UI.
Agent registration with the server token is always manual. It requires copying the token from the web UI, logging into the agent's machine, and pasting the token. Therefore, registration using the server token is not appropriate when it is impossible or awkward to access the machine's terminal, e.g. in Docker. On the other hand, registration using the server token is more straightforward, because it does not require approval of unauthorized agents via the web UI.
If the server token leaks, there is a risk that rogue agents will register. In that case, the administrator should regenerate the token to prevent uncontrolled registration of new agents. Regenerating the token does not affect already-registered agents; the new token must be used for new registrations.
The server token can be regenerated in the How to Install Agent on New Machine dialog box, available after entering the Services -> Machines page.
2.3.2.9. Agent Setup Summary
After successful agent setup, the agent periodically tries to detect installed Kea DHCP or BIND 9 services on the system. If it finds them, they are reported to the Stork Server when it connects to the agent.
Further configuration and usage of the Stork Server and the Stork Agent are described in the Using Stork chapter.
2.3.2.10. Inspecting Keys and Certificates
The Stork Server maintains TLS keys and certificates internally to secure communication between the Stork Server and the Stork Agents. They can be inspected and exported using the Stork Tool, e.g.:
$ stork-tool cert-export --db-url postgresql://user:pass@localhost/dbname -f srvcert -o srv-cert.pem
The above command may fail when the database password contains characters requiring URL encoding. In this case, a command line with multiple switches can be used instead:
$ stork-tool cert-export --db-user user --db-password pass --db-host localhost --db-name dbname -f srvcert -o srv-cert.pem
The certificates can be inspected using openssl (e.g. openssl x509 -noout -text -in srv-cert.pem). The secret keys can be inspected in a similar fashion (e.g. openssl ec -noout -text -in cakey).
For more details, check the stork-tool manual: stork-tool - A tool for managing Stork Server. There are five secrets that can be exported or imported: the Certificate Authority secret key (cakey), the Certificate Authority certificate (cacert), the Stork Server private key (srvkey), the Stork Server certificate (srvcert), and a server token (srvtkn).
2.3.2.11. Using External Keys and Certificates
It is possible to use external TLS keys and certificates. They can be imported to the Stork Server using stork-tool:
$ stork-tool cert-import --db-url postgresql://user:pass@localhost/dbname -f srvcert -i srv-cert.pem
The above command may fail when the database password contains characters requiring URL encoding. In this case, a command line with multiple switches can be used instead:
$ stork-tool cert-import --db-user user --db-password pass --db-host localhost --db-name dbname -f srvcert -i srv-cert.pem
Both the CA key and the CA certificate have to be changed at the same time, as the CA certificate depends on the CA key. If they are changed, the server key and certificate also need to be changed.
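For example, replacing the CA pair and then the dependent server pair might look as follows; the -f secret names are those listed in Inspecting Keys and Certificates, while the input file names are illustrative:

```shell
$ stork-tool cert-import --db-url postgresql://user:pass@localhost/dbname -f cakey -i ca-key.pem
$ stork-tool cert-import --db-url postgresql://user:pass@localhost/dbname -f cacert -i ca-cert.pem
$ stork-tool cert-import --db-url postgresql://user:pass@localhost/dbname -f srvkey -i srv-key.pem
$ stork-tool cert-import --db-url postgresql://user:pass@localhost/dbname -f srvcert -i srv-cert.pem
```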
The capability to use external certificates and keys is considered experimental.
For more details, check the stork-tool manual: stork-tool - A tool for managing Stork Server.
2.3.3. Upgrading
Due to the new security model introduced with TLS in the Stork 0.15.0 release, upgrades from versions 0.14.0 and earlier require registering the agents from scratch.
The server upgrade procedure is the same as the installation procedure.
First, install the new packages on the server. The installation scripts in the deb/RPM packages perform the required database and other migrations.
2.4. Installing From Sources
2.4.1. Compilation Prerequisites
Usually, it is more convenient to install Stork using native packages. See Supported Systems and Installing from Packages for details regarding supported systems. However, the sources can also be built separately.
The dependencies that need to be installed to build the Stork sources are:
Rake
Java Runtime Environment (only if building natively, not using Docker)
Docker (only if running in containers; this is needed to build the demo)
Other dependencies are installed automatically in a local directory by Rake tasks. This does not require root privileges. If the demo environment will be run, Docker is needed but not Java (Docker will install Java within a container).
For details about the environment, please see the Stork wiki at https://gitlab.isc.org/isc-projects/stork/-/wikis/Install .
2.4.2. Download Sources
The Stork sources are available on the ISC GitLab instance: https://gitlab.isc.org/isc-projects/stork.
To get the latest sources invoke:
$ git clone https://gitlab.isc.org/isc-projects/stork
2.4.3. Building
There are two Stork components:
Stork Agent - the stork-agent binary, written in Go
Stork Server - comprised of two parts:
- a backend service, written in Go
- a frontend, an Angular application written in TypeScript
All components can be built using the following command:
$ rake build_all
The agent component is installed using this command:
$ rake install_agent
and the server component with this command:
$ rake install_server
By default, all components are installed in the root folder in the current directory; however, this is not useful for installation in a production environment. The destination can be customized via the DESTDIR variable, e.g.:
$ sudo rake install_server DESTDIR=/usr
2.5. Stork Tool (optional)
To initialize the database directly, the Stork Tool must be built and used to initialize and upgrade the database to the latest schema. However, this is optional, as the database migration is triggered automatically upon server startup. It is only useful if, for some reason, it is desirable to set up the database but not yet run the server.
Stork Tool also provides commands to import and export TLS certificates in the database, and should be built whenever such capability is needed. See Inspecting Keys and Certificates for usage details.
$ rake build_tool
$ backend/cmd/stork-tool/stork-tool db-init
$ backend/cmd/stork-tool/stork-tool db-up
The db-up and db-down commands have an optional -t parameter that specifies the desired schema version. This is only useful when debugging database migrations.
$ # migrate up to version 25
$ backend/cmd/stork-tool/stork-tool db-up -t 25
$ # migrate down back to version 17
$ backend/cmd/stork-tool/stork-tool db-down -t 17
Note that the server requires the latest database version to run, always runs the migration on its own, and will refuse to start if the migration fails for any reason. The migration tool is mostly useful for debugging problems with migration or migrating the database without actually running the service. For complete reference, see the manual page here: stork-tool - A tool for managing Stork Server.
To debug migrations, another useful feature is SQL tracing, using the --db-trace-queries parameter. It takes either "all" (trace all SQL operations, including migrations and run-time) or "run" (trace only run-time operations, skipping migrations). If specified without a value, "all" is assumed. With tracing enabled, stork-tool prints all of its SQL queries to stderr. For example, the following commands can be used to generate an SQL script that updates the schema. Note that for some migrations the steps depend on the contents of the database, so this is not a universal Stork schema. This parameter is also supported by the Stork Server.
$ backend/cmd/stork-tool/stork-tool db-down -t 0
$ backend/cmd/stork-tool/stork-tool db-up --db-trace-queries 2> stork-schema.txt
2.6. Integration With Prometheus and Grafana
Stork can optionally be integrated with Prometheus, an open-source monitoring and alerting toolkit, and Grafana, an easy-to-use analytics platform for querying, visualization, and alerting. Grafana requires an external data-storage backend; Prometheus is currently the only such backend supported by both Stork and Grafana. It is possible to use Prometheus without Grafana, but using Grafana requires Prometheus.
2.6.1. Prometheus Integration
By default, the Stork Agent makes Kea statistics, and some limited BIND 9 statistics, available in a format understandable by Prometheus; in Prometheus nomenclature, the Stork Agent works as a Prometheus exporter. If a Prometheus server is available, it can be configured to monitor Stork Agents. To enable Stork Agent monitoring, edit prometheus.yml (typically stored in /etc/prometheus/, but this may vary depending on the installation) and add the following entries:
  # statistics from Kea
  - job_name: 'kea'
    static_configs:
      - targets: ['agent-kea.example.org:9547', 'agent-kea6.example.org:9547', ... ]

  # statistics from bind9
  - job_name: 'bind9'
    static_configs:
      - targets: ['agent-bind9.example.org:9119', 'another-bind9.example.org:9119', ... ]
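For context, the scrape entries above live under Prometheus' scrape_configs key. A minimal complete prometheus.yml might look like the following sketch (the scrape_interval value is an illustrative choice, not a Stork requirement):

```yaml
global:
  scrape_interval: 15s   # how often Prometheus polls its targets (example value)

scrape_configs:
  # statistics from Kea, exported by the Stork Agent on TCP port 9547
  - job_name: 'kea'
    static_configs:
      - targets: ['agent-kea.example.org:9547']
```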
By default, the Stork Agent exports Kea data on TCP port 9547 (and BIND 9 data on TCP port 9119). This can be configured using command-line parameters, or the Prometheus export can be disabled altogether. For details, see the stork-agent manual page at stork-agent - Stork Agent that monitors BIND 9 and Kea services.
The Stork Server can optionally be integrated as well, but its Prometheus support is disabled by default. To enable it, run the server with the -m or --metrics flag, or set the STORK_SERVER_ENABLE_METRICS environment variable. Next, update the prometheus.yml file:
  # statistics from Stork Server
  - job_name: 'storkserver'
    static_configs:
      - targets: ['server.example.org:8080']
The Stork Server exports metrics on its assigned HTTP/HTTPS port (defined via the --rest-port flag).
Note
The Prometheus server periodically collects metrics from its targets (e.g. the Stork Server or a Stork Agent) via HTTP calls. By convention, the endpoint that shares the metrics is at the /metrics path and returns data in a Prometheus-specific format.
Warning
The Prometheus /metrics endpoint does not require authentication. Therefore, securing this endpoint from external access is highly recommended, to prevent unauthorized parties from gathering the server's metrics. One way to restrict access is with an appropriate HTTP proxy configuration that allows only local access or access from the Prometheus host. Please consult the NGINX example configuration file shipped with Stork.
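As a sketch of such a proxy restriction (assuming NGINX; the addresses and port are illustrative, and the example file shipped with Stork should be preferred):

```nginx
# Restrict /metrics to localhost and the Prometheus host
location /metrics {
    allow 127.0.0.1;     # local access
    allow 192.0.2.10;    # Prometheus server (example address)
    deny all;
    proxy_pass http://127.0.0.1:8080;  # Stork Server REST port (example)
}
```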
After restarting, the Prometheus web interface can be used to inspect whether statistics are exported properly.
Kea statistics use the kea_ prefix (e.g. kea_dhcp4_addresses_assigned_total); BIND 9 statistics will eventually use the bind_ prefix (e.g. bind_incoming_queries_tcp); Stork Server statistics use the storkserver_ prefix.
2.6.2. Alerting in Prometheus
Prometheus provides the capability to configure alerting. A good starting point is the Prometheus documentation on Alerting. Briefly, the three main steps are: configure the Alertmanager, configure Prometheus to talk to the Alertmanager, and define the alerting rules in Prometheus. There are no specific requirements or recommendations, as these are very deployment-dependent. The following is an incomplete list of ideas that could be considered:
The storkserver_auth_unreachable_machine_total metric is reported by the Stork Server and shows the number of unreachable machines. Under normal circumstances its value should be zero; configuring an alert on non-zero values may catch the largest-scope problems, such as a whole VM or server becoming unavailable.
The storkserver_auth_authorized_machine_total and storkserver_auth_unauthorized_machine_total metrics may be used to monitor situations where new machines appear in the network (e.g. by automated VM cloning) or existing machines disappear.
kea_dhcp4_addresses_assigned_total together with kea_dhcp4_addresses_total can be used to calculate pool utilization. If the server allocates all available addresses, it cannot handle new devices, which is one of the most common failure cases of a DHCPv4 server. Depending on the deployment specifics, an alert threshold as pool utilization approaches 100% should be seriously considered.
Contrary to popular belief, DHCPv6 can also run out of resources, in particular with Prefix Delegation. kea_dhcp6_pd_assigned_total divided by kea_dhcp6_pd_total can be considered the PD pool utilization; it is an important metric if PD is being used.
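The ideas above can be sketched as Prometheus alerting rules. The following is an illustrative fragment only: the metric names are those listed above, while the rule names, thresholds, and durations are example choices, not recommendations:

```yaml
groups:
  - name: stork-alerts
    rules:
      # Any unreachable machine is worth investigating
      - alert: StorkUnreachableMachines
        expr: storkserver_auth_unreachable_machine_total > 0
        for: 5m
        annotations:
          summary: "Stork reports unreachable machines"

      # DHCPv4 pool utilization approaching 100%
      - alert: KeaDhcp4PoolNearlyFull
        expr: kea_dhcp4_addresses_assigned_total / kea_dhcp4_addresses_total > 0.9
        for: 15m
        annotations:
          summary: "DHCPv4 pool utilization above 90%"
```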
Compared to Grafana alerting, the alerting mechanism configured in Prometheus has the relative advantage of not requiring an additional component (Grafana). The alerting rules are defined in a text file using simple YAML syntax; for details, see the Prometheus documentation on alerting rules. One potentially important feature is Prometheus' ability to automatically discover available Alertmanager instances, which may be helpful for various redundancy considerations. The Alertmanager provides a rich list of receivers, which are the actual notification mechanisms used: email, PagerDuty, Pushover, Slack, OpsGenie, webhook, WeChat, and more.
ISC makes no specific recommendation between Prometheus and Grafana alerting; this is a deployment consideration.
2.6.3. Grafana Integration
Stork provides several Grafana templates that can easily be imported; these are available in the grafana/ directory of the Stork source code. The currently available templates are bind9-resolver.json, kea-dhcp4.json, and kea-dhcp6.json. Grafana integration requires three steps:
1. Prometheus must be added as a data source. This can be done in several ways, including via the user interface or by editing the Grafana configuration files; using the user interface is the easiest method. For details, see the Grafana documentation about Prometheus integration. In the Grafana user interface, select Configuration, then Data Sources, click "Add data source," choose Prometheus, and specify the parameters necessary to connect to the Prometheus instance. In test environments, the only strictly necessary parameter is the URL, but authentication is also desirable in most production deployments.
2. Import the existing dashboard. In the Grafana UI, click Dashboards, then Manage, then Import, and select one of the templates, e.g. kea-dhcp4.json. Make sure to select the Prometheus data source added in the previous step. Once imported, the dashboard can be tweaked as needed.
3. Once Grafana is configured, go to the Stork user interface, log in as super-admin, click Settings in the Configuration menu, and add the URLs pointing to the Grafana and Prometheus installations. Once this is done, Stork can show, for each subnet, links leading to that subnet in Grafana.
Alternatively, a Prometheus data source can be added by editing datasource.yaml (typically stored in /etc/grafana, but this may vary depending on the installation) and adding entries similar to this one:
datasources:
  - name: Stork-Prometheus instance
    type: prometheus
    access: proxy
    url: http://prometheus.example.org:9090
    isDefault: true
    editable: false
Also, the Grafana dashboard files can be copied to /var/lib/grafana/dashboards/ (again, this may vary depending on the installation).
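For example (a transcript sketch; the dashboard file names are those from the grafana/ directory of the source tree, and the target path may vary by installation):

```shell
# Provision the Stork dashboards by copying them into Grafana's dashboard directory
$ sudo cp grafana/kea-dhcp4.json grafana/kea-dhcp6.json grafana/bind9-resolver.json \
    /var/lib/grafana/dashboards/
```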
Example dashboards with some live data can be seen in the Stork screenshots gallery.
2.6.4. Subnet identification
The Kea Control Agent shares subnet statistics labeled with internal Kea subnet IDs. The subnet labels shown in Prometheus/Grafana depend on the installed Kea hooks: by default, the internal numeric Kea IDs are used, but if the subnet_cmds hook is installed, the numeric IDs are resolved to subnet prefixes, which makes the Grafana dashboards more human-friendly and descriptive.
2.6.5. Alerting in Grafana
Grafana provides an alternative alerting mechanism that can also be used with Stork. It offers multiple options and the user is encouraged to see the Grafana page on alerting.
The list of notification channels (i.e. the delivery mechanisms) is flexible, supporting email, webhook, Prometheus' Alertmanager, PagerDuty, Slack, Telegram, Discord, Google Hangouts, Kafka REST Proxy, Microsoft Teams, OpsGenie, Pushover, and more. Existing dashboards provided by Stork can be modified, or new dashboards can be created. Grafana first requires a notification channel to be configured (Alerting -> Notification Channels menu). Once configured, existing panels can be edited to add alert rules. One caveat is that most panels in the Stork dashboards use template variables, which are not supported in alerting; this Stack Overflow thread discusses several ways to overcome that limitation.
Compared to Prometheus alerting, Grafana alerting is somewhat more user-friendly: alerts are set using the web interface, with a flexible approach that allows a custom notification message (possibly including instructions on what to do when receiving such an alert) and control over how to treat situations where the received data is null or a timeout occurs.
The defined alerts are considered an integral part of a dashboard. This may be a factor in a deployment configuration, e.g. the dashboard could be tweaked to specific needs and then deployed to multiple sites.