In this lesson we will walk through the steps required for PRTG to connect to a remote Docker socket for Docker container monitoring.
We assume you have at least a basic understanding of Docker and Linux. If you want to refresh your knowledge, we recommend looking at the lesson Docker Basics.
Explaining Linux in depth would be far out of scope for this lesson, but an answer to almost any Linux-related question can be found on the internet. In any case, if you follow along carefully, the listed commands should work with only minor adjustments.
Monitoring as much of your IT infrastructure as possible has many benefits; discovering bottlenecks and gaining insights for predictive measures are only the tip of the iceberg.
PRTG is a solid monitoring solution that is already present and actively used in many IT departments. Because there are plenty of different monitoring solutions out there, this article is tailored to the way PRTG handles Docker container monitoring.
PRTG requires the Docker socket to be exposed to the network, which is not the case in a default setup, and for good reason: an exposed and unsecured port is a major security risk! Whoever is able to connect to the Docker socket can easily gain full control of the system, meaning root access.
It is therefore really important to handle these configurations with care. The measure we are going to take is to secure remote access using TLS certificates. You can read more about this in the Docker docs.
A guide on the PRTG Docker Container Sensor can be found here.
First of all we need to create a set of certificates. There are basically two options for doing this: have the certificates signed by a trusted certificate authority, or sign them yourself. We are going to use the second option, which means all certificates will be self-signed, but that's totally fine for the purpose of this lesson.
All instructions for creating the certificates can be found in the Docker docs. To simplify this a little, we created a small script that executes all the commands for you.
All the steps below assume you are going to use the script. The script is non-interactive, meaning you do not have to enter anything during execution. The generated certificates won’t be password protected and are valid for 50 years.
Create a directory called .docker in your home directory. This directory is the default directory where the Docker CLI stores all its information.
$ mkdir -p ~/.docker
Clone the script into the previously created directory.
$ git clone https://gist.github.com/6f6b9a85e136b37cd52983cb88596158.git ~/.docker/
Change into the directory.
$ cd ~/.docker/
Make the script executable.
$ chmod +x genCerts.sh
Then we need to adjust a few things within the script.
$ nano genCerts.sh
Adjust HOST to match your hostname and the last IP of the HOSTS string to match your host's IP address.
This is how it looks for my setup.
Now we are ready to execute the script.
$ sh genCerts.sh
The output should look somewhat like this:
# Start
# Generate CA private and public keys
Generating RSA private key, 4096 bit long modulus (2 primes)
............++++
...............++++
e is 65537 (0x010001)
Create a server key
Generating RSA private key, 4096 bit long modulus (2 primes)
.++++
..........++++
e is 65537 (0x010001)
Create certificate signing request
Sign the public key with CA
Signature ok
subject=CN = cybus.io
Getting CA Private Key
Create a client key and certificate signing request
Generating RSA private key, 4096 bit long modulus (2 primes)
.....++++
.......++++
e is 65537 (0x010001)
Make the key suitable for client authentication
Generate the signed certificate
Signature ok
subject=CN = client
Getting CA Private Key
Remove the two certificate signing requests and extensions config
removed 'client.csr'
removed 'server.csr'
removed 'extfile.cnf'
removed 'extfile-client.cnf'
To verify that all certificates have been generated successfully, we inspect the content of the directory.
These files should be present. If there are more files than this, that’s no issue.
ca-key.pem ca.pem ca.srl cert.pem genCerts.sh key.pem server-cert.pem server-key.pem
The last step is to note the full path where the certificates live.

$ pwd

This is the output in my case; yours will look a little different.
With all the necessary certificates in place, we have to make them available to the Docker daemon. We can find the location of the responsible configuration file by checking the status of the Docker service.
$ sudo systemctl status docker.service
As stated in the output, the configuration file is located at /lib/systemd/system/docker.service
● docker.service - Docker Application Container Engine
     Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
     Active: active (running) since Mon 2022-05-02 10:26:56 EDT; 33s ago
TriggeredBy: ● docker.socket
       Docs: https://docs.docker.com
   Main PID: 468 (dockerd)
      Tasks: 9
     Memory: 109.2M
        CPU: 307ms
     CGroup: /system.slice/docker.service
             └─468 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
To adjust the configuration to our needs, we are going to open the configuration using sudo privileges.
$ sudo nano /lib/systemd/system/docker.service
Find the line starting with ExecStart=/usr/bin/dockerd -H fd:// and append the following content to it. Be sure to use the correct paths for your setup.
-H tcp://0.0.0.0:2376 --tlsverify=true --tlscacert=/home/jan/.docker/ca.pem --tlscert=/home/jan/.docker/server-cert.pem --tlskey=/home/jan/.docker/server-key.pem
For me the complete line looks like this.
ExecStart=/usr/bin/dockerd -H fd:// -H tcp://0.0.0.0:2376 --tlsverify=true --tlscacert=/home/jan/.docker/ca.pem --tlscert=/home/jan/.docker/server-cert.pem --tlskey=/home/jan/.docker/server-key.pem --containerd=/run/containerd/containerd.sock
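As an aside, the dockerd documentation also allows setting the same TLS options in /etc/docker/daemon.json instead of the systemd unit. The following is only a sketch of that alternative, not the route we take in this lesson. Be aware that if you configure "hosts" here, you must remove the -H fd:// flag from the ExecStart line, because the daemon refuses to start when hosts are specified both as a flag and in the configuration file.

```json
{
  "tlsverify": true,
  "tlscacert": "/home/jan/.docker/ca.pem",
  "tlscert": "/home/jan/.docker/server-cert.pem",
  "tlskey": "/home/jan/.docker/server-key.pem",
  "hosts": ["fd://", "tcp://0.0.0.0:2376"]
}
```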
Reload the systemd configuration and restart the Docker service.
$ sudo systemctl daemon-reload
$ sudo systemctl restart docker
Now we can verify our changes did take effect.
$ sudo systemctl status docker.service
● docker.service - Docker Application Container Engine
     Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
     Active: active (running) since Tue 2022-05-03 04:56:12 EDT; 2min 32s ago
TriggeredBy: ● docker.socket
       Docs: https://docs.docker.com
   Main PID: 678 (dockerd)
      Tasks: 9
     Memory: 40.8M
        CPU: 236ms
     CGroup: /system.slice/docker.service
             └─678 /usr/bin/dockerd -H fd:// -H tcp://0.0.0.0:2376 --tlsverify=true --tlscacert=/home/jan/.docker/ca.pem --tlscert=/home/jan/.docker/server-cert.pem --tlskey=/home/jan/.docker/server-key.pem --containerd=/run/containerd/containerd.sock
Now we can use the Docker CLI to connect to the Docker daemon on the specified port. The important part is to use --tlsverify=true, as this tells the Docker CLI to use the generated certificates located in our home directory (~/.docker).
Remember to replace the IP address in the second line with your own.
$ docker -H 127.0.0.1:2376 --tlsverify=true version
$ docker -H 172.16.0.131:2376 --tlsverify=true version
This is the output of both commands on my system.
Client: Docker Engine - Community
 Version:           20.10.14
 API version:       1.41
 Go version:        go1.16.15
 Git commit:        a224086
 Built:             Thu Mar 24 01:48:21 2022
 OS/Arch:           linux/amd64
 Context:           default
 Experimental:      true

Server: Docker Engine - Community
 Engine:
  Version:          20.10.14
  API version:      1.41 (minimum version 1.12)
  Go version:       go1.16.15
  Git commit:       87a90dc
  Built:            Thu Mar 24 01:46:14 2022
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.5.11
  GitCommit:        3df54a852345ae127d1fa3092b95168e4a88e2f8
 runc:
  Version:          1.0.3
  GitCommit:        v1.0.3-0-gf46b6ba
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0
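As a convenience, the Docker CLI also honors a set of environment variables, so you do not have to repeat -H and --tlsverify on every call. This is standard, documented CLI behavior; the IP address below is the example host from this lesson, so substitute your own.

```shell
# Point the Docker CLI at the remote, TLS-secured daemon.
export DOCKER_HOST=tcp://172.16.0.131:2376
export DOCKER_TLS_VERIFY=1
# Directory containing ca.pem, cert.pem and key.pem:
export DOCKER_CERT_PATH=$HOME/.docker

# From now on a plain invocation talks to the remote daemon, e.g.:
# docker version
```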
The final step is to set up the Docker sensor inside PRTG. This should be fairly easy to accomplish by following the instructions at https://www.paessler.com/manuals/prtg/docker_container_status_sensor.
This article covers Docker, including the following topics:
Maybe this sounds familiar: you have been assigned a task in which you had to deploy a complex piece of software onto an existing infrastructure. As you know, there are a lot of variables to this which might be out of your control: the operating system, pre-existing dependencies, perhaps even interfering software. And even if the environment is perfect at the moment of deployment, what happens after you are done? Living systems constantly change. New software is introduced while old and outdated software and libraries are removed. Parts of the system that you rely on today might be gone tomorrow.
This is where virtualization comes in. It used to be best practice to create isolated virtual computer systems, so-called virtual machines (VMs), which simulate independent systems with their own operating systems and libraries. Using these VMs you can run any kind of software in a separate and clean environment without the fear of collisions with other parts of the system. You can emulate the exact hardware you need, install the OS you want and include all the software you depend on at just the right version. It offers great flexibility.
It also means that these VMs are very demanding on your host system. The hardware has to be powerful enough to provide virtual hardware for your virtual systems. An operating system also has to be created and installed for every virtual system you use. And even though they might run on the same host, sharing resources between VMs is just as inconvenient as between real machines.
Enter the container approach and its most popular implementation, Docker. Simply put, Docker enables you to isolate your software into containers. The only thing you need is a running instance of Docker on your host. Even better: all the necessary resources like OS and libraries can not only be deployed with your software, they can even be shared between individual instances of your containers running on the same system! This is a big improvement over regular VMs. Sounds too good to be true?
Well, even though Docker comes with everything you need, it is still up to you to ensure consistency and reproducibility of your own containers. In the following article I will slowly introduce you to Docker and give you the basic knowledge necessary to be part of the containerized world.
Before we can start creating containers we first have to get Docker running on our system. Docker is available for Linux, Mac and just recently for Windows 10. Just choose the version that is right for you and come back right here once you are done:
Please note that the official documentation contains instructions for multiple Linux distributions, so just choose the one that fits your needs.
Even though the workflow is very similar on all platforms, the rest of the article assumes that you are running a Unix environment. Commands and scripts can vary when you are running on Windows 10.
Got Docker installed and ready to go? Great! Let's get our hands on creating the first container. Most tutorials will start off by running the tried and true "Hello World" example, but chances are you already did that when you were installing Docker.
So let's start something from scratch! Open your shell and type the following:
docker run -p 8080:80 httpd
If everything went well you will get a response like this:
Unable to find image 'httpd:latest' locally
latest: Pulling from library/httpd
f17d81b4b692: Pull complete
06fe09255c64: Pull complete
0baf8127507d: Pull complete
07b9730387a3: Pull complete
6dbdee9d6fa5: Pull complete
Digest: sha256:90b34f4370518872de4ac1af696a90d982fe99b0f30c9be994964f49a6e2f421
Status: Downloaded newer image for httpd:latest
AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 172.17.0.2. Set the 'ServerName' directive globally to suppress this message
AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 172.17.0.2. Set the 'ServerName' directive globally to suppress this message
[Mon Nov 12 09:15:49.813100 2018] [mpm_event:notice] [pid 1:tid 140244084212928] AH00489: Apache/2.4.37 (Unix) configured -- resuming normal operations
[Mon Nov 12 09:15:49.813536 2018] [core:notice] [pid 1:tid 140244084212928] AH00094: Command line: 'httpd -D FOREGROUND'
Now there is a lot to go through, but first open a browser and head over to http://localhost:8080.
Yeah, we just did that!
What we just achieved: we set up and started a simple HTTP server locally on port 8080 with less than 25 typed characters. But what did we write exactly? Let's analyze the command a bit more closely:
docker – This states that we want to use the Docker command line interface (CLI).
run – The first actual command. It states that we want to run a command in a new container.
-p 8080:80 – The publish flag. Here we declare which Docker-internal port (our container's) we want to publish to the host (the PC you are sitting at). The first number declares the port on the host (8080), the second the port in the Docker container (80).
httpd – The image we want to use. This contains the actual server logic and all dependencies.
Okay, so what is an image and where does it come from? Quick answer: an image is a template that contains instructions for creating a container. Images can be hosted locally or online. Our httpd image was hosted on the Docker Hub. We will talk more about the official Docker registry in the Exploring the Docker Hub part of this lesson.
The Docker CLI contains a thorough manual. Whenever you want more details about a certain command, just add --help behind the command and you will get its man page.
Great! Now that we understand what we did we can take a look at the output.
Unable to find image 'httpd:latest' locally
latest: Pulling from library/httpd
f17d81b4b692: Pull complete
06fe09255c64: Pull complete
0baf8127507d: Pull complete
07b9730387a3: Pull complete
6dbdee9d6fa5: Pull complete
Digest: sha256:90b34f4370518872de4ac1af696a90d982fe99b0f30c9be994964f49a6e2f421
Status: Downloaded newer image for httpd:latest
The httpd image we used was not found locally, so Docker automatically downloaded the image and all dependencies for us. It also provides us with a digest for the image we just pulled. This string starting with sha256 can be very useful! Imagine that you create software that is based upon a certain image. By pinning the image to this digest you make sure that you are always pulling and using the same version, thus ensuring reproducibility and improving the stability of your software.
While the rest of the output is internal output from our small webserver, you might have noticed that the command prompt did not return to input once the container started. This is because we are currently running the container in the foreground. All output that our container generates will be visible in our shell window while it runs. You can try this by reloading the webpage of our webserver. Once the connection is re-established, the container should log something similar to this:
172.17.0.1 - - [12/Nov/2018:09:17:12 +0000] "GET / HTTP/1.1" 304 -
You might also have noticed that the IP address is not the one of your local machine. This is because Docker creates containers in their own Docker network. Explaining Docker networks is out of scope for this tutorial, so I will simply point you to the official documentation about Docker networks for the time being.
For now, stop the container and return to the command prompt by pressing ctrl+c while the shell window is in focus.
Now that we know how to run a container, it is clear that having it occupy an active window isn't always practical. Let's start the container again, but this time we will add a few things to the command:
docker run --name serverInBackground -d -p 8080:80 httpd
When you run the command you will notice two things: first, the command executes way faster than the first time. This is because the image we are using was already downloaded last time and is now hosted locally on our machine. Second, there is no output anymore besides a strange string of characters. This string is the ID of our container and can be used to refer to its running instance.
So what are those two new flags?
--name – This is a simple one. It attaches a human-readable name to our container instance. While the container ID is nice to work with on a deeper level, an actual name makes it easier for us humans to distinguish between running containers. Just keep in mind that IDs are unique, but your attached name might not be!
-d – This stands for detach and makes our container run in the background. It also prints the container ID.
Sharing resources: if you want to, you can execute the above command with different names and ports as many times as you wish. While you can have multiple containers running httpd, they will all share the same image. There is no need to download or copy what you already have on your host.
So now that we started our container, let's make sure that it is actually running. Last time we opened our browser and accessed the webpage hosted on the server. This time let's take another approach. Type the following in the command prompt:

docker ps
The output should look something like this:
CONTAINER ID   IMAGE   COMMAND              CREATED          STATUS          PORTS                  NAMES
018acb9dbbbd   httpd   "httpd-foreground"   11 minutes ago   Up 11 minutes   0.0.0.0:8080->80/tcp   serverInBackground
ps – The ps command prints all running container instances, including information about ID, image, ports and even names. The output can be filtered by adding flags to the command. To learn more, just type docker ps --help.
Another important ability is getting low-level information about the configuration of a certain container. You can get this information by typing:
docker inspect serverInBackground
Notice that it doesn't matter whether you use the attached name or the container ID. Both will give you the same result.
The output of this command is huge and includes everything from information about the image itself to network configuration.
You can execute the same command with an image ID to inspect the template configuration of the image.
To learn more about inspecting Docker containers, please refer to the official documentation.
We can go even deeper and interact with the internals of the container. Say we want to try out changes to our running container without having to shut it down and restart it every time. How do we approach this?
Like a lot of Docker images, httpd is based upon a Linux image itself. In this case httpd is running a slim version of Debian in the background. So being a Linux system we can access a shell inside the container. This gives us a working environment that we are already familiar with. Let’s jump in and try it:
docker exec -it -u 0 serverInBackground bash
There are a few new things to talk about:
exec – This allows us to execute a command inside a running container.
-it – These are actually two flags; -i -t would have the same result. i stands for interactive (we need this if we want to use the shell), while t stands for TTY and creates a pseudo version of the teletype terminal, a simple text-based terminal.
-u 0 – This flag specifies the UID of the user we want to log in as. 0 opens the connection as the root user.
serverInBackground – The container name (or ID) that we want the command to run in.
bash – At the end we define what we actually want to run in the container, in our case the bash environment. Notice that bash happens to be installed in this image. This might not always be the case! To be safe you can use sh instead, which falls back to a very stripped-down shell environment.
When you execute the command you will see a new shell inside the container. Try moving around in the container and use commands you are familiar with. You will notice that you are missing a lot of capabilities. This is to be expected on a distribution that is supposed to be as small as possible. Thankfully the httpd image includes the apt package manager, so you can extend its capabilities. When you are done, you can exit the shell again by typing exit.
Sometimes something inside your containers just won’t work and you can’t find out why by blindly stepping through your configuration. This is where the Docker logs come in.
To see logs from a running container just type this:
docker logs serverInBackground -f --tail 10
Once again there is a new command and a few new flags for us:
logs– This command fetches the logs printed by a specific container.
-f– Follow the log output. This is very handy for debugging. With this flag you get a real time update of the container logs while they happen.
--tail – Chances are your container has been running for days if not months. Printing all the logs is rarely necessary, if not outright bad practice. With the --tail flag you can specify the number of lines to be printed from the end of the log.
You can quit the log session by pressing ctrl+c while the shell is in focus.
If you have to shut down a running container, the most graceful way is to stop it. The command is pretty straightforward:
docker stop serverInBackground
This will try to shut down the container gracefully and kill it if it does not respond. Keep in mind that the stopped container is not gone! You can restart it by simply writing:
docker start serverInBackground
Sometimes, if something went really wrong, your only choice is to take a container down as quickly as possible:
docker kill serverInBackground
Even though this will get the job done, killing a container might lead to unwanted side effects because it is not shut down correctly.
As we already mentioned, stopping a container does not remove it. To see that a stopped container is still managed in the background, just type the following:
docker container ls -a
container – This accesses the container management commands.
ls – Outputs a list of containers according to the filters supplied.
-a – Outputs all containers, even those not running.
CONTAINER ID   IMAGE   COMMAND              CREATED              STATUS                     PORTS   NAMES
ee437314785f   httpd   "httpd-foreground"   About a minute ago   Exited (0) 8 seconds ago           serverInBackground
As you can see even though we stopped the container it is still there. To get rid of it we have to remove it.
Just run this command:
docker rm serverInBackground
When you now run docker container ls -a again, you will notice that the container named serverInBackground is gone. Keep in mind that this only removes the stopped container! The image you used to create it will still be there.
The time might come when you don't need a certain image anymore. You can remove an image the same way you remove a container. To get the ID of the image you want to remove, run the docker image ls command from earlier. Once you know what you want to remove, type the following command:
docker rmi <IMAGE-ID>
This will remove the image, provided it is no longer needed by running Docker instances.
You might have asked yourself where this mysterious httpd image comes from or how I know which Linux distro it is based on. Every image you use has to be hosted somewhere. This can be done locally on your machine, in a dedicated repository in your company, or online through a hosting service. The official Docker Hub is one of those repositories. Head over to the Docker Hub and take a moment to browse the site. When creating your own containers it is always a good idea not to reinvent the wheel. There are thousands of images out there, ranging from small web servers (like our httpd image) to full-fledged operating systems ready at your disposal. Just type a keyword into the search field at the top of the page ("web server", for example) and take a stroll through the offers available, or just check out the httpd repo. Most of the images hosted here offer help regarding dependencies or installation. Some of them even include information about something called a Dockerfile.
While creating containers from the command line is pretty straightforward, there are situations in which you don't want to configure them by hand. Luckily we have another option: the Dockerfile. If you have already taken a look at the example files provided for httpd, you might have an idea of what to expect.
So go ahead and create a new file called 'Dockerfile' (mind the capitalization). We will add some content to this file:
FROM httpd:2.4
COPY ./html/ /usr/local/apache2/htdocs/
This is a very bare-bones Dockerfile. It basically just says two things:
FROM – Use the provided image at the specified version for this container.
COPY – Copy the content from the first path on the host machine to the second path in the container.
So what the Dockerfile currently says is: use the image known as httpd in version 2.4, copy all the files from the subfolder './html' to '/usr/local/apache2/htdocs/' and create a new image containing all my changes.
For extra credit: remember the digest from before? You can use the digest to pin our new image to the exact httpd image version we used in the beginning. The syntax for this is:

FROM httpd@sha256:90b34f4370518872de4ac1af696a90d982fe99b0f30c9be994964f49a6e2f421
Now, it would be nice to have something that can actually be copied over. Create a folder called html and create a small index.html file in there. If you don't feel like writing one on your own, just use mine:
<!DOCTYPE html>
<html>
  <body>
    <h1>That's one small step for the web,</h1>
    <p>one giant leap for containerization.</p>
  </body>
</html>
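If you prefer staying in the shell, the folder and file can be created in one go. This is just a here-document sketch of the same index.html from above:

```shell
# Create the html folder next to the Dockerfile and write index.html into it.
mkdir -p html
cat > html/index.html <<'EOF'
<!DOCTYPE html>
<html>
  <body>
    <h1>That's one small step for the web,</h1>
    <p>one giant leap for containerization.</p>
  </body>
</html>
EOF
```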
Open a shell window in the exact location where you placed your Dockerfile and html folder and type the following command:
docker build . -t my-new-server-image
build – The command for building images from Dockerfiles.
. – The build command expects a path as its second parameter. The dot refers to the current location of the shell prompt.
-t – The tag flag sets a name for the image so that it can be referred to by that name.
The shell output should look like this:
Sending build context to Docker daemon  3.584kB
Step 1/2 : FROM httpd:2.4
 ---> 55a118e2a010
Step 2/2 : COPY ./html/ /usr/local/apache2/htdocs/
 ---> Using cache
 ---> 867a4993670a
Successfully built 867a4993670a
Successfully tagged my-new-server-image:latest
You can make sure that your newly created image is hosted on your local machine by running:
docker image ls
This will show you all images hosted on your machine.
We can finally run our modified httpd image by simply typing:
docker run --name myModifiedServer -d -p 8080:80 my-new-server-image
This command should look familiar by now. The only thing we changed is that we are not using the httpd image anymore. Instead we are referring to our newly created 'my-new-server-image'.
Let’s see if everything is working by opening the Server in a browser.
I think it is time for us to pat ourselves on the back. We did good today!
By the time you reached these lines you should be able to create, monitor and remove containers from preexisting images as well as create new ones using Dockerfiles. You should also have a basic understanding of how to inspect and debug running containers.
As was to be expected from a basic lesson there is still a lot to cover. A good place to start is the Docker documentation itself. Another topic we didn’t even touch is Docker Compose, which provides an elegant way to orchestrate groups of containers.