
Falco Driverkit with Docker on Debian
We use different technologies on a daily basis. Tools like Vagrant, Terraform, Ansible, and many more allow us to create and destroy digital resources in a matter of minutes, if not seconds. However, if you keep changing your running environment, you might also need to adapt your workloads to those changes. This is especially true when you deploy applications that are tightly coupled to the operating system.
In other words, every time you deploy an application like Falco, there's a chance that you need to compile a new kernel module or eBPF probe that matches the underlying kernel. This is the first of a series of posts where you will learn some interesting techniques related to how Falco generates the much-needed driver and how you can make it available for your deployments.
Falco on Docker
There are many ways to run Falco: as a service, as a local container, as a Pod in Kubernetes, etc. In any case, if what we want is to use Falco to detect threats based on syscalls, we will need a driver compiled for the specific kernel running on the machine, be it a physical machine, a virtual one, or a Kubernetes node in the cloud.
Launching Falco as a container
The Falco image embeds a script, /usr/bin/falco-driver-loader, that will automatically try to find and download a kernel module or an eBPF probe. If that isn't possible, it will try to compile one inside the container itself. We will learn a bit more about this process and how to control it.
Let's launch a fresh instance of falco on our local Docker service and take a look at its output.
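If you want to follow along, a command along these lines should work. The container name falco and the set of host mounts are our own choices, loosely based on the upstream instructions for running the Falco container; the /boot, /lib/modules, /usr, and /etc mounts are what the driver loader inspects when it falls back to local compilation:

```bash
docker run --rm -i -t \
    --name falco \
    --privileged \
    -v /var/run/docker.sock:/host/var/run/docker.sock \
    -v /proc:/host/proc:ro \
    -v /boot:/host/boot:ro \
    -v /lib/modules:/host/lib/modules:ro \
    -v /usr:/host/usr:ro \
    -v /etc:/host/etc:ro \
    docker.io/falco/falco:0.32.2
```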
There are some important takeaways from this output:
- The driver version this image tries to load is 2.0.0+driver. This information will be really useful when we need to compile and share the driver with the falco container.
- By default, the container will look for a kernel module. It is possible to switch to an eBPF probe by using an environment variable, as you'll see later in this post.
- The falco-driver-loader script always removes the driver from memory and tries to load a current one. This is done for security reasons, and the way to avoid it is not running this script when creating the container. More on this later, too.
- After looking in the system for a previously installed driver, the script tries to download it from the URL https://download.falco.org. Unfortunately, it doesn't seem to find it there and falls back to the local compilation method.
- When the script tries to compile the driver inside the container, it doesn't succeed because we haven't fulfilled one important prerequisite: installing the kernel headers on the host machine. In this post, we won't address that method, but you can always refer to the documentation.
Using Falco Driverkit
As mentioned, there are different ways to obtain a valid driver: downloading it from https://download.falco.org, compiling it via the falco-driver-loader script, or the method we'll explain here: using driverkit.
We don't intend this post to be an exhaustive guide to driverkit. That's also why we've chosen a relatively easy and well-tested target operating system: Debian.
First of all, we need the driverkit tool, which we'll compile ourselves. We can download the source code from https://github.com/falcosecurity/driverkit.
When compiling a tool, we like using a temporary container. In this case, we'll start our container using the docker.io/golang:1.19 image and a sleep process until we're done. The ./driverkit directory will help us extract the binary to the host filesystem. Feel free to use any other method you prefer, like docker cp.
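Something along these lines does the trick; the container name driverkit-builder is an arbitrary choice of ours, and the ./driverkit host directory is mounted as /export inside the container:

```bash
mkdir -p ./driverkit

docker run -d --rm \
    --name driverkit-builder \
    -v "$(pwd)/driverkit:/export" \
    docker.io/golang:1.19 \
    sleep infinity
```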
Check that the container has been successfully created and still runs:
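For example, filtering by the name we chose above:

```bash
docker ps --filter name=driverkit-builder
```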
Next, create a shell with a terminal in the container:
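Using the same hypothetical container name:

```bash
docker exec -it driverkit-builder /bin/bash
```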
Remember, you are in the container context now. Whatever you do here will be lost unless you copy it to the /export directory. We will clone the driverkit code and compile it using the following commands:
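Roughly, that means cloning the repository, building it, and copying the resulting binary to /export. The build target and the location of the produced binary may differ between driverkit versions, so treat this as a sketch and check the repository's README and Makefile:

```bash
# Inside the builder container
git clone https://github.com/falcosecurity/driverkit.git
cd driverkit

# Build the binary (see the repository README for the current build targets)
make

# Copy the binary out to the host through the mounted directory;
# adjust the source path if your Makefile places it elsewhere (e.g. _output/bin/)
cp driverkit /export/
```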
Once we are done with the Golang container, we can stop it, and it will be automatically removed thanks to the --rm parameter we used to start it.
Creating a configuration file for the driver request
Time to create a configuration file. Do you remember the driver version, 2.0.0+driver? We will use that and some additional information to create the configuration file.
The resulting file should look like this:
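Since the exact values depend on your host, treat the following as a sketch: the kernel release (5.10.0-14-cloud-amd64) and kernel version (1) match the module name used later in this post, while the debian.yaml file name and the drivers/ output layout are our own choices, picked so the paths line up with what the webserver will serve later:

```bash
mkdir -p "drivers/2.0.0+driver/x86_64"

cat > debian.yaml <<'EOF'
# Driver version reported by the Falco image
driverversion: 2.0.0+driver
# Kernel of the target host (check it with: uname -r)
kernelrelease: 5.10.0-14-cloud-amd64
kernelversion: 1
target: debian
output:
  module: drivers/2.0.0+driver/x86_64/falco_debian_5.10.0-14-cloud-amd64_1.ko
  probe: drivers/2.0.0+driver/x86_64/falco_debian_5.10.0-14-cloud-amd64_1.o
EOF
```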
In case you want to use a version previous to Falco 0.32.2, you might need to remove the x86_64/ string from the probe path. This is due to the path expected by the falco-driver-loader script. These paths will be served via an HTTP server at a later stage, so make sure they match in both steps.
This is the same file we will pass to driverkit. If the driver compiles successfully, we should see output confirming it after a few seconds. Be patient.
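Assuming the binary was extracted to ./driverkit/ and the configuration saved as debian.yaml (both our naming choices), the invocation looks roughly like this; driverkit will launch its builder container through your local Docker daemon:

```bash
./driverkit/driverkit docker -c debian.yaml
```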
Make sure you use either the .yml or .yaml suffix. Otherwise, you'll get an error like:
Alternatively, we could have used a bunch of parameters in the command line, like:
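For reference, a flag-based invocation equivalent to the configuration above would look something like this (same values, same output paths):

```bash
./driverkit/driverkit docker \
    --driverversion "2.0.0+driver" \
    --kernelrelease "5.10.0-14-cloud-amd64" \
    --kernelversion "1" \
    --target "debian" \
    --output-module "drivers/2.0.0+driver/x86_64/falco_debian_5.10.0-14-cloud-amd64_1.ko" \
    --output-probe "drivers/2.0.0+driver/x86_64/falco_debian_5.10.0-14-cloud-amd64_1.o"
```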
Either way, if driverkit manages to compile the drivers, you can continue with the next step. Otherwise, you might need to adjust some of the parameters in the configuration or even customize your builder image, but we will explain that in a different post where we deep dive into driverkit.
Launching Falco with the new driver
There are different ways to load the driver when running Falco. We'll show you two of them: loading the driver manually, and leaving that task to the script embedded in the container image.
Loading the driver manually
A kernel module only needs to be loaded once. So, if we load it manually before starting the container, Falco doesn't need to do it again.
There are two ways of achieving that, and both require avoiding the execution of the falco-driver-loader script:
- Setting the SKIP_DRIVER_LOADER environment variable to any value when creating the container. By doing so, the container entrypoint will skip the falco-driver-loader script.
- Using the image docker.io/falco/falco-no-driver, which doesn't contain that script.
First, try to load the driver on the host. Look for the .ko file in the directory structure we created and load it using insmod, for instance. If the compilation was successful and the chosen kernel version was the right one, you shouldn't see any message once the module is loaded. Don't forget to do it as the root user (i.e., via sudo).
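Assuming the output layout we used above, loading it could look like this; the lsmod check is just an optional sanity test:

```bash
sudo insmod ./drivers/2.0.0+driver/x86_64/falco_debian_5.10.0-14-cloud-amd64_1.ko

# Optional: confirm the module is loaded
lsmod | grep falco
```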
This first method of starting the falco container uses the docker.io/falco/falco:0.32.2 image, passing the SKIP_DRIVER_LOADER variable. We've set it to 1, but the script doesn't check its value.
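A sketch of that run follows; the container name falco and the reduced set of host mounts are our choices, so adapt them to your environment:

```bash
docker rm -f falco 2>/dev/null

docker run -d --name falco \
    --privileged \
    -e SKIP_DRIVER_LOADER=1 \
    -v /var/run/docker.sock:/host/var/run/docker.sock \
    -v /proc:/host/proc:ro \
    docker.io/falco/falco:0.32.2
```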
Observe that we're removing any existing container with that name before starting ours, but the container image remains.
The second method uses the docker.io/falco/falco-no-driver image, so, as you can expect, it won't try to reload the driver this time.
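Again as a sketch, and pinning the image to the same 0.32.2 tag we used before (our assumption):

```bash
docker rm -f falco 2>/dev/null

docker run -d --name falco \
    --privileged \
    -v /var/run/docker.sock:/host/var/run/docker.sock \
    -v /proc:/host/proc:ro \
    docker.io/falco/falco-no-driver:0.32.2
```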
This time, Docker will pull the image since we haven't used it yet.
Sharing the probe and driver with the Falco container
This method is a bit more complicated than the previous one, but it will also give us the flexibility to deploy falco at scale.
The idea is simple, though. After starting your favorite webserver and publishing the ./drivers directory that we created before, we'll tell the falco container to use it as a repository and download the driver from there.
To keep things clean, we've used the docker.io/python:latest container image, which includes the Python http.server module. If you already have Python installed on your system, you can use it directly. Just remember to define a port accessible to the falco container and pass the right IP address.
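A minimal sketch of that webserver container; the driver-http name is our choice, and port 8000 matches the URL you'll see in the output later:

```bash
docker run -d --rm \
    --name driver-http \
    -v "$(pwd)/drivers:/drivers:ro" \
    -w /drivers \
    docker.io/python:latest \
    python -m http.server 8000
```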
Our Python web server is now available and offers the drivers to any local container that might need them. Retrieve the IP address of this container for later use:
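With the hypothetical container name from the previous step, a one-liner like this returns the address (172.17.0.2 in our case):

```bash
docker inspect -f '{{ .NetworkSettings.IPAddress }}' driver-http
```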
It's always a good practice to test that the drivers are in the right place and accessible through the webserver.
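For example, comparing checksums of the file on disk and the file fetched through the webserver, using the IP address obtained above:

```bash
# Checksum of the module on the host filesystem
sha256sum drivers/2.0.0+driver/x86_64/falco_debian_5.10.0-14-cloud-amd64_1.ko

# Checksum of the same file fetched through the webserver
curl -s "http://172.17.0.2:8000/2.0.0%2Bdriver/x86_64/falco_debian_5.10.0-14-cloud-amd64_1.ko" | sha256sum
```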
As you can see, they are accessible and identical.
These values will be different depending on the version of the kernel and the Falco drivers.
Loading the kernel module
Let's start with the kernel module. In this case, the only variable we need to pass is DRIVERS_REPO, which we carefully prepared in the previous step.
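As before, this is a sketch with our own container name and mounts; the important part is pointing DRIVERS_REPO at the webserver container:

```bash
docker rm -f falco 2>/dev/null

docker run -d --name falco \
    --privileged \
    -e DRIVERS_REPO="http://172.17.0.2:8000" \
    -v /var/run/docker.sock:/host/var/run/docker.sock \
    -v /proc:/host/proc:ro \
    docker.io/falco/falco:0.32.2
```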
The output is similar to the previous runs, but this time we can see:
* Trying to download a prebuilt falco module from 172.17.0.2:8000/2.0.0%2Bdriver/x86_64/falco_debian_5.10.0-14-cloud-amd64_1.ko
* Download succeeded
* Success: falco module found and inserted
The module has been successfully loaded and Falco can start properly.
Loading the eBPF Probe
For this, we will make use of another variable, FALCO_BPF_PROBE. As with the SKIP_DRIVER_LOADER variable, its value is not as relevant as the fact that the variable has been defined. We also need to keep the DRIVERS_REPO variable, since the falco-driver-loader script will look for the probe at that URL.
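A sketch of that run, with both variables set (FALCO_BPF_PROBE deliberately left empty) and the same assumptions about names and mounts as before:

```bash
docker rm -f falco 2>/dev/null

docker run -d --name falco \
    --privileged \
    -e FALCO_BPF_PROBE="" \
    -e DRIVERS_REPO="http://172.17.0.2:8000" \
    -v /var/run/docker.sock:/host/var/run/docker.sock \
    -v /proc:/host/proc:ro \
    docker.io/falco/falco:0.32.2
```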
This time the output is easier to read: The driver is set to bpf, the URL of the HTTP container points to our local webserver, and it also shows where it downloads the probe before starting Falco.
Debugging
As a final tip, if you want to start a container based on the regular falco image to test the falco-driver-loader script, we recommend starting the container with the --entrypoint /bin/bash parameter. This keeps the /docker-entrypoint.sh script (which triggers /usr/bin/falco-driver-loader) from being executed, and you'll have a much more comfortable environment to work with.
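For example, something along these lines drops you into a shell inside the image, from which you can run /usr/bin/falco-driver-loader by hand:

```bash
docker run --rm -i -t \
    --privileged \
    --entrypoint /bin/bash \
    -v /var/run/docker.sock:/host/var/run/docker.sock \
    -v /proc:/host/proc:ro \
    -v /boot:/host/boot:ro \
    -v /lib/modules:/host/lib/modules:ro \
    -v /usr:/host/usr:ro \
    -v /etc:/host/etc:ro \
    docker.io/falco/falco:0.32.2
```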
Conclusion
Falco requires tapping into the kernel to retrieve useful information from it. For that, it has two methods: loading a kernel module in the traditional way, or using an eBPF probe. Both of them instrument the kernel and provide the functionality to retrieve the relevant data.
Due to the sheer number of combinations of Linux kernels and distributions, it is extremely difficult to offer prebuilt drivers for every possible kernel as downloadable assets. Besides, in some environments, compiling the driver of such a critical component yourself will be a requirement. Learning how to use Falco Driverkit will help you easily deploy Falco in more environments.