Falco ecosystem

Falco’s rich ecosystem of plugins and integrations with the cloud native stack helps you strengthen your organization’s security posture. This page showcases plugins and integrations, as well as success stories from end users and from vendors whose products build on Falco.

Falco’s ability to ingest and analyze events can be extended with plugins: shared libraries that add new streams of events as inputs to Falco and enrich your events with more contextual information.
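As an illustration, plugins are declared in falco.yaml and then explicitly enabled. The fragment below is a sketch; the plugin names, library paths, and parameters are examples and will differ per installation:

```yaml
# falco.yaml (sketch) -- declare available plugins
plugins:
  - name: k8saudit
    library_path: libk8saudit.so
    init_config: ""
    open_params: "http://:9765/k8s-audit"
  - name: json
    library_path: libjson.so

# Only the plugins listed here are actually loaded at startup
load_plugins: [k8saudit, json]
```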

You can connect Falco with the rest of your ecosystem by forwarding its output events to 50+ targets with Falcosidekick.
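For example, a minimal falco.yaml fragment that forwards events to a Falcosidekick instance over HTTP could look like the following (the URL assumes Falcosidekick is reachable locally on its default port, 2801):

```yaml
# falco.yaml (sketch) -- emit JSON and push events to Falcosidekick
json_output: true
http_output:
  enabled: true
  url: "http://localhost:2801/"
```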

gVisor

Falco can be used out of the box with gVisor sandboxes; see the Falco documentation on gVisor support for more information.
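A rough sketch of the setup, assuming a recent Falco release with gVisor support (the exact flags, file paths, and runsc wiring may differ in your environment):

```shell
# Generate a gVisor pod-init configuration for Falco (path is illustrative)
falco --gvisor-generate-config > /etc/falco/pod-init.json

# Point runsc at it, e.g. via runtimeArgs in /etc/docker/daemon.json:
#   "runtimeArgs": ["--pod-init-config=/etc/falco/pod-init.json"]

# Run Falco consuming events from the gVisor sandboxes
falco --gvisor-config /etc/falco/pod-init.json
```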


Helm

You can smoothly install Falco and its ecosystem components in your Kubernetes clusters with our official Helm charts; see the charts documentation for more information.
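A typical installation might look like this (chart values shown are examples; consult the chart's values reference for the full set of options):

```shell
# Add the official falcosecurity chart repository
helm repo add falcosecurity https://falcosecurity.github.io/charts
helm repo update

# Example: install Falco together with Falcosidekick and its web UI
helm install falco falcosecurity/falco \
  --namespace falco --create-namespace \
  --set falcosidekick.enabled=true \
  --set falcosidekick.webui.enabled=true
```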


Many vendors use Falco as part of their product to offer fully managed security services.

Falco elevates threat detection and response in companies and organizations around the world.


Falco FAQs

The Falco drivers support a defined set of system call event types and arguments, documented in the supported events table.

By default, and for performance reasons, Falco only considers a subset of them (indicated in the first column of that table). However, you can make Falco consider all events by using the -A command-line switch.

Enabling more events does not automatically make Falco cover more threats. Without the proper rules in place, many of those events simply look like regular interaction between processes and the kernel.
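As an illustration, turning raw process-spawn events into an alert requires a rule like the following sketch (the condition and output follow standard Falco rule syntax; the rule itself is an example, not one of the shipped defaults):

```yaml
- rule: Shell spawned in a container
  desc: Detect an interactive shell started inside a container
  condition: >
    spawned_process and container and proc.name in (bash, sh, zsh)
  output: >
    Shell spawned in container (user=%user.name container=%container.name
    command=%proc.cmdline)
  priority: WARNING
```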

No. The k8s fields k8s.ns.name and k8s.pod.* (i.e., k8s.pod.name, k8s.pod.id, k8s.pod.labels, and k8s.pod.label.*) are populated with data fetched from the container runtime, not from the Kubernetes API server.

Therefore, they can also be accessed without enabling the Kubernetes Metadata Enrichment functionality (the -k Falco option).
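As an illustration, a rule's output can reference these runtime-populated fields directly. The rule below is a sketch for demonstration, not a shipped default:

```yaml
- rule: TTY shell in Kubernetes pod
  desc: Illustrative rule using the runtime-populated k8s fields
  condition: spawned_process and container and proc.tty != 0
  output: >
    TTY shell in pod (pod=%k8s.pod.name ns=%k8s.ns.name
    image=%container.image.repository)
  priority: NOTICE
```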

Falco's performance overhead varies widely: it typically scales with the load on the server or VM and with the workload's footprint (e.g., network-heavy servers will likely cause Falco to consume significantly more CPU).

This is because Falco hooks into kernel syscall tracepoints: the more syscall invocations occur, the more work has to be done, namely parsing the event in the kernel, sending it to userspace over a ring buffer, parsing it in userspace, and applying Falco's rule filters. This also makes it hard to derive stable performance metrics, as CPU and memory usage fluctuate with the workloads being monitored.

Options available to tune performance

  • Some syscalls are much higher-volume than others, so perform a cost-benefit analysis according to your organization's threat model and security posture. The set of syscalls that is activated is one of the most significant drivers of CPU utilization. In addition, there are techniques for crafting Falco rules more efficiently.
  • Contact your organization's SREs and conduct performance tests in your environment early on, in order to derive budgets and appropriate CPU and memory limits. We recommend always running Falco in cgroups, while taking care on the flip side not to starve the tool.
  • Memory: Falco allocates a ring buffer for each CPU, so the more CPUs you have, the more memory is allocated. On high-load servers you may even need to increase the size of each buffer to avoid kernel-side syscall drops. In addition, Falco builds up process and thread state over time, so memory usage grows accordingly, but it should plateau at some point.
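The per-CPU ring-buffer point above lends itself to a back-of-envelope estimate. The sketch below assumes one buffer per online CPU and an 8 MiB default buffer size (adjust the figure to match your configured buffer size preset); it is a rough upper bound, not an exact accounting of Falco's memory use:

```python
def estimate_ring_buffer_memory(num_cpus: int,
                                buf_size_bytes: int = 8 * 1024 * 1024) -> int:
    """Rough upper bound for kernel ring-buffer memory: one buffer per CPU.

    The 8 MiB default is an assumption mirroring a typical out-of-the-box
    buffer size; change it to match your actual buffer configuration.
    """
    return num_cpus * buf_size_bytes

# A 64-CPU server with default-sized buffers:
total = estimate_ring_buffer_memory(64)
print(total // (1024 * 1024), "MiB")  # 512 MiB
```

This is why large multi-core hosts see noticeably higher baseline memory: the buffer cost multiplies with the CPU count even before any process state accumulates.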

Lastly, while the Falco community is constantly improving and optimizing the tool and exposing more settings in falco.yaml to customize deployments, some factors remain out of its control. Concrete examples: kernel settings alone, or the hardware type, can have a tremendous impact on performance even when everything else is held constant.
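A few of the falco.yaml tunables that affect overhead are sketched below with illustrative values; consult the falco.yaml shipped with your Falco version for the authoritative key names and defaults:

```yaml
# falco.yaml (sketch) -- performance-related tunables, values illustrative
syscall_buf_size_preset: 4        # larger presets reduce drops on busy hosts

base_syscalls:
  custom_set: []                  # restrict tracing to an explicit syscall set
  repair: false

syscall_event_drops:
  actions: [log]                  # how Falco reacts to kernel-side drops
  rate: 0.03333
  max_burst: 1
```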