From 260b96167cfca0d84a3757ecc077b8114b96e0e9 Mon Sep 17 00:00:00 2001 From: Henri DF Date: Tue, 17 May 2016 13:33:57 -0700 Subject: [PATCH 01/20] README: Minor format changes, remove tagline --- README.md | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-) diff --git a/README.md b/README.md index 807e6afb5dd..6cae418929e 100644 --- a/README.md +++ b/README.md @@ -1,7 +1,6 @@ # Sysdig Falco -### *Host Activity Monitoring using Sysdig Event Filtering* -**Table of Contents** +####Table of Contents - [Overview](#overview) - [Rules](#rules) From 38caea4388afbb4448ae45a2bbd27ed8140b875e Mon Sep 17 00:00:00 2001 From: Henri DF Date: Tue, 17 May 2016 13:37:52 -0700 Subject: [PATCH 02/20] README: add "latest release" section --- README.md | 11 +++++++++++ 1 file changed, 11 insertions(+) diff --git a/README.md b/README.md index 6cae418929e..7398ac418d5 100644 --- a/README.md +++ b/README.md @@ -1,5 +1,16 @@ # Sysdig Falco +####Latest release + +**v0.1.0** +Read the [change log](https://github.com/draios/falco/blob/dev/CHANGELOG.md) + +This is the initial falco release. Note that much of falco's code comes from +[sysdig](https://github.com/draios/sysdig), so overall stability is very good +for an early release. On the other hand performance is still a work in +progress. On busy hosts and/or with large rule sets, you may see the current +version of falco using high CPU. Expect big improvements in coming releases. + ####Table of Contents - [Overview](#overview) From 5fe663e62ae477504ee644606bf7599eeb8fff69 Mon Sep 17 00:00:00 2001 From: Henri DF Date: Tue, 17 May 2016 13:41:57 -0700 Subject: [PATCH 03/20] readme: lowercase falco --- README.md | 46 +++++++++++++++++++++++----------------------- 1 file changed, 23 insertions(+), 23 deletions(-) diff --git a/README.md b/README.md index 7398ac418d5..e582da56d69 100644 --- a/README.md +++ b/README.md @@ -26,7 +26,7 @@ Sysdig Falco is a behavioral activity monitor designed to secure your applicatio #### What kind of behaviors can Falco detect? -Falco can detect and alert on any behavior that involves making Linux system calls. Thanks to Sysdig's core decoding and state tracking functionality, Falco alerts can be triggered by the use of specific system calls, their arguments, and by properties of the calling process. For example, you can easily detect things like: +Falco can detect and alert on any behavior that involves making Linux system calls. Thanks to Sysdig's core decoding and state tracking functionality, falco alerts can be triggered by the use of specific system calls, their arguments, and by properties of the calling process. For example, you can easily detect things like: - A shell is run inside a container - A server process spawns a child process of an unexpected type - Unexpected read of a sensitive file (like `/etc/passwd`) @@ -45,12 +45,12 @@ high-level, human-readable language. We've provided a sample rule file `./rules/falco_rules.yaml` as a starting point - you can (and will likely want!) to adapt it to your environment. -When developing rules, one helpful feature is Falco's ability to read trace +When developing rules, one helpful feature is falco's ability to read trace files saved by sysdig. This allows you to "record" the offending behavior -once, and replay it with Falco as many times as needed while tweaking your +once, and replay it with falco as many times as needed while tweaking your rules. 
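A minimal sketch of that record-and-replay loop (the capture filename is a placeholder; the invocation mirrors the "Running falco manually" and regression-test commands used elsewhere in this series, and the `-e` flag replays a saved trace file):

```bash
# Record the offending behavior once with sysdig (Ctrl-C to stop the capture):
sudo sysdig -w suspicious.scap

# Replay the capture through falco as often as needed while tweaking rules.
# Run from the falco build directory, as in the "Running falco" section below.
sudo ./userspace/falco/falco -c ../falco.yaml -r ../rules/falco_rules.yaml -e suspicious.scap
```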
-Once deployed, Falco uses the Sysdig kernel module and userspace libraries to +Once deployed, falco uses the Sysdig kernel module and userspace libraries to watch for any events matching one of the conditions defined in the rule file. If a matching event occurs, a notification is written to the the configured output(s). @@ -60,18 +60,18 @@ configured output(s). _Call for contributions: If you come up with additional rules which you'd like to see in the core repository - PR welcome!_ -A Falco rules file is comprised of two kinds of elements: rules and macro definitions. Macros are simply definitions that can be re-used inside rules and other macros, providing a way to factor out and name common patterns. +A falco rules file is comprised of two kinds of elements: rules and macro definitions. Macros are simply definitions that can be re-used inside rules and other macros, providing a way to factor out and name common patterns. #### Conditions The key part of a rule is the _condition_ field. A condition is simply a boolean predicate on sysdig events. -Conditions are expressed using the Sysdig [filter syntax](http://www.sysdig.org/wiki/sysdig-user-guide/#filtering). Any Sysdig filter is a valid Falco condition (with the caveat of certain excluded system calls, discussed below). In addition, Falco expressions can contain _macro_ terms, which are not present in Sysdig syntax. +Conditions are expressed using the Sysdig [filter syntax](http://www.sysdig.org/wiki/sysdig-user-guide/#filtering). Any Sysdig filter is a valid falco condition (with the caveat of certain excluded system calls, discussed below). In addition, falco expressions can contain _macro_ terms, which are not present in Sysdig syntax. Here's an example of a condition that alerts whenever a bash shell is run inside a container: `container.id != host and proc.name = bash` -The first clause checks that the event happened in a container (sysdig events have a `container` field that is equal to "host" if the event happened on a regular host). The second clause checks that the process name is `bash`. Note that this condition does not even include a clause with system call! It only uses event metadata. As such, if a bash shell does start up in a container, Falco will output events for every syscall that is done by that shell. +The first clause checks that the event happened in a container (sysdig events have a `container` field that is equal to "host" if the event happened on a regular host). The second clause checks that the process name is `bash`. Note that this condition does not even include a clause with system call! It only uses event metadata. As such, if a bash shell does start up in a container, falco will output events for every syscall that is done by that shell. _Tip: If you're new to sysdig and unsure what fields are available, run `sysdig -l` to see the list of supported fields._ @@ -106,7 +106,7 @@ For many more examples of rules and macros, please take a look at the accompanyi #### Ignored system calls -For performance reasons, some system calls are currently discarded before Falco processing. The current list is: +For performance reasons, some system calls are currently discarded before falco processing. 
The current list is: `clock_getres,clock_gettime,clock_nanosleep,clock_settime,close,epoll_create,epoll_create1,epoll_ctl,epoll_pwait,epoll_wait,eventfd,fcntl,fcntl64,fstat,fstat64,fstatat64,fstatfs,fstatfs64,futex,getitimer,gettimeofday,ioprio_get,ioprio_set,llseek,lseek,lstat,lstat64,mmap,mmap2,munmap,nanosleep,poll,ppoll,pread64,preadv,procinfo,pselect6,pwrite64,pwritev,read,readv,recv,recvfrom,recvmmsg,recvmsg,sched_yield,select,send,sendfile,sendfile64,sendmmsg,sendmsg,sendto,setitimer,settimeofday,shutdown,splice,stat,stat64,statfs,statfs64,switch,tee,timer_create,timer_delete,timerfd_create,timerfd_gettime,timerfd_settime,timer_getoverrun,timer_gettime,timer_settime,wait4,write,writev` @@ -120,7 +120,7 @@ configuration options. ## Installation #### Scripted install -To install Falco automatically in one step, simply run the following command as root or with sudo: +To install falco automatically in one step, simply run the following command as root or with sudo: `curl -s https://s3.amazonaws.com/download.draios.com/stable/install-falco | sudo bash` @@ -145,7 +145,7 @@ Warning: The following command might not work with any kernel. Make sure to cust `yum -y install kernel-devel-$(uname -r)` -- Install Falco +- Install falco `yum -y install falco` @@ -168,7 +168,7 @@ Warning: The following command might not work with any kernel. Make sure to cust `apt-get -y install linux-headers-$(uname -r)` -- Install Falco +- Install falco `apt-get -y install falco` @@ -177,9 +177,9 @@ To uninstall, just do `apt-get remove falco`. ##### Container install (general) -If you have full control of your host operating system, then installing Falco using the normal installation method is the recommended best practice. This method allows full visibility into all containers on the host OS. No changes to the standard automatic/manual installation procedures are required. +If you have full control of your host operating system, then installing falco using the normal installation method is the recommended best practice. This method allows full visibility into all containers on the host OS. No changes to the standard automatic/manual installation procedures are required. -However, Falco can also run inside a Docker container. To guarantee a smooth deployment, the kernel headers must be installed in the host operating system, before running Falco. +However, falco can also run inside a Docker container. To guarantee a smooth deployment, the kernel headers must be installed in the host operating system, before running Falco. This can usually be done on Debian-like distributions with: `apt-get -y install linux-headers-$(uname -r)` @@ -196,11 +196,11 @@ docker run -i -t --name falco --privileged -v /var/run/docker.sock:/host/var/run ##### Container install (CoreOS) -The recommended way to run Falco on CoreOS is inside of its own Docker container using the install commands in the paragraph above. This method allows full visibility into all containers on the host OS. +The recommended way to run falco on CoreOS is inside of its own Docker container using the install commands in the paragraph above. This method allows full visibility into all containers on the host OS. This method is automatically updated, includes some nice features such as automatic setup and bash completion, and is a generic approach that can be used on other distributions outside CoreOS as well. -However, some users may prefer to run Falco in the CoreOS toolbox. 
While not the recommended method, this can be achieved by installing Falco inside the toolbox using the normal installation method, and then manually running the sysdig-probe-loader script: +However, some users may prefer to run falco in the CoreOS toolbox. While not the recommended method, this can be achieved by installing Falco inside the toolbox using the normal installation method, and then manually running the sysdig-probe-loader script: ``` toolbox --bind=/dev --bind=/var/run/docker.sock @@ -214,24 +214,24 @@ sysdig-probe-loader Falco is intended to be run as a service. But for experimentation and designing/testing rulesets, you will likely want to run it manually from the command-line. -#### Running Falco as a service (after installing package) +#### Running falco as a service (after installing package) `service falco start` -#### Running Falco in a container +#### Running falco in a container `docker run -i -t --name falco --privileged -v /var/run/docker.sock:/host/var/run/docker.sock -v /dev:/host/dev -v /proc:/host/proc:ro -v /boot:/host/boot:ro -v /lib/modules:/host/lib/modules:ro -v /usr:/host/usr:ro sysdig/falco` -#### Running Falco manually +#### Running falco manually Do `falco --help` to see the command-line options available when running manually. -## Building and running Falco locally from source -Building Falco requires having `cmake` and `g++` installed. +## Building and running falco locally from source +Building falco requires having `cmake` and `g++` installed. -#### Building Falco +#### Building falco Clone this repo in a directory that also contains the sysdig source repo. The result should be something like: ``` @@ -268,9 +268,9 @@ To load the locally built version, assuming you are in the `build` dir, use: `$ insmod driver/sysdig-probe.ko` -#### Running Falco +#### Running falco -Assuming you are in the `build` dir, you can run Falco as: +Assuming you are in the `build` dir, you can run falco as: `$ sudo ./userspace/falco/falco -c ../falco.yaml -r ../rules/falco_rules.yaml` From c9d2550ecd3597b46791f2ae0f4a23aae50d5233 Mon Sep 17 00:00:00 2001 From: Mark Stemm Date: Wed, 11 May 2016 10:44:32 -0700 Subject: [PATCH 04/20] Add minimal travis support. Add minimal travis.yml file that builds and packages falco. No actual tests yet. --- .travis.yml | 38 ++++++++++++++++++++++++++++++++++ test/falco_trace_regression.sh | 26 +++++++++++++++++++++++ 2 files changed, 64 insertions(+) create mode 100644 .travis.yml create mode 100755 test/falco_trace_regression.sh diff --git a/.travis.yml b/.travis.yml new file mode 100644 index 00000000000..a602ce5353b --- /dev/null +++ b/.travis.yml @@ -0,0 +1,38 @@ +language: c +env: + - BUILD_TYPE=Debug + - BUILD_TYPE=Release +before_install: + - sudo add-apt-repository -y ppa:ubuntu-toolchain-r/test + - sudo apt-get update +install: + - sudo apt-get --force-yes install g++-4.8 + - sudo apt-get install rpm linux-headers-$(uname -r) + - git clone https://github.com/draios/sysdig.git ../sysdig +before_script: + - export KERNELDIR=/lib/modules/$(ls /lib/modules | sort | head -1)/build +script: + - set -e + - export CC="gcc-4.8" + - export CXX="g++-4.8" + - wget https://s3.amazonaws.com/download.draios.com/dependencies/cmake-3.3.2.tar.gz + - tar -xzf cmake-3.3.2.tar.gz + - cd cmake-3.3.2 + - ./bootstrap --prefix=/usr + - make + - sudo make install + - cd .. + - mkdir build + - cd build + - cmake .. -DCMAKE_BUILD_TYPE=$BUILD_TYPE + - make VERBOSE=1 + - make package +# - cd .. 
+# - test/falco_trace_regression.sh build/userspace/falco/falco +notifications: + webhooks: + urls: +# - https://webhooks.gitter.im/e/fdbc2356fb0ea2f15033 + on_success: change + on_failure: always + on_start: never \ No newline at end of file diff --git a/test/falco_trace_regression.sh b/test/falco_trace_regression.sh new file mode 100755 index 00000000000..928c06f3b47 --- /dev/null +++ b/test/falco_trace_regression.sh @@ -0,0 +1,26 @@ +#!/bin/bash +set -eu + +SCRIPT=$(readlink -f $0) +BASEDIR=$(dirname $SCRIPT) + +FALCO=$1 + +# For now, simply ensure that falco can run without errors. +FALCO_CMDLINE="$FALCO -c $BASEDIR/../falco.yaml -r $BASEDIR/../rules/falco_rules.yaml" +echo "Running falco: $FALCO_CMDLINE" +$FALCO_CMDLINE > $BASEDIR/falco.log 2>&1 & +FALCO_PID=$! +echo "Falco started, pid $FALCO_PID" +sleep 10 +if kill -0 $FALCO_PID > /dev/null 2>&1; then + echo "Falco ran successfully" + kill $FALCO_PID + ret=0 +else + echo "Falco did not start successfully. Full program output:" + cat $BASEDIR/falco.log + ret=1 +fi + +exit $ret From 467fe33e37d9735828b2462dbba53fadea406dcf Mon Sep 17 00:00:00 2001 From: Mark Stemm Date: Tue, 17 May 2016 16:07:40 -0700 Subject: [PATCH 05/20] Add travis badges. Showing both dev/master branches for now. --- README.md | 3 +++ 1 file changed, 3 insertions(+) diff --git a/README.md b/README.md index e582da56d69..b98aaf02ecb 100644 --- a/README.md +++ b/README.md @@ -11,6 +11,9 @@ for an early release. On the other hand performance is still a work in progress. On busy hosts and/or with large rule sets, you may see the current version of falco using high CPU. Expect big improvements in coming releases. +Dev Branch: [![Build Status](https://travis-ci.org/draios/falco.svg?branch=dev)](https://travis-ci.org/draios/falco)
+Master Branch: [![Build Status](https://travis-ci.org/draios/falco.svg?branch=master)](https://travis-ci.org/draios/falco) + ####Table of Contents - [Overview](#overview) From 450c347ef38349aaf512dce62fc035bce16680f2 Mon Sep 17 00:00:00 2001 From: Mark Stemm Date: Tue, 17 May 2016 16:26:05 -0700 Subject: [PATCH 06/20] Add a basic test to run falco. Add a basic test that loads the kernel module from the source directory and runs falco. No testing of behavior yet. --- .travis.yml | 4 ++-- test/falco_trace_regression.sh | 4 ++++ 2 files changed, 6 insertions(+), 2 deletions(-) diff --git a/.travis.yml b/.travis.yml index a602ce5353b..d49533715c4 100644 --- a/.travis.yml +++ b/.travis.yml @@ -27,8 +27,8 @@ script: - cmake .. -DCMAKE_BUILD_TYPE=$BUILD_TYPE - make VERBOSE=1 - make package -# - cd .. -# - test/falco_trace_regression.sh build/userspace/falco/falco + - cd .. + - sudo test/falco_trace_regression.sh build/userspace/falco/falco notifications: webhooks: urls: diff --git a/test/falco_trace_regression.sh b/test/falco_trace_regression.sh index 928c06f3b47..a4b3498c7b2 100755 --- a/test/falco_trace_regression.sh +++ b/test/falco_trace_regression.sh @@ -5,6 +5,10 @@ SCRIPT=$(readlink -f $0) BASEDIR=$(dirname $SCRIPT) FALCO=$1 +BUILDDIR=$(dirname $FALCO) + +# Load the built kernel module by hand +insmod $BUILDDIR/../../driver/sysdig-probe.ko # For now, simply ensure that falco can run without errors. FALCO_CMDLINE="$FALCO -c $BASEDIR/../falco.yaml -r $BASEDIR/../rules/falco_rules.yaml" From 22dce619744ee68999cc616ef18d2080b8e2d4e7 Mon Sep 17 00:00:00 2001 From: Henri DF Date: Wed, 18 May 2016 09:32:04 -0700 Subject: [PATCH 07/20] Readme.md: overview tweaks --- README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/README.md b/README.md index b98aaf02ecb..9520e19f9a6 100644 --- a/README.md +++ b/README.md @@ -24,7 +24,7 @@ Master Branch: [![Build Status](https://travis-ci.org/draios/falco.svg?branch=ma ## Overview -Sysdig Falco is a behavioral activity monitor designed to secure your applications. Powered by Sysdig’s universal system level visibility, write simple and powerful rules, and then output warnings in the format you need. Continuously monitor and detect container, application, host, and network activity... all in one place, from one source of data, with one set of rules. +Sysdig Falco is a behavioral activity monitor designed to detect anomalous activity in your applications. Powered by sysdig’s system call capture infrastructure, falco lets you continuously monitor and detect container, application, host, and network activity... all in one place, from one source of data, with one set of rules. #### What kind of behaviors can Falco detect? From 2237532ff0701a0907640663e120fd526187047e Mon Sep 17 00:00:00 2001 From: Mark Stemm Date: Mon, 23 May 2016 17:20:15 -0700 Subject: [PATCH 08/20] Quote path variables that may contain spaces. Make sure that references to variables that may be paths (which in turn may contain spaces) are quoted, so cmake won't break on the spaces. This fixes https://github.com/draios/falco/issues/79. 
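For context, a sketch of the failure mode the quoting guards against, using a hypothetical checkout path that contains a space (the directory names here are placeholders, not part of the patch):

```bash
# Place the falco and sysdig checkouts side by side, as the build expects,
# but under a directory whose path contains a space.
WORKSPACE="/tmp/falco workspace"
mkdir -p "$WORKSPACE" && cd "$WORKSPACE"
git clone https://github.com/draios/sysdig.git
git clone https://github.com/draios/falco.git
mkdir -p falco/build && cd falco/build

# With unquoted ${SYSDIG_DIR}-style references, cmake can split the path at the
# space into two separate arguments and fail to locate the sysdig sources;
# quoting the variable references keeps the path intact.
cmake ..
```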
--- CMakeLists.txt | 10 +++++----- scripts/CMakeLists.txt | 8 ++++---- userspace/falco/CMakeLists.txt | 8 ++++---- 3 files changed, 13 insertions(+), 13 deletions(-) diff --git a/CMakeLists.txt b/CMakeLists.txt index 539ce783b87..a8ade60ce00 100644 --- a/CMakeLists.txt +++ b/CMakeLists.txt @@ -42,7 +42,7 @@ set(PROBE_DEVICE_NAME "sysdig") set(CMD_MAKE make) -set(SYSDIG_DIR ${PROJECT_SOURCE_DIR}/../sysdig) +set(SYSDIG_DIR "${PROJECT_SOURCE_DIR}/../sysdig") include(ExternalProject) @@ -151,7 +151,7 @@ ExternalProject_Add(lpeg DEPENDS luajit URL "http://s3.amazonaws.com/download.draios.com/dependencies/lpeg-1.0.0.tar.gz" URL_MD5 "0aec64ccd13996202ad0c099e2877ece" - BUILD_COMMAND LUA_INCLUDE=${LUAJIT_INCLUDE} ${PROJECT_SOURCE_DIR}/scripts/build-lpeg.sh + BUILD_COMMAND LUA_INCLUDE=${LUAJIT_INCLUDE} "${PROJECT_SOURCE_DIR}/scripts/build-lpeg.sh" BUILD_IN_SOURCE 1 CONFIGURE_COMMAND "" INSTALL_COMMAND "") @@ -180,9 +180,9 @@ ExternalProject_Add(lyaml install(FILES falco.yaml DESTINATION "${DIR_ETC}") -add_subdirectory(${SYSDIG_DIR}/driver ${PROJECT_BINARY_DIR}/driver) -add_subdirectory(${SYSDIG_DIR}/userspace/libscap ${PROJECT_BINARY_DIR}/userspace/libscap) -add_subdirectory(${SYSDIG_DIR}/userspace/libsinsp ${PROJECT_BINARY_DIR}/userspace/libsinsp) +add_subdirectory("${SYSDIG_DIR}/driver" "${PROJECT_BINARY_DIR}/driver") +add_subdirectory("${SYSDIG_DIR}/userspace/libscap" "${PROJECT_BINARY_DIR}/userspace/libscap") +add_subdirectory("${SYSDIG_DIR}/userspace/libsinsp" "${PROJECT_BINARY_DIR}/userspace/libsinsp") add_subdirectory(rules) add_subdirectory(scripts) diff --git a/scripts/CMakeLists.txt b/scripts/CMakeLists.txt index f9084aeef94..b8807b81695 100644 --- a/scripts/CMakeLists.txt +++ b/scripts/CMakeLists.txt @@ -1,5 +1,5 @@ -file(COPY ${PROJECT_SOURCE_DIR}/scripts/debian/falco - DESTINATION ${PROJECT_BINARY_DIR}/scripts/debian) +file(COPY "${PROJECT_SOURCE_DIR}/scripts/debian/falco" + DESTINATION "${PROJECT_BINARY_DIR}/scripts/debian") -file(COPY ${PROJECT_SOURCE_DIR}/scripts/rpm/falco - DESTINATION ${PROJECT_BINARY_DIR}/scripts/rpm) +file(COPY "${PROJECT_SOURCE_DIR}/scripts/rpm/falco" + DESTINATION "${PROJECT_BINARY_DIR}/scripts/rpm") diff --git a/userspace/falco/CMakeLists.txt b/userspace/falco/CMakeLists.txt index 6dc1fe5e7cf..fb241159e7f 100644 --- a/userspace/falco/CMakeLists.txt +++ b/userspace/falco/CMakeLists.txt @@ -1,12 +1,12 @@ -include_directories(${PROJECT_SOURCE_DIR}/../sysdig/userspace/libsinsp/third-party/jsoncpp) +include_directories("${PROJECT_SOURCE_DIR}/../sysdig/userspace/libsinsp/third-party/jsoncpp") include_directories("${LUAJIT_INCLUDE}") -include_directories(${PROJECT_SOURCE_DIR}/../sysdig/userspace/libscap) -include_directories(${PROJECT_SOURCE_DIR}/../sysdig/userspace/libsinsp) +include_directories("${PROJECT_SOURCE_DIR}/../sysdig/userspace/libscap") +include_directories("${PROJECT_SOURCE_DIR}/../sysdig/userspace/libsinsp") include_directories("${PROJECT_BINARY_DIR}/userspace/falco") include_directories("${CURL_INCLUDE_DIR}") include_directories("${YAMLCPP_INCLUDE_DIR}") -include_directories(${DRAIOS_DEPENDENCIES_DIR}/yaml-${DRAIOS_YAML_VERSION}/target/include) +include_directories("${DRAIOS_DEPENDENCIES_DIR}/yaml-${DRAIOS_YAML_VERSION}/target/include") add_executable(falco configuration.cpp formats.cpp fields.cpp rules.cpp logger.cpp falco.cpp) From 66cedc89f26fe8a6383364ddca78af4ef1b0e298 Mon Sep 17 00:00:00 2001 From: Mark Stemm Date: Mon, 23 May 2016 17:24:38 -0700 Subject: [PATCH 09/20] Don't null-check inspector. delete(NULL) is ok so don't bother protecting it. 
--- userspace/falco/falco.cpp | 5 +---- 1 file changed, 1 insertion(+), 4 deletions(-) diff --git a/userspace/falco/falco.cpp b/userspace/falco/falco.cpp index 6b92058fadb..35c3a3443cc 100644 --- a/userspace/falco/falco.cpp +++ b/userspace/falco/falco.cpp @@ -494,10 +494,7 @@ int falco_init(int argc, char **argv) exit: - if(inspector) - { - delete inspector; - } + delete inspector; if(ls) { From 1a2719437f56f0eb3489027a32a123b0fb85c7d5 Mon Sep 17 00:00:00 2001 From: Mark Stemm Date: Thu, 19 May 2016 16:17:27 -0700 Subject: [PATCH 10/20] Add graceful shutdown on SIGINT/SIGTERM. Add signal handlers for SIGINT/SIGTERM that set a shutdown flag. Initialize the live inspector with a timeout so the main loop can watch the flag set by the signal handlers. --- userspace/falco/falco.cpp | 30 ++++++++++++++++++++++++++++-- 1 file changed, 28 insertions(+), 2 deletions(-) diff --git a/userspace/falco/falco.cpp b/userspace/falco/falco.cpp index 6b92058fadb..4cc8bae8515 100644 --- a/userspace/falco/falco.cpp +++ b/userspace/falco/falco.cpp @@ -28,6 +28,14 @@ extern "C" { #include "utils.h" #include +bool g_terminate = false; +// +// Helper functions +// +static void signal_callback(int signal) +{ + g_terminate = true; +} // // Program help @@ -90,7 +98,11 @@ void do_inspect(sinsp* inspector, res = inspector->next(&ev); - if(res == SCAP_TIMEOUT) + if (g_terminate) + { + break; + } + else if(res == SCAP_TIMEOUT) { continue; } @@ -398,6 +410,20 @@ int falco_init(int argc, char **argv) add_output(ls, *it); } + if(signal(SIGINT, signal_callback) == SIG_ERR) + { + fprintf(stderr, "An error occurred while setting SIGINT signal handler.\n"); + result = EXIT_FAILURE; + goto exit; + } + + if(signal(SIGTERM, signal_callback) == SIG_ERR) + { + fprintf(stderr, "An error occurred while setting SIGTERM signal handler.\n"); + result = EXIT_FAILURE; + goto exit; + } + if (scap_filename.size()) { inspector->open(scap_filename); @@ -406,7 +432,7 @@ int falco_init(int argc, char **argv) { try { - inspector->open(); + inspector->open(200); } catch(sinsp_exception e) { From a41bb0dac06e2f1db7e004ee072986d0d63b1321 Mon Sep 17 00:00:00 2001 From: Mark Stemm Date: Thu, 19 May 2016 16:19:45 -0700 Subject: [PATCH 11/20] Print stats when shutting down. At shutdown, print stats on the number of rules triggered by severity and rule name. This is done by a lua function print_stats and the associated table rule_output_counts. When passing rules to outputs, update the counts in rule_output_counts. --- userspace/falco/falco.cpp | 23 +++++++++++++++++ userspace/falco/lua/output.lua | 2 ++ userspace/falco/lua/rule_loader.lua | 39 ++++++++++++++++++++++++++++- 3 files changed, 63 insertions(+), 1 deletion(-) diff --git a/userspace/falco/falco.cpp b/userspace/falco/falco.cpp index 4cc8bae8515..01b3019c664 100644 --- a/userspace/falco/falco.cpp +++ b/userspace/falco/falco.cpp @@ -75,6 +75,7 @@ static void display_fatal_err(const string &msg, bool daemon) string lua_on_event = "on_event"; string lua_add_output = "add_output"; +string lua_print_stats = "print_stats"; // Splitting into key=value or key.subkey=value will be handled by configuration class. 
std::list cmdline_options; @@ -211,6 +212,26 @@ void add_output(lua_State *ls, output_config oc) } +// Print statistics on the the rules that triggered +void print_stats(lua_State *ls) +{ + lua_getglobal(ls, lua_print_stats.c_str()); + + if(lua_isfunction(ls, -1)) + { + if(lua_pcall(ls, 0, 0, 0) != 0) + { + const char* lerr = lua_tostring(ls, -1); + string err = "Error invoking function print_stats: " + string(lerr); + throw sinsp_exception(err); + } + } + else + { + throw sinsp_exception("No function " + lua_print_stats + " found in lua rule loader module"); + } + +} // // ARGUMENT PARSING AND PROGRAM SETUP @@ -504,6 +525,8 @@ int falco_init(int argc, char **argv) ls); inspector->close(); + + print_stats(ls); } catch(sinsp_exception& e) { diff --git a/userspace/falco/lua/output.lua b/userspace/falco/lua/output.lua index 78573b947c4..0bef1712ab3 100644 --- a/userspace/falco/lua/output.lua +++ b/userspace/falco/lua/output.lua @@ -2,6 +2,8 @@ local mod = {} levels = {"Emergency", "Alert", "Critical", "Error", "Warning", "Notice", "Informational", "Debug"} +mod.levels = levels + local outputs = {} function mod.stdout(evt, level, format) diff --git a/userspace/falco/lua/rule_loader.lua b/userspace/falco/lua/rule_loader.lua index f5cc888264d..0e041f7e7f3 100644 --- a/userspace/falco/lua/rule_loader.lua +++ b/userspace/falco/lua/rule_loader.lua @@ -230,12 +230,49 @@ function describe_rule(name) end end +local rule_output_counts = {by_level={}, by_name={}} + +for idx, level in ipairs(output.levels) do + rule_output_counts[level] = 0 +end + function on_event(evt_, rule_id) if state.rules_by_idx[rule_id] == nil then error ("rule_loader.on_event(): event with invalid rule_id: ", rule_id) end - output.event(evt_, state.rules_by_idx[rule_id].level, state.rules_by_idx[rule_id].output) + local rule = state.rules_by_idx[rule_id] + + if rule_output_counts.by_level[rule.level] == nil then + rule_output_counts.by_level[rule.level] = 1 + else + rule_output_counts.by_level[rule.level] = rule_output_counts.by_level[rule.level] + 1 + end + + if rule_output_counts.by_name[rule.rule] == nil then + rule_output_counts.by_name[rule.rule] = 1 + else + rule_output_counts.by_name[rule.rule] = rule_output_counts.by_name[rule.rule] + 1 + end + + output.event(evt_, rule.level, rule.output) end +function print_stats() + print("Rule counts by severity:") + for idx, level in ipairs(output.levels) do + -- To keep the output concise, we only print 0 counts for error, warning, and info levels + if rule_output_counts[level] > 0 or level == "Error" or level == "Warning" or level == "Informational" then + print (" "..level..": "..rule_output_counts[level]) + end + end + + print("Triggered rules by rule name:") + for name, count in pairs(rule_output_counts.by_name) do + print (" "..name..": "..count) + end +end + + + From 4751546c033660b35fb174af5399fc7abee8f964 Mon Sep 17 00:00:00 2001 From: Mark Stemm Date: Wed, 18 May 2016 17:08:01 -0700 Subject: [PATCH 12/20] Add correctness tests using Avocado Start using the Avocado framework for automated regression testing. Create a test FalcoTest in falco_test.py which can run on a collection of trace files. The script test/run_regression_tests.sh is responsible for pulling zip files containing the positive (falco should detect) and negative (falco should not detect) trace files, creating a Avocado multiplex file that defines all the tests (one for each trace file), running avocado on all the trace files, and showing full logs for any test that didn't pass. 
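As a rough usage sketch (assuming the test/ layout this patch introduces), the pieces can also be exercised by hand:

```bash
# End-to-end: download the trace archives, generate the multiplex file,
# run every test, and print full logs for any failures.
cd test
sudo ./run_regression_tests.sh

# Or, once test/falco_tests.yaml has been generated, invoke avocado directly
# with the same arguments the script uses.
avocado run --multiplex falco_tests.yaml --job-results-dir job-results -- ./falco_test.py
```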
The old regression script, which simply ran falco, has been removed. Modify falco's stats output to show the total number of events detected for use in the tests. In travis.yml, pull a known stable version of avocado and build it, including installing any dependencies, as a part of the build process. --- .travis.yml | 10 ++++- test/falco_test.py | 63 +++++++++++++++++++++++++++++ test/falco_trace_regression.sh | 30 -------------- test/run_regression_tests.sh | 62 ++++++++++++++++++++++++++++ userspace/falco/lua/rule_loader.lua | 4 +- 5 files changed, 137 insertions(+), 32 deletions(-) create mode 100644 test/falco_test.py delete mode 100755 test/falco_trace_regression.sh create mode 100755 test/run_regression_tests.sh diff --git a/.travis.yml b/.travis.yml index d49533715c4..fe37c22f52b 100644 --- a/.travis.yml +++ b/.travis.yml @@ -9,6 +9,14 @@ install: - sudo apt-get --force-yes install g++-4.8 - sudo apt-get install rpm linux-headers-$(uname -r) - git clone https://github.com/draios/sysdig.git ../sysdig + - sudo apt-get install -y python-pip libvirt-dev jq + - cd .. + - curl -Lo avocado-36.0-tar.gz https://github.com/avocado-framework/avocado/archive/36.0lts.tar.gz + - tar -zxvf avocado-36.0-tar.gz + - cd avocado-36.0lts + - sudo pip install -r requirements-travis.txt + - sudo python setup.py install + - cd ../falco before_script: - export KERNELDIR=/lib/modules/$(ls /lib/modules | sort | head -1)/build script: @@ -28,7 +36,7 @@ script: - make VERBOSE=1 - make package - cd .. - - sudo test/falco_trace_regression.sh build/userspace/falco/falco + - sudo test/run_regression_tests.sh notifications: webhooks: urls: diff --git a/test/falco_test.py b/test/falco_test.py new file mode 100644 index 00000000000..72875c1c063 --- /dev/null +++ b/test/falco_test.py @@ -0,0 +1,63 @@ +#!/usr/bin/env python + +import os +import re + +from avocado import Test +from avocado.utils import process +from avocado.utils import linux_modules + +class FalcoTest(Test): + + def setUp(self): + """ + Load the sysdig kernel module if not already loaded. + """ + self.falcodir = self.params.get('falcodir', '/', default=os.path.join(self.basedir, '../build')) + + self.should_detect = self.params.get('detect', '*') + self.trace_file = self.params.get('trace_file', '*') + + # Doing this in 2 steps instead of simply using + # module_is_loaded to avoid logging lsmod output to the log. + lsmod_output = process.system_output("lsmod", verbose=False) + + if linux_modules.parse_lsmod_for_module(lsmod_output, 'sysdig_probe') == {}: + self.log.debug("Loading sysdig kernel module") + process.run('sudo insmod {}/driver/sysdig-probe.ko'.format(self.falcodir)) + + self.str_variant = self.trace_file + + def test(self): + self.log.info("Trace file %s", self.trace_file) + + # Run the provided trace file though falco + cmd = '{}/userspace/falco/falco -r {}/../rules/falco_rules.yaml -c {}/../falco.yaml -e {}'.format( + self.falcodir, self.falcodir, self.falcodir, self.trace_file) + + self.falco_proc = process.SubProcess(cmd) + + res = self.falco_proc.run(timeout=60, sig=9) + + if res.exit_status != 0: + self.error("Falco command \"{}\" exited with non-zero return value {}".format( + cmd, res.exit_status)) + + # Get the number of events detected. 
+ res = re.search('Events detected: (\d+)', res.stdout) + if res is None: + self.fail("Could not find a line 'Events detected: ' in falco output") + + events_detected = int(res.group(1)) + + if not self.should_detect and events_detected > 0: + self.fail("Detected {} events when should have detected none".format(events_detected)) + + if self.should_detect and events_detected == 0: + self.fail("Detected {} events when should have detected > 0".format(events_detected)) + + pass + + +if __name__ == "__main__": + main() diff --git a/test/falco_trace_regression.sh b/test/falco_trace_regression.sh deleted file mode 100755 index a4b3498c7b2..00000000000 --- a/test/falco_trace_regression.sh +++ /dev/null @@ -1,30 +0,0 @@ -#!/bin/bash -set -eu - -SCRIPT=$(readlink -f $0) -BASEDIR=$(dirname $SCRIPT) - -FALCO=$1 -BUILDDIR=$(dirname $FALCO) - -# Load the built kernel module by hand -insmod $BUILDDIR/../../driver/sysdig-probe.ko - -# For now, simply ensure that falco can run without errors. -FALCO_CMDLINE="$FALCO -c $BASEDIR/../falco.yaml -r $BASEDIR/../rules/falco_rules.yaml" -echo "Running falco: $FALCO_CMDLINE" -$FALCO_CMDLINE > $BASEDIR/falco.log 2>&1 & -FALCO_PID=$! -echo "Falco started, pid $FALCO_PID" -sleep 10 -if kill -0 $FALCO_PID > /dev/null 2>&1; then - echo "Falco ran successfully" - kill $FALCO_PID - ret=0 -else - echo "Falco did not start successfully. Full program output:" - cat $BASEDIR/falco.log - ret=1 -fi - -exit $ret diff --git a/test/run_regression_tests.sh b/test/run_regression_tests.sh new file mode 100755 index 00000000000..9f6b2a28863 --- /dev/null +++ b/test/run_regression_tests.sh @@ -0,0 +1,62 @@ +#!/bin/bash + +SCRIPT=$(readlink -f $0) +SCRIPTDIR=$(dirname $SCRIPT) +MULT_FILE=$SCRIPTDIR/falco_tests.yaml + +function download_trace_files() { + for TRACE in traces-positive traces-negative ; do + curl -so $SCRIPTDIR/$TRACE.zip https://s3.amazonaws.com/download.draios.com/falco-tests/$TRACE.zip && + unzip -d $SCRIPTDIR $SCRIPTDIR/$TRACE.zip && + rm -rf $SCRIPTDIR/$TRACE.zip + done +} + +function prepare_multiplex_file() { + echo "trace_files: !mux" > $MULT_FILE + + for trace in $SCRIPTDIR/traces-positive/*.scap ; do + [ -e "$trace" ] || continue + NAME=`basename $trace .scap` + cat << EOF >> $MULT_FILE + $NAME: + detect: True + trace_file: $trace +EOF + done + + for trace in $SCRIPTDIR/traces-negative/*.scap ; do + [ -e "$trace" ] || continue + NAME=`basename $trace .scap` + cat << EOF >> $MULT_FILE + $NAME: + detect: False + trace_file: $trace +EOF + done + + echo "Contents of $MULT_FILE:" + cat $MULT_FILE +} + +function run_tests() { + CMD="avocado run --multiplex $MULT_FILE --job-results-dir $SCRIPTDIR/job-results -- $SCRIPTDIR/falco_test.py" + echo "Running: $CMD" + $CMD + TEST_RC=$? 
+} + + +function print_test_failure_details() { + echo "Showing full job logs for any tests that failed:" + jq '.tests[] | select(.status != "PASS") | .logfile' $SCRIPTDIR/job-results/latest/results.json | xargs cat +} + +download_trace_files +prepare_multiplex_file +run_tests +if [ $TEST_RC -ne 0 ]; then + print_test_failure_details +fi + +exit $TEST_RC diff --git a/userspace/falco/lua/rule_loader.lua b/userspace/falco/lua/rule_loader.lua index 0e041f7e7f3..7a9774a7666 100644 --- a/userspace/falco/lua/rule_loader.lua +++ b/userspace/falco/lua/rule_loader.lua @@ -230,7 +230,7 @@ function describe_rule(name) end end -local rule_output_counts = {by_level={}, by_name={}} +local rule_output_counts = {total=0, by_level={}, by_name={}} for idx, level in ipairs(output.levels) do rule_output_counts[level] = 0 @@ -242,6 +242,7 @@ function on_event(evt_, rule_id) error ("rule_loader.on_event(): event with invalid rule_id: ", rule_id) end + rule_output_counts.total = rule_output_counts.total + 1 local rule = state.rules_by_idx[rule_id] if rule_output_counts.by_level[rule.level] == nil then @@ -260,6 +261,7 @@ function on_event(evt_, rule_id) end function print_stats() + print("Events detected: "..rule_output_counts.total) print("Rule counts by severity:") for idx, level in ipairs(output.levels) do -- To keep the output concise, we only print 0 counts for error, warning, and info levels From b3ae480facc6eaf6ed4b588e665beafa23b7ea3d Mon Sep 17 00:00:00 2001 From: Mark Stemm Date: Mon, 23 May 2016 15:32:12 -0700 Subject: [PATCH 13/20] Another round of rule cleanups. Do another round of rule cleanups now that we have a larger set of positive and negative trace files to work with. Outside of this commit, there are now trace files for all the positive rules, a docker-compose startup and teardown, and some trace files from the sysdig cloud staging environment. Also add a script that runs sysdig with a filter that removes all the syscalls not handled by falco as well as a few other high-volume, low-information syscalls. This script was used to create the staging environment trace files. Notable rule changes: - The direction for write_binary_dir/write_etc needs to be exit instead of enter, as the bin_dir clause works on the file descriptor returned by the open/openat call. - Add login as a trusted binary that can read sensitive files (occurs for direct console logins). - sshd can read sensitive files well after startup, so exclude it from the set of binaries that can trigger read_sensitive_file_trusted_after_startup. - limit run_shell_untrusted to non-containers. - Disable the ssh_error_syslog rule for now. With the current restriction on system calls (no read/write/sendto/recvfrom/etc), you won't see the ssh error messages. Nevertheless, add a string to look for to indicate ssh errors and add systemd's true location for the syslog device. - Sshd attemps to setuid even when it's not running as root, so exclude it from the set of binaries to monitor for now. - Let programs that are direct decendants of systemd spawn user management tasks for now. - Temporarily disable the EACCESS rule. This rule is exposing a bug in sysdig in debug mode, https://github.com/draios/sysdig/issues/598. The rule is also pretty noisy so I'll keep it disabled until the sysdig bug is fixed. - The etc_dir and bin_dir macros both have the problem that they match pathnames with /etc/, /bin/, etc in the middle of the path, as sysdig doesn't have a "begins with" comparison. Add notes for that. 
- Change spawn_process to spawned_process to indicate that it's for the exit side of the execve. Also use it in a few places that were looking for the same conditions without any macro. - Get rid of adduser_binaries and fold any programs not already present into shadowutils_binaries. - Add new groups sysdigcloud_binaries and sysdigcloud_binaries_parent and add them as exceptions for write_etc/write_binary_dir. - Add yum as a package management binary and add it as an exception to write_etc/write_binary_dir. - Change how db_program_spawned_process works. Since all of the useful information is on the exit side of the event, you can't really add a condition based on the process being new. Isntead, have the rule check for a non-database-related program being spawned by a database-related program. - Allow dragent to run shells. - Add sendmail, sendmail-msp as a program that attempts to setuid. - Some of the *_binaries macros that were based on dpkg -L accidentally contained directories in addition to end files. Trim those. - Add systemd-logind as a login_binary. - Add unix_chkpwd as a shadowutils_binary. - Add parentheses around any macros that group items using or. I found this necessary when the macro is used in the middle of a list of and conditions. - Break out system_binaries into a new subset user_mgmt_binaries containing login_, passwd_, and shadowutils_ binaries. That way you don't have to pull in all of system_binaries when looking for sensisitive files or user management activity. - Rename fs-bash to fbash, thinking ahead to its more likely name. --- rules/falco_rules.yaml | 131 +++++++++++++++++++++++---------------- test/utils/run_sysdig.sh | 9 +++ 2 files changed, 87 insertions(+), 53 deletions(-) create mode 100644 test/utils/run_sysdig.sh diff --git a/rules/falco_rules.yaml b/rules/falco_rules.yaml index b0cdb0ab5e7..1d451af991e 100644 --- a/rules/falco_rules.yaml +++ b/rules/falco_rules.yaml @@ -38,12 +38,16 @@ - macro: modify condition: rename or remove -- macro: spawn_process - condition: syscall.type = execve and evt.dir=< +- macro: spawned_process + condition: evt.type = execve and evt.dir=< # File categories - macro: terminal_file_fd condition: fd.name=/dev/ptmx or fd.directory=/dev/pts + +# This really should be testing that the directory begins with these +# prefixes but sysdig's filter doesn't have a "starts with" operator +# (yet). - macro: bin_dir condition: fd.directory in (/bin, /sbin, /usr/bin, /usr/sbin) @@ -52,6 +56,8 @@ - macro: bin_dir_rename condition: evt.arg[1] contains /bin/ or evt.arg[1] contains /sbin/ or evt.arg[1] contains /usr/bin/ or evt.arg[1] contains /usr/sbin/ +# This really should be testing that the directory begins with /etc, +# but sysdig's filter doesn't have a "starts with" operator (yet). 
- macro: etc_dir condition: fd.directory contains /etc @@ -74,25 +80,31 @@ tac, link, chroot, vdir, chown, touch, ls, dd, uname, true, pwd, date, chgrp, chmod, mktemp, cat, mknod, sync, ln, false, rm, mv, cp, echo, readlink, sleep, stty, mkdir, df, dir, rmdir, touch) -- macro: adduser_binaries - condition: proc.name in (adduser, deluser, addgroup, delgroup) + +# dpkg -L login | grep bin | xargs ls -ld | grep -v '^d' | awk '{print $9}' | xargs -L 1 basename | tr "\\n" "," - macro: login_binaries - condition: proc.name in (bin, login, su, sbin, nologin, bin, faillog, lastlog, newgrp, sg) + condition: proc.name in (login, systemd-logind, su, nologin, faillog, lastlog, newgrp, sg) -# dpkg -L passwd | grep bin | xargs -L 1 basename | tr "\\n" "," +# dpkg -L passwd | grep bin | xargs ls -ld | grep -v '^d' | awk '{print $9}' | xargs -L 1 basename | tr "\\n" "," - macro: passwd_binaries condition: > - proc.name in (sbin, shadowconfig, sbin, grpck, pwunconv, grpconv, pwck, + proc.name in (shadowconfig, grpck, pwunconv, grpconv, pwck, groupmod, vipw, pwconv, useradd, newusers, cppw, chpasswd, usermod, - groupadd, groupdel, grpunconv, chgpasswd, userdel, bin, chage, chsh, + groupadd, groupdel, grpunconv, chgpasswd, userdel, chage, chsh, gpasswd, chfn, expiry, passwd, vigr, cpgr) -# repoquery -l shadow-utils | grep bin | xargs -L 1 basename | tr "\\n" "," +# repoquery -l shadow-utils | grep bin | xargs ls -ld | grep -v '^d' | awk '{print $9}' | xargs -L 1 basename | tr "\\n" "," - macro: shadowutils_binaries condition: > - proc.name in (chage, gpasswd, lastlog, newgrp, sg, adduser, chpasswd, - groupadd, groupdel, groupmems, groupmod, grpck, grpconv, grpunconv, - newusers, pwck, pwconv, pwunconv, useradd, userdel, usermod, vigr, vipw) + proc.name in (chage, gpasswd, lastlog, newgrp, sg, adduser, deluser, chpasswd, + groupadd, groupdel, addgroup, delgroup, groupmems, groupmod, grpck, grpconv, grpunconv, + newusers, pwck, pwconv, pwunconv, useradd, userdel, usermod, vigr, vipw, unix_chkpwd) + +- macro: sysdigcloud_binaries + condition: proc.name in (setup-backend, dragent) + +- macro: sysdigcloud_binaries_parent + condition: proc.pname in (setup-backend, dragent) - macro: docker_binaries condition: proc.name in (docker, exe) @@ -103,25 +115,33 @@ - macro: db_server_binaries condition: proc.name in (mysqld) +- macro: db_server_binaries_parent + condition: proc.pname in (mysqld) + - macro: server_binaries - condition: http_server_binaries or db_server_binaries or docker_binaries or proc.name in (sshd) + condition: (http_server_binaries or db_server_binaries or docker_binaries or proc.name in (sshd)) +# The truncated dpkg-preconfigu is intentional, process names are +# truncated at the sysdig level. - macro: package_mgmt_binaries - condition: proc.name in (dpkg, rpm) + condition: proc.name in (dpkg, dpkg-preconfigu, rpm, yum) # A canonical set of processes that run other programs with different # privileges or as a different user. 
- macro: userexec_binaries condition: proc.name in (sudo, su) +- macro: user_mgmt_binaries + condition: (login_binaries or passwd_binaries or shadowutils_binaries) + - macro: system_binaries - condition: coreutils_binaries or adduser_binaries or login_binaries or passwd_binaries or shadowutils_binaries + condition: (coreutils_binaries or user_mgmt_binaries) - macro: mail_binaries - condition: proc.name in (sendmail, postfix, procmail) + condition: proc.name in (sendmail, sendmail-msp, postfix, procmail) - macro: sensitive_files - condition: fd.name contains /etc/shadow or fd.name = /etc/sudoers or fd.directory = /etc/sudoers.d or fd.directory = /etc/pam.d or fd.name = /etc/pam.conf + condition: (fd.name contains /etc/shadow or fd.name = /etc/sudoers or fd.directory = /etc/sudoers.d or fd.directory = /etc/pam.d or fd.name = /etc/pam.conf) # Indicates that the process is new. Currently detected using time # since process was started, using a threshold of 5 seconds. @@ -130,7 +150,7 @@ # Network - macro: inbound - condition: (syscall.type=listen and evt.dir=>) or (syscall.type=accept and evt.dir=<) + condition: ((syscall.type=listen and evt.dir=>) or (syscall.type=accept and evt.dir=<)) # Currently sendto is an ignored syscall, otherwise this could also check for (syscall.type=sendto and evt.dir=>) - macro: outbound @@ -141,7 +161,7 @@ # Ssh - macro: ssh_error_message - condition: evt.arg.data contains "Invalid user" or evt.arg.data contains "preauth" + condition: (evt.arg.data contains "Invalid user" or evt.arg.data contains "preauth" or evt.arg.data contains "Failed password") # System - macro: modules @@ -149,9 +169,9 @@ - macro: container condition: container.id != host - macro: interactive - condition: (proc.aname=sshd and proc.name != sshd) or proc.name=systemd-logind + condition: ((proc.aname=sshd and proc.name != sshd) or proc.name=systemd-logind) - macro: syslog - condition: fd.name = /dev/log + condition: fd.name in (/dev/log, /run/systemd/journal/syslog) - macro: cron condition: proc.name in (cron, crond) - macro: parent_cron @@ -169,32 +189,32 @@ - rule: write_binary_dir desc: an attempt to write to any file below a set of binary directories - condition: evt.dir = > and open_write and bin_dir + condition: evt.dir = < and open_write and not package_mgmt_binaries and bin_dir output: "File below a known binary directory opened for writing (user=%user.name command=%proc.cmdline file=%fd.name)" priority: WARNING - rule: write_etc desc: an attempt to write to any file below /etc - condition: evt.dir = > and open_write and etc_dir + condition: evt.dir = < and open_write and not shadowutils_binaries and not sysdigcloud_binaries_parent and not package_mgmt_binaries and etc_dir output: "File below /etc opened for writing (user=%user.name command=%proc.cmdline file=%fd.name)" priority: WARNING - rule: read_sensitive_file_untrusted desc: an attempt to read any sensitive file (e.g. files containing user/password/authentication information). Exceptions are made for known trusted programs. 
- condition: open_read and not server_binaries and not userexec_binaries and not proc.name in (iptables, ps, systemd-logind, lsb_release, check-new-relea, dumpe2fs, accounts-daemon, bash) and not cron and sensitive_files + condition: open_read and not user_mgmt_binaries and not userexec_binaries and not proc.name in (iptables, ps, lsb_release, check-new-relea, dumpe2fs, accounts-daemon, bash, sshd) and not cron and sensitive_files output: "Sensitive file opened for reading by non-trusted program (user=%user.name command=%proc.cmdline file=%fd.name)" priority: WARNING - rule: read_sensitive_file_trusted_after_startup desc: an attempt to read any sensitive file (e.g. files containing user/password/authentication information) by a trusted program after startup. Trusted programs might read these files at startup to load initial state, but not afterwards. - condition: open_read and server_binaries and not proc_is_new and sensitive_files + condition: open_read and server_binaries and not proc_is_new and sensitive_files and proc.name!="sshd" output: "Sensitive file opened for reading by trusted program after startup (user=%user.name command=%proc.cmdline file=%fd.name)" priority: WARNING -- rule: db_program_spawn_process - desc: a database-server related program spawning a new process after startup. This shouldn\'t occur and is a follow on from some SQL injection attacks. - condition: db_server_binaries and not proc_is_new and spawn_process - output: "Database-related program spawned new process after startup (user=%user.name command=%proc.cmdline)" +- rule: db_program_spawned_process + desc: a database-server related program spawned a new process other than itself. This shouldn\'t occur and is a follow on from some SQL injection attacks. + condition: db_server_binaries_parent and not db_server_binaries and spawned_process + output: "Database-related program spawned process other than itself (user=%user.name program=%proc.cmdline parent=%proc.pname)" priority: WARNING - rule: modify_binary_dirs @@ -218,11 +238,12 @@ # output: "Loaded .so from unexpected dir (%user.name %proc.name %evt.dir %evt.type %evt.args %fd.name)" # priority: WARNING -- rule: syscall_returns_eaccess - desc: any system call that returns EACCESS. This is not always a strong indication of a problem, hence the INFO priority. - condition: evt.res = EACCESS - output: "System call returned EACCESS (user=%user.name command=%proc.cmdline syscall=%evt.type args=%evt.args)" - priority: INFO +# Temporarily disabling this rule as it's tripping over https://github.com/draios/sysdig/issues/598 +# - rule: syscall_returns_eaccess +# desc: any system call that returns EACCESS. This is not always a strong indication of a problem, hence the INFO priority. +# condition: evt.res = EACCESS +# output: "System call returned EACCESS (user=%user.name command=%proc.cmdline syscall=%evt.type args=%evt.args)" +# priority: INFO - rule: change_thread_namespace desc: an attempt to change a program/thread\'s namespace (commonly done as a part of creating a container) by calling setns. @@ -232,7 +253,7 @@ - rule: run_shell_untrusted desc: an attempt to spawn a shell by a non-shell program. Exceptions are made for trusted binaries. 
- condition: proc.name = bash and evt.dir=< and evt.type=execve and proc.pname exists and not parent_cron and not proc.pname in (bash, sshd, sudo, docker, su, tmux, screen, emacs, systemd, flock, fs-bash, nginx, monit, supervisord) + condition: not container and proc.name = bash and spawned_process and proc.pname exists and not parent_cron and not proc.pname in (bash, sshd, sudo, docker, su, tmux, screen, emacs, systemd, login, flock, fbash, nginx, monit, supervisord, dragent) output: "Shell spawned by untrusted binary (user=%user.name shell=%proc.name parent=%proc.pname cmdline=%proc.cmdline)" priority: WARNING @@ -243,13 +264,13 @@ - rule: system_user_interactive desc: an attempt to run interactive commands by a system (i.e. non-login) user - condition: spawn_process and system_users and interactive + condition: spawned_process and system_users and interactive output: "System user ran an interactive command (user=%user.name command=%proc.cmdline)" priority: WARNING - rule: run_shell_in_container - desc: an attempt to spawn a shell by a non-shell program in a container. Container entrypoints are excluded. - condition: container and proc.name = bash and evt.dir=< and evt.type=execve and proc.pname exists and not proc.pname in (bash, docker) + desc: a shell was spawned by a non-shell program in a container. Container entrypoints are excluded. + condition: container and proc.name = bash and spawned_process and proc.pname exists and not proc.pname in (bash, docker) output: "Shell spawned in a container other than entrypoint (user=%user.name container_id=%container.id container_name=%container.name shell=%proc.name parent=%proc.pname cmdline=%proc.cmdline)" priority: WARNING @@ -260,22 +281,26 @@ output: "Known system binary sent/received network traffic (user=%user.name command=%proc.cmdline connection=%fd.name)" priority: WARNING -- rule: ssh_error_syslog - desc: any ssh errors (failed logins, disconnects, ...) sent to syslog - condition: syslog and ssh_error_message and evt.dir = < - output: "sshd sent error message to syslog (error=%evt.buffer)" - priority: WARNING +# With the current restriction on system calls handled by falco +# (e.g. excluding read/write/sendto/recvfrom/etc, this rule won't +# trigger). +# - rule: ssh_error_syslog +# desc: any ssh errors (failed logins, disconnects, ...) sent to syslog +# condition: syslog and ssh_error_message and evt.dir = < +# output: "sshd sent error message to syslog (error=%evt.buffer)" +# priority: WARNING +# sshd, sendmail-msp, sendmail attempt to setuid to root even when running as non-root. Excluding here to avoid meaningless FPs - rule: non_sudo_setuid desc: an attempt to change users by calling setuid. sudo/su are excluded. user "root" is also excluded, as setuid calls typically involve dropping privileges. - condition: evt.type=setuid and evt.dir=> and not user.name=root and not userexec_binaries + condition: evt.type=setuid and evt.dir=> and not user.name=root and not userexec_binaries and not proc.name in (sshd, sendmail-msp, sendmail) output: "Unexpected setuid call by non-sudo, non-root program (user=%user.name command=%proc.cmdline uid=%evt.arg.uid)" priority: WARNING - rule: user_mgmt_binaries desc: activity by any programs that can manage users, passwords, or permissions. sudo and su are excluded. Activity in containers is also excluded--some containers create custom users on top of a base linux distribution at startup. 
- condition: spawn_process and not proc.name in (su, sudo) and not container and (adduser_binaries or login_binaries or passwd_binaries or shadowutils_binaries) - output: "User management binary command run outside of container (user=%user.name command=%proc.cmdline)" + condition: spawned_process and not proc.name in (su, sudo) and not container and user_mgmt_binaries and not parent_cron and not proc.pname in (systemd, run-parts) + output: "User management binary command run outside of container (user=%user.name command=%proc.cmdline parent=%proc.pname)" priority: WARNING # (we may need to add additional checks against false positives, see: https://bugs.launchpad.net/ubuntu/+source/rkhunter/+bug/86153) @@ -285,17 +310,17 @@ output: "File created below /dev by untrusted program (user=%user.name command=%proc.cmdline file=%fd.name)" priority: WARNING -# fs-bash is a restricted version of bash suitable for use in curl | sh installers. +# fbash is a small shell script that runs bash, and is suitable for use in curl | fbash installers. - rule: installer_bash_starts_network_server - desc: an attempt by any program that is a child of fs-bash to start listening for network connections - condition: evt.type=listen and proc.aname=fs-bash - output: "Unexpected listen call by a child process of fs-bash (command=%proc.cmdline)" + desc: an attempt by any program that is a child of fbash to start listening for network connections + condition: evt.type=listen and proc.aname=fbash + output: "Unexpected listen call by a child process of fbash (command=%proc.cmdline)" priority: WARNING - rule: installer_bash_starts_session - desc: an attempt by any program that is a child of fs-bash to start a new session (process group) - condition: evt.type=setsid and proc.aname=fs-bash - output: "Unexpected setsid call by a child process of fs-bash (command=%proc.cmdline)" + desc: an attempt by any program that is a child of fbash to start a new session (process group) + condition: evt.type=setsid and proc.aname=fbash + output: "Unexpected setsid call by a child process of fbash (command=%proc.cmdline)" priority: WARNING ########################### diff --git a/test/utils/run_sysdig.sh b/test/utils/run_sysdig.sh new file mode 100644 index 00000000000..9a1a611fde4 --- /dev/null +++ b/test/utils/run_sysdig.sh @@ -0,0 +1,9 @@ +#!/bin/bash + +# Run sysdig excluding all events that aren't used by falco and also +# excluding other high-volume events that aren't essential. This +# results in smaller trace files. + +# The remaining arguments are taken from the command line. 
+ +exec sudo sysdig not evt.type in '(mprotect,brk,mq_timedreceive,mq_receive,mq_timedsend,mq_send,getrusage,procinfo,rt_sigprocmask,rt_sigaction,ioctl,clock_getres,clock_gettime,clock_nanosleep,clock_settime,close,epoll_create,epoll_create1,epoll_ctl,epoll_pwait,epoll_wait,eventfd,fcntl,fcntl64,fstat,fstat64,fstatat64,fstatfs,fstatfs64,futex,getitimer,gettimeofday,ioprio_get,ioprio_set,llseek,lseek,lstat,lstat64,mmap,mmap2,munmap,nanosleep,poll,ppoll,pread,pread64,preadv,procinfo,pselect6,pwrite,pwrite64,pwritev,read,readv,recv,recvfrom,recvmmsg,recvmsg,sched_yield,select,send,sendfile,sendfile64,sendmmsg,sendmsg,sendto,setitimer,settimeofday,shutdown,splice,stat,stat64,statfs,statfs64,switch,tee,timer_create,timer_delete,timerfd_create,timerfd_gettime,timerfd_settime,timer_getoverrun,timer_gettime,timer_settime,wait4,write,writev) and user.name!=ec2-user' $@ From 0f4b3787754507153213e8614a21071c2196c1db Mon Sep 17 00:00:00 2001 From: Mark Stemm Date: Mon, 23 May 2016 16:14:07 -0700 Subject: [PATCH 14/20] Add .gitignore for test directory. Exclude trace directories. --- .gitignore | 5 +++++ 1 file changed, 5 insertions(+) diff --git a/.gitignore b/.gitignore index f392843bf7e..5b202663d24 100644 --- a/.gitignore +++ b/.gitignore @@ -1,4 +1,9 @@ /build* +*~ +test/falco_test.pyc +test/falco_tests.yaml +test/traces-negative +test/traces-positive userspace/falco/lua/re.lua userspace/falco/lua/lpeg.so From 31c87c295ad60ced6d7aeaf15102da44cf65b897 Mon Sep 17 00:00:00 2001 From: Mark Stemm Date: Tue, 31 May 2016 17:41:08 -0700 Subject: [PATCH 15/20] Update fbash rules to use proc.sname. Update fbash rules to use proc.sname instead of proc.aname and to rely on sessions instead of process ancestors. I also wanted to add details on the address/port being listened to but that's blocked on https://github.com/draios/falco/issues/86. Along with this change, there are new positive trace files installer-bash-starts-network-server.scap and installer-bash-starts-session.scap that test these updated rules. --- rules/falco_rules.yaml | 12 ++++++------ 1 file changed, 6 insertions(+), 6 deletions(-) diff --git a/rules/falco_rules.yaml b/rules/falco_rules.yaml index 1d451af991e..8ba8452c17e 100644 --- a/rules/falco_rules.yaml +++ b/rules/falco_rules.yaml @@ -312,15 +312,15 @@ # fbash is a small shell script that runs bash, and is suitable for use in curl | fbash installers. 
- rule: installer_bash_starts_network_server - desc: an attempt by any program that is a child of fbash to start listening for network connections - condition: evt.type=listen and proc.aname=fbash - output: "Unexpected listen call by a child process of fbash (command=%proc.cmdline)" + desc: an attempt by any program that is in a session led by fbash to start listening for network connections + condition: evt.type=listen and proc.sname=fbash + output: "Unexpected listen call by a process in a fbash session (command=%proc.cmdline)" priority: WARNING - rule: installer_bash_starts_session - desc: an attempt by any program that is a child of fbash to start a new session (process group) - condition: evt.type=setsid and proc.aname=fbash - output: "Unexpected setsid call by a child process of fbash (command=%proc.cmdline)" + desc: an attempt by any program that is in a session led by fbash to start a new session + condition: evt.type=setsid and proc.sname=fbash + output: "Unexpected setsid call by a process in fbash session (command=%proc.cmdline)" priority: WARNING ########################### From fc6d775e5b4e3d8b7bb87e60592be4132578bbe5 Mon Sep 17 00:00:00 2001 From: Mark Stemm Date: Wed, 1 Jun 2016 16:01:37 -0700 Subject: [PATCH 16/20] Add additional rules/tests for pipe installers. Add additional rules related to using pipe installers within a fbash session: - Modify write_etc to only trigger if *not* in a fbash session. There's a new rule write_etc_installer which has the same conditions when in a fbash session, logging at INFO severity. - A new rule write_rpm_database warns if any non package management program tries to write below /var/lib/rpm. - Add a new warning if any program below a fbash session tries to open an outbound network connection on ports other than http(s) and dns. - Add INFO level messages when programs in a fbash session try to run package management binaries (rpm,yum,etc) or service management (systemctl,chkconfig,etc) binaries. In order to test these new INFO level rules, make up a third class of trace files traces-info.zip containing trace files that should result in info-level messages. To differentiate warning and info level detection, add an attribute to the multiplex file "detect_level", which is "Warning" for the files in traces-positive and "Info" for the files in traces-info. Modify falco_test.py to look specifically for a non-zero count for the given detect_level. Doing this exposed a bug in the way the level-specific counts were being recorded--they were keeping counts by level name, not number. Fix that. --- .gitignore | 2 ++ rules/falco_rules.yaml | 54 +++++++++++++++++++++++++---- test/falco_test.py | 25 ++++++++++--- test/run_regression_tests.sh | 15 +++++++- userspace/falco/lua/rule_loader.lua | 15 ++++---- 5 files changed, 91 insertions(+), 20 deletions(-) diff --git a/.gitignore b/.gitignore index 5b202663d24..7a98caebb6b 100644 --- a/.gitignore +++ b/.gitignore @@ -4,6 +4,8 @@ test/falco_test.pyc test/falco_tests.yaml test/traces-negative test/traces-positive +test/traces-info +test/job-results userspace/falco/lua/re.lua userspace/falco/lua/lpeg.so diff --git a/rules/falco_rules.yaml b/rules/falco_rules.yaml index 8ba8452c17e..9b5c6097186 100644 --- a/rules/falco_rules.yaml +++ b/rules/falco_rules.yaml @@ -124,7 +124,7 @@ # The truncated dpkg-preconfigu is intentional, process names are # truncated at the sysdig level. 
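The truncated `dpkg-preconfigu` in the macro below is deliberate: the process name reported by sysdig comes from the kernel's `comm` field, which (assuming the usual 16-byte buffer, including the trailing NUL) leaves room for only 15 characters of `dpkg-preconfigure`. A quick sanity check of the spelling:

```python
TASK_COMM_LEN = 16  # kernel comm buffer size, including the trailing NUL (assumed)
print("dpkg-preconfigure"[:TASK_COMM_LEN - 1])  # dpkg-preconfigu
```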
- macro: package_mgmt_binaries - condition: proc.name in (dpkg, dpkg-preconfigu, rpm, yum) + condition: proc.name in (dpkg, dpkg-preconfigu, rpm, rpmkey, yum) # A canonical set of processes that run other programs with different # privileges or as a different user. @@ -141,7 +141,7 @@ condition: proc.name in (sendmail, sendmail-msp, postfix, procmail) - macro: sensitive_files - condition: (fd.name contains /etc/shadow or fd.name = /etc/sudoers or fd.directory = /etc/sudoers.d or fd.directory = /etc/pam.d or fd.name = /etc/pam.conf) + condition: (fd.name contains /etc/shadow or fd.name = /etc/sudoers or fd.directory in (/etc/sudoers.d, /etc/pam.d) or fd.name = /etc/pam.conf) # Indicates that the process is new. Currently detected using time # since process was started, using a threshold of 5 seconds. @@ -194,11 +194,18 @@ priority: WARNING - rule: write_etc - desc: an attempt to write to any file below /etc - condition: evt.dir = < and open_write and not shadowutils_binaries and not sysdigcloud_binaries_parent and not package_mgmt_binaries and etc_dir + desc: an attempt to write to any file below /etc, not in a pipe installer session + condition: evt.dir = < and open_write and not shadowutils_binaries and not sysdigcloud_binaries_parent and not package_mgmt_binaries and etc_dir and not proc.sname=fbash output: "File below /etc opened for writing (user=%user.name command=%proc.cmdline file=%fd.name)" priority: WARNING +# Within a fbash session, the severity is lowered to INFO +- rule: write_etc_installer + desc: an attempt to write to any file below /etc, in a pipe installer session + condition: evt.dir = < and open_write and not shadowutils_binaries and not sysdigcloud_binaries_parent and not package_mgmt_binaries and etc_dir and proc.sname=fbash + output: "File below /etc opened for writing (user=%user.name command=%proc.cmdline file=%fd.name) within pipe installer session" + priority: INFO + - rule: read_sensitive_file_untrusted desc: an attempt to read any sensitive file (e.g. files containing user/password/authentication information). Exceptions are made for known trusted programs. condition: open_read and not user_mgmt_binaries and not userexec_binaries and not proc.name in (iptables, ps, lsb_release, check-new-relea, dumpe2fs, accounts-daemon, bash, sshd) and not cron and sensitive_files @@ -211,6 +218,13 @@ output: "Sensitive file opened for reading by trusted program after startup (user=%user.name command=%proc.cmdline file=%fd.name)" priority: WARNING +# Only let rpm-related programs write to the rpm database +- rule: write_rpm_database + desc: an attempt to write to the rpm database by any non-rpm related program + condition: open_write and not proc.name in (rpm,rpmkey,yum) and fd.directory=/var/lib/rpm + output: "Rpm database opened for writing by a non-rpm program (command=%proc.cmdline file=%fd.name)" + priority: WARNING + - rule: db_program_spawned_process desc: a database-server related program spawned a new process other than itself. This shouldn\'t occur and is a follow on from some SQL injection attacks. condition: db_server_binaries_parent and not db_server_binaries and spawned_process @@ -312,17 +326,45 @@ # fbash is a small shell script that runs bash, and is suitable for use in curl | fbash installers. 
- rule: installer_bash_starts_network_server - desc: an attempt by any program that is in a session led by fbash to start listening for network connections + desc: an attempt by a program in a pipe installer session to start listening for network connections condition: evt.type=listen and proc.sname=fbash output: "Unexpected listen call by a process in a fbash session (command=%proc.cmdline)" priority: WARNING - rule: installer_bash_starts_session - desc: an attempt by any program that is in a session led by fbash to start a new session + desc: an attempt by a program in a pipe installer session to start a new session condition: evt.type=setsid and proc.sname=fbash output: "Unexpected setsid call by a process in fbash session (command=%proc.cmdline)" priority: WARNING +- rule: installer_bash_non_https_connection + desc: an attempt by a program in a pipe installer session to make an outgoing connection on a non-http(s) port + condition: outbound and not fd.sport in (80, 443, 53) and proc.sname=fbash + output: "Outbound connection on non-http(s) port by a process in a fbash session (command=%proc.cmdline connection=%fd.name)" + priority: WARNING + +# It'd be nice if we could warn when processes in a fbash session try +# to download from any nonstandard location? This is probably blocked +# on https://github.com/draios/falco/issues/88 though. + +# Notice when processes try to run chkconfig/systemctl.... to install a service. +# Note: this is not a WARNING, as you'd expect some service management +# as a part of doing the installation. +- rule: installer_bash_manages_service + desc: an attempt by a program in a pipe installer session to manage a system service (systemd/chkconfig) + condition: evt.type=execve and proc.name in (chkconfig, systemctl) and proc.sname=fbash + output: "Service management program run by process in a fbash session (command=%proc.cmdline)" + priority: INFO + +# Notice when processes try to run any package management binary within a fbash session. +# Note: this is not a WARNING, as you'd expect some package management +# as a part of doing the installation +- rule: installer_bash_runs_pkgmgmt + desc: an attempt by a program in a pipe installer session to run a package management binary + condition: evt.type=execve and package_mgmt_binaries and proc.sname=fbash + output: "Package management program run by process in a fbash session (command=%proc.cmdline)" + priority: INFO + ########################### # Application-Related Rules ########################### diff --git a/test/falco_test.py b/test/falco_test.py index 72875c1c063..a2ff0847080 100644 --- a/test/falco_test.py +++ b/test/falco_test.py @@ -18,6 +18,9 @@ def setUp(self): self.should_detect = self.params.get('detect', '*') self.trace_file = self.params.get('trace_file', '*') + if self.should_detect: + self.detect_level = self.params.get('detect_level', '*') + # Doing this in 2 steps instead of simply using # module_is_loaded to avoid logging lsmod output to the log. lsmod_output = process.system_output("lsmod", verbose=False) @@ -44,17 +47,29 @@ def test(self): cmd, res.exit_status)) # Get the number of events detected. 
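The check that follows keys off falco's end-of-run summary, which contains an `Events detected: N` line plus per-severity counts (see the `print_stats` changes to `rule_loader.lua` further down). A consolidated sketch of the extraction, run against a made-up summary string:

```python
import re

# Made-up falco summary output; the real text comes from print_stats().
sample = (
    "Events detected: 12\n"
    "Rule counts by severity:\n"
    "   Warning: 10\n"
    "   Informational: 2\n"
)

total = int(re.search(r"Events detected: (\d+)", sample).group(1))

detect_level = "Warning"  # supplied per test case via the multiplex file
match = re.search(r"{}: (\d+)".format(detect_level), sample)
per_level = int(match.group(1)) if match else 0

assert total > 0 and per_level > 0
print(total, per_level)
```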
- res = re.search('Events detected: (\d+)', res.stdout) - if res is None: + match = re.search('Events detected: (\d+)', res.stdout) + if match is None: self.fail("Could not find a line 'Events detected: ' in falco output") - events_detected = int(res.group(1)) + events_detected = int(match.group(1)) if not self.should_detect and events_detected > 0: self.fail("Detected {} events when should have detected none".format(events_detected)) - if self.should_detect and events_detected == 0: - self.fail("Detected {} events when should have detected > 0".format(events_detected)) + if self.should_detect: + if events_detected == 0: + self.fail("Detected {} events when should have detected > 0".format(events_detected)) + + level_line = '{}: (\d+)'.format(self.detect_level) + match = re.search(level_line, res.stdout) + + if match is None: + self.fail("Could not find a line '{}: ' in falco output".format(self.detect_level)) + + events_detected = int(match.group(1)) + + if not events_detected > 0: + self.fail("Detected {} events at level {} when should have detected > 0".format(events_detected, self.detect_level)) pass diff --git a/test/run_regression_tests.sh b/test/run_regression_tests.sh index 9f6b2a28863..1057ab61f1d 100755 --- a/test/run_regression_tests.sh +++ b/test/run_regression_tests.sh @@ -5,7 +5,8 @@ SCRIPTDIR=$(dirname $SCRIPT) MULT_FILE=$SCRIPTDIR/falco_tests.yaml function download_trace_files() { - for TRACE in traces-positive traces-negative ; do + for TRACE in traces-positive traces-negative traces-info ; do + rm -rf $SCRIPTDIR/$TRACE curl -so $SCRIPTDIR/$TRACE.zip https://s3.amazonaws.com/download.draios.com/falco-tests/$TRACE.zip && unzip -d $SCRIPTDIR $SCRIPTDIR/$TRACE.zip && rm -rf $SCRIPTDIR/$TRACE.zip @@ -21,6 +22,7 @@ function prepare_multiplex_file() { cat << EOF >> $MULT_FILE $NAME: detect: True + detect_level: Warning trace_file: $trace EOF done @@ -35,6 +37,17 @@ EOF EOF done + for trace in $SCRIPTDIR/traces-info/*.scap ; do + [ -e "$trace" ] || continue + NAME=`basename $trace .scap` + cat << EOF >> $MULT_FILE + $NAME: + detect: True + detect_level: Informational + trace_file: $trace +EOF + done + echo "Contents of $MULT_FILE:" cat $MULT_FILE } diff --git a/userspace/falco/lua/rule_loader.lua b/userspace/falco/lua/rule_loader.lua index 7a9774a7666..6f07e701a7d 100644 --- a/userspace/falco/lua/rule_loader.lua +++ b/userspace/falco/lua/rule_loader.lua @@ -102,14 +102,13 @@ function set_output(output_format, state) end local function priority(s) - valid_levels = {"emergency", "alert", "critical", "error", "warning", "notice", "informational", "debug"} s = string.lower(s) - for i,v in ipairs(valid_levels) do - if (string.find(v, "^"..s)) then + for i,v in ipairs(output.levels) do + if (string.find(string.lower(v), "^"..s)) then return i - 1 -- (syslog levels start at 0, lua indices start at 1) end end - error("Invalid severity level: "..level) + error("Invalid severity level: "..s) end -- Note that the rules_by_name and rules_by_idx refer to the same rule @@ -232,8 +231,8 @@ end local rule_output_counts = {total=0, by_level={}, by_name={}} -for idx, level in ipairs(output.levels) do - rule_output_counts[level] = 0 +for idx=0,table.getn(output.levels)-1,1 do + rule_output_counts.by_level[idx] = 0 end function on_event(evt_, rule_id) @@ -265,8 +264,8 @@ function print_stats() print("Rule counts by severity:") for idx, level in ipairs(output.levels) do -- To keep the output concise, we only print 0 counts for error, warning, and info levels - if rule_output_counts[level] > 0 or 
level == "Error" or level == "Warning" or level == "Informational" then - print (" "..level..": "..rule_output_counts[level]) + if rule_output_counts.by_level[idx-1] > 0 or level == "Error" or level == "Warning" or level == "Informational" then + print (" "..level..": "..rule_output_counts.by_level[idx-1]) end end From 23322700b4542ad57a2ff3196d28cadd80cf948f Mon Sep 17 00:00:00 2001 From: Mark Stemm Date: Tue, 7 Jun 2016 10:18:16 -0700 Subject: [PATCH 17/20] Migrate README contents to wiki. Split up the content from the README into individual pages in the wiki--that's in a separate change. --- README.md | 320 +++++------------------------------------------------- 1 file changed, 26 insertions(+), 294 deletions(-) diff --git a/README.md b/README.md index 9520e19f9a6..c91c7590682 100644 --- a/README.md +++ b/README.md @@ -5,303 +5,31 @@ **v0.1.0** Read the [change log](https://github.com/draios/falco/blob/dev/CHANGELOG.md) -This is the initial falco release. Note that much of falco's code comes from -[sysdig](https://github.com/draios/sysdig), so overall stability is very good -for an early release. On the other hand performance is still a work in -progress. On busy hosts and/or with large rule sets, you may see the current -version of falco using high CPU. Expect big improvements in coming releases. - Dev Branch: [![Build Status](https://travis-ci.org/draios/falco.svg?branch=dev)](https://travis-ci.org/draios/falco)
Master Branch: [![Build Status](https://travis-ci.org/draios/falco.svg?branch=master)](https://travis-ci.org/draios/falco) -####Table of Contents - -- [Overview](#overview) -- [Rules](#rules) -- [Configuration](#configuration) -- [Installation](#installation) -- [Running Falco](#running-falco) - - ## Overview Sysdig Falco is a behavioral activity monitor designed to detect anomalous activity in your applications. Powered by sysdig’s system call capture infrastructure, falco lets you continuously monitor and detect container, application, host, and network activity... all in one place, from one source of data, with one set of rules. - #### What kind of behaviors can Falco detect? Falco can detect and alert on any behavior that involves making Linux system calls. Thanks to Sysdig's core decoding and state tracking functionality, falco alerts can be triggered by the use of specific system calls, their arguments, and by properties of the calling process. For example, you can easily detect things like: + - A shell is run inside a container - A server process spawns a child process of an unexpected type -- Unexpected read of a sensitive file (like `/etc/passwd`) +- Unexpected read of a sensitive file (like `/etc/shadow`) - A non-device file is written to `/dev` - A standard system binary (like `ls`) makes an outbound network connection -#### How you use it - -Falco is deployed as a long-running daemon. You can install it as a debian/rpm -package on a regular host or container host, or you can deploy it as a -container. - -Falco is configured via a rules file defining the behaviors and events to -watch for, and a general configuration file. Rules are expressed in a -high-level, human-readable language. We've provided a sample rule file -`./rules/falco_rules.yaml` as a starting point - you can (and will likely -want!) to adapt it to your environment. - -When developing rules, one helpful feature is falco's ability to read trace -files saved by sysdig. This allows you to "record" the offending behavior -once, and replay it with falco as many times as needed while tweaking your -rules. - -Once deployed, falco uses the Sysdig kernel module and userspace libraries to -watch for any events matching one of the conditions defined in the rule -file. If a matching event occurs, a notification is written to the the -configured output(s). - - -## Rules - -_Call for contributions: If you come up with additional rules which you'd like to see in the core repository - PR welcome!_ - -A falco rules file is comprised of two kinds of elements: rules and macro definitions. Macros are simply definitions that can be re-used inside rules and other macros, providing a way to factor out and name common patterns. - -#### Conditions - -The key part of a rule is the _condition_ field. A condition is simply a boolean predicate on sysdig events. -Conditions are expressed using the Sysdig [filter syntax](http://www.sysdig.org/wiki/sysdig-user-guide/#filtering). Any Sysdig filter is a valid falco condition (with the caveat of certain excluded system calls, discussed below). In addition, falco expressions can contain _macro_ terms, which are not present in Sysdig syntax. - -Here's an example of a condition that alerts whenever a bash shell is run inside a container: - -`container.id != host and proc.name = bash` - -The first clause checks that the event happened in a container (sysdig events have a `container` field that is equal to "host" if the event happened on a regular host). 
The second clause checks that the process name is `bash`. Note that this condition does not even include a clause with system call! It only uses event metadata. As such, if a bash shell does start up in a container, falco will output events for every syscall that is done by that shell. - -_Tip: If you're new to sysdig and unsure what fields are available, run `sysdig -l` to see the list of supported fields._ - -#### Rules - -Along with a condition, each rule includes the following fields: - -* _rule_: a short unique name for the rule -* _desc_: a longer description of what the rule detects -* _output_ and _priority_: The output format specifies the message that should be output if a matching event occurs, and follows the Sysdig [output format syntax](http://www.sysdig.org/wiki/sysdig-user-guide/#output-formatting). The priority is a case-insensitive representation of severity and should be one of "emergency", "alert", "critical", "error", "warning", "notice", "informational", or "debug". - -A complete rule using the above condition might be: - -```yaml -- condition: container.id != host and proc.name = bash - output: "shell in a container (%user.name %container.id %proc.name %evt.dir %evt.type %evt.args %fd.name)" - priority: WARNING -``` - -#### Macros -As noted above, macros provide a way to define common sub-portions of rules in a reusable way. As a very simple example, if we had many rules for events happening in containers, we might to define a `in_container` macro: - -```yaml -- macro: in_container - condition: container.id != host -``` - -With this macro defined, we can then rewrite the above rule's condition as `in_container and proc.name = bash`. - -For many more examples of rules and macros, please take a look at the accompanying [rules file](rules/falco_rules.yaml). - - -#### Ignored system calls - -For performance reasons, some system calls are currently discarded before falco processing. The current list is: -`clock_getres,clock_gettime,clock_nanosleep,clock_settime,close,epoll_create,epoll_create1,epoll_ctl,epoll_pwait,epoll_wait,eventfd,fcntl,fcntl64,fstat,fstat64,fstatat64,fstatfs,fstatfs64,futex,getitimer,gettimeofday,ioprio_get,ioprio_set,llseek,lseek,lstat,lstat64,mmap,mmap2,munmap,nanosleep,poll,ppoll,pread64,preadv,procinfo,pselect6,pwrite64,pwritev,read,readv,recv,recvfrom,recvmmsg,recvmsg,sched_yield,select,send,sendfile,sendfile64,sendmmsg,sendmsg,sendto,setitimer,settimeofday,shutdown,splice,stat,stat64,statfs,statfs64,switch,tee,timer_create,timer_delete,timerfd_create,timerfd_gettime,timerfd_settime,timer_getoverrun,timer_gettime,timer_settime,wait4,write,writev` - - -## Configuration - -General configuration is done via a separate yaml file. The -[config file](falco.yaml) in this repo has comments describing the various -configuration options. - - -## Installation -#### Scripted install - -To install falco automatically in one step, simply run the following command as root or with sudo: - -`curl -s https://s3.amazonaws.com/download.draios.com/stable/install-falco | sudo bash` - -#### Package install - -##### RHEL - -- Trust the Draios GPG key and configure the yum repository -``` -rpm --import https://s3.amazonaws.com/download.draios.com/DRAIOS-GPG-KEY.public -curl -s -o /etc/yum.repos.d/draios.repo http://download.draios.com/stable/rpm/draios.repo -``` -- Install the EPEL repository - -Note: The following command is required only if DKMS is not available in the distribution. 
You can verify if DKMS is available with yum list dkms - -`rpm -i http://mirror.us.leaseweb.net/epel/6/i386/epel-release-6-8.noarch.rpm` - -- Install kernel headers - -Warning: The following command might not work with any kernel. Make sure to customize the name of the package properly - -`yum -y install kernel-devel-$(uname -r)` - -- Install falco - -`yum -y install falco` - - -To uninstall, just do `yum erase falco`. - -##### Debian - -- Trust the Draios GPG key, configure the apt repository, and update the package list - -``` -curl -s https://s3.amazonaws.com/download.draios.com/DRAIOS-GPG-KEY.public | apt-key add - -curl -s -o /etc/apt/sources.list.d/draios.list http://download.draios.com/stable/deb/draios.list -apt-get update -``` - -- Install kernel headers - -Warning: The following command might not work with any kernel. Make sure to customize the name of the package properly - -`apt-get -y install linux-headers-$(uname -r)` - -- Install falco - -`apt-get -y install falco` - -To uninstall, just do `apt-get remove falco`. - - -##### Container install (general) - -If you have full control of your host operating system, then installing falco using the normal installation method is the recommended best practice. This method allows full visibility into all containers on the host OS. No changes to the standard automatic/manual installation procedures are required. - -However, falco can also run inside a Docker container. To guarantee a smooth deployment, the kernel headers must be installed in the host operating system, before running Falco. - -This can usually be done on Debian-like distributions with: -`apt-get -y install linux-headers-$(uname -r)` - -Or, on RHEL-like distributions: -`yum -y install kernel-devel-$(uname -r)` - -Falco can then be run with: - -``` -docker pull sysdig/falco -docker run -i -t --name falco --privileged -v /var/run/docker.sock:/host/var/run/docker.sock -v /dev:/host/dev -v /proc:/host/proc:ro -v /boot:/host/boot:ro -v /lib/modules:/host/lib/modules:ro -v /usr:/host/usr:ro sysdig/falco -``` - -##### Container install (CoreOS) - -The recommended way to run falco on CoreOS is inside of its own Docker container using the install commands in the paragraph above. This method allows full visibility into all containers on the host OS. - -This method is automatically updated, includes some nice features such as automatic setup and bash completion, and is a generic approach that can be used on other distributions outside CoreOS as well. - -However, some users may prefer to run falco in the CoreOS toolbox. While not the recommended method, this can be achieved by installing Falco inside the toolbox using the normal installation method, and then manually running the sysdig-probe-loader script: - -``` -toolbox --bind=/dev --bind=/var/run/docker.sock -curl -s https://s3.amazonaws.com/download.draios.com/stable/install-falco | bash -sysdig-probe-loader -``` - - - -## Running Falco - -Falco is intended to be run as a service. But for experimentation and designing/testing rulesets, you will likely want to run it manually from the command-line. 
- -#### Running falco as a service (after installing package) - -`service falco start` - -#### Running falco in a container - -`docker run -i -t --name falco --privileged -v /var/run/docker.sock:/host/var/run/docker.sock -v /dev:/host/dev -v /proc:/host/proc:ro -v /boot:/host/boot:ro -v /lib/modules:/host/lib/modules:ro -v /usr:/host/usr:ro sysdig/falco` - -#### Running falco manually - -Do `falco --help` to see the command-line options available when running manually. - - -## Building and running falco locally from source -Building falco requires having `cmake` and `g++` installed. - - -#### Building falco -Clone this repo in a directory that also contains the sysdig source repo. The result should be something like: - -``` -22:50 vagrant@vagrant-ubuntu-trusty-64:/sysdig -$ pwd -/sysdig -22:50 vagrant@vagrant-ubuntu-trusty-64:/sysdig -$ ls -l -total 20 -drwxr-xr-x 1 vagrant vagrant 238 Feb 21 21:44 falco -drwxr-xr-x 1 vagrant vagrant 646 Feb 21 17:41 sysdig -``` - -create a build dir, then setup cmake and run make from that dir: - -``` -$ mkdir build -$ cd build -$ cmake .. -$ make -``` - -as a result, you should have a falco executable in `build/userspace/falco/falco`. - -#### Load latest sysdig kernel module - -If you have a binary version of sysdig installed, an older sysdig kernel module may already be loaded. To ensure you are using the latest version, you should unload any existing sysdig kernel module and load the locally built version. - -Unload any existing kernel module via: - -`$ rmmod sysdig_probe` - -To load the locally built version, assuming you are in the `build` dir, use: - -`$ insmod driver/sysdig-probe.ko` - -#### Running falco - -Assuming you are in the `build` dir, you can run falco as: - -`$ sudo ./userspace/falco/falco -c ../falco.yaml -r ../rules/falco_rules.yaml` - -Or instead you can try using some of the simpler rules files in `rules`. Or to get started, try creating a file with this: - -Create a file with some [Falco rules](Rule-syntax-and-design). For example: -``` -- macro: open_write - condition: > - (evt.type=open or evt.type=openat) and - fd.typechar='f' and - (evt.arg.flags contains O_WRONLY or - evt.arg.flags contains O_RDWR or - evt.arg.flags contains O_CREAT or - evt.arg.flags contains O_TRUNC) - -- macro: bin_dir - condition: fd.directory in (/bin, /sbin, /usr/bin, /usr/sbin) - -- rule: write_binary_dir - desc: an attempt to write to any file below a set of binary directories - condition: evt.dir = > and open_write and bin_dir - output: "File below a known binary directory opened for writing (user=%user.name command=%proc.cmdline file=%fd.name)" - priority: WARNING - -``` +This is the initial falco release. Note that much of falco's code comes from +[sysdig](https://github.com/draios/sysdig), so overall stability is very good +for an early release. On the other hand performance is still a work in +progress. On busy hosts and/or with large rule sets, you may see the current +version of falco using high CPU. Expect big improvements in coming releases. -And you will see an output event for any interactive process that touches a file with "sysdig" or ".txt" in its name! +Documentation +--- +[Visit the wiki] (https://github.com/draios/falco/wiki) for full documentation on falco. Join the Community --- @@ -322,22 +50,26 @@ Contributor License Agreements We’ve modeled our CLA off of industry standards, such as [the CLA used by Kubernetes](https://github.com/kubernetes/kubernetes/blob/master/CONTRIBUTING.md). 
Note that this agreement is not a transfer of copyright ownership, this simply is a license agreement for contributions, intended to clarify the intellectual property license granted with contributions from any person or entity. It is for your protection as a contributor as well as the protection of falco; it does not change your rights to use your own contributions for any other purpose. For some background on why contributor license agreements are necessary, you can read FAQs from many other open source projects: - - [Django’s excellent CLA FAQ](https://www.djangoproject.com/foundation/cla/faq/) - - [A well-written chapter from Karl Fogel’s Producing Open Source Software on CLAs](http://producingoss.com/en/copyright-assignment.html) - - [The Wikipedia article on CLAs](http://en.wikipedia.org/wiki/Contributor_license_agreement) - As always, we are grateful for your past and present contributions to falco. +- [Django’s excellent CLA FAQ](https://www.djangoproject.com/foundation/cla/faq/) +- [A well-written chapter from Karl Fogel’s Producing Open Source Software on CLAs](http://producingoss.com/en/copyright-assignment.html) +- [The Wikipedia article on CLAs](http://en.wikipedia.org/wiki/Contributor_license_agreement) - ###What do I need to do in order to contribute code? - **Individual contributions**: Individuals who wish to make contributions must review the [Individual Contributor License Agreement](./cla/falco_contributor_agreement.txt) and indicate agreement by adding the following line to every GIT commit message: +As always, we are grateful for your past and present contributions to falco. - falco-CLA-1.0-signed-off-by: Joe Smith +###What do I need to do in order to contribute code? - Use your real name; pseudonyms or anonymous contributions are not allowed. +**Individual contributions**: Individuals who wish to make contributions must review the [Individual Contributor License Agreement](./cla/falco_contributor_agreement.txt) and indicate agreement by adding the following line to every GIT commit message: - **Corporate contributions**: Employees of corporations, members of LLCs or LLPs, or others acting on behalf of a contributing entity, must review the [Corporate Contributor License Agreement](./cla/falco_corp_contributor_agreement.txt), must be an authorized representative of the contributing entity, and indicate agreement to it on behalf of the contributing entity by adding the following lines to every GIT commit message: +falco-CLA-1.0-signed-off-by: Joe Smith - falco-CLA-1.0-contributing-entity: Full Legal Name of Entity - falco-CLA-1.0-signed-off-by: Joe Smith +Use your real name; pseudonyms or anonymous contributions are not allowed. + +**Corporate contributions**: Employees of corporations, members of LLCs or LLPs, or others acting on behalf of a contributing entity, must review the [Corporate Contributor License Agreement](./cla/falco_corp_contributor_agreement.txt), must be an authorized representative of the contributing entity, and indicate agreement to it on behalf of the contributing entity by adding the following lines to every GIT commit message: + +``` + falco-CLA-1.0-contributing-entity: Full Legal Name of Entity + falco-CLA-1.0-signed-off-by: Joe Smith +``` - Use a real name of a natural person who is an authorized representative of the contributing entity; pseudonyms or anonymous contributions are not allowed. 
+Use a real name of a natural person who is an authorized representative of the contributing entity; pseudonyms or anonymous contributions are not allowed. From 52a7c775960fb1752c47c49db2257eaee8f308da Mon Sep 17 00:00:00 2001 From: Mark Stemm Date: Mon, 6 Jun 2016 16:52:40 -0700 Subject: [PATCH 18/20] Add more useful json output. Instead of using sysdig's json output, which only contains the fields from the format string without any formatting text, use the string output to build a json object containing the format string, rule name, severity, and the event time (converted to a json-friendly ISO8601). This fixes https://github.com/draios/falco/issues/82. --- userspace/falco/falco.cpp | 13 +------- userspace/falco/formats.cpp | 48 +++++++++++++++++++++++++++-- userspace/falco/formats.h | 2 +- userspace/falco/lua/output.lua | 16 +++++----- userspace/falco/lua/rule_loader.lua | 2 +- 5 files changed, 56 insertions(+), 25 deletions(-) diff --git a/userspace/falco/falco.cpp b/userspace/falco/falco.cpp index b73f33a3af1..d269a4737f2 100644 --- a/userspace/falco/falco.cpp +++ b/userspace/falco/falco.cpp @@ -242,7 +242,6 @@ int falco_init(int argc, char **argv) sinsp* inspector = NULL; falco_rules* rules = NULL; int op; - sinsp_evt::param_fmt event_buffer_format; int long_index = 0; string lua_main_filename; string scap_filename; @@ -391,7 +390,7 @@ int falco_init(int argc, char **argv) rules = new falco_rules(inspector, ls, lua_main_filename); - falco_formats::init(inspector, ls); + falco_formats::init(inspector, ls, config.m_json_output); falco_fields::init(inspector, ls); falco_logger::init(ls); @@ -416,16 +415,6 @@ int falco_init(int argc, char **argv) inspector->set_hostname_and_port_resolution_mode(false); - if (config.m_json_output) - { - event_buffer_format = sinsp_evt::PF_JSON; - } - else - { - event_buffer_format = sinsp_evt::PF_NORMAL; - } - inspector->set_buffer_format(event_buffer_format); - for(std::vector::iterator it = config.m_outputs.begin(); it != config.m_outputs.end(); ++it) { add_output(ls, *it); diff --git a/userspace/falco/formats.cpp b/userspace/falco/formats.cpp index 0ff87068c45..142df600003 100644 --- a/userspace/falco/formats.cpp +++ b/userspace/falco/formats.cpp @@ -1,8 +1,11 @@ +#include + #include "formats.h" #include "logger.h" sinsp* falco_formats::s_inspector = NULL; +bool s_json_output = false; const static struct luaL_reg ll_falco [] = { @@ -11,9 +14,10 @@ const static struct luaL_reg ll_falco [] = {NULL,NULL} }; -void falco_formats::init(sinsp* inspector, lua_State *ls) +void falco_formats::init(sinsp* inspector, lua_State *ls, bool json_output) { s_inspector = inspector; + s_json_output = json_output; luaL_openlib(ls, "falco", ll_falco, 0); } @@ -42,15 +46,53 @@ int falco_formats::format_event (lua_State *ls) { string line; - if (!lua_islightuserdata(ls, -1) || !lua_islightuserdata(ls, -2)) { + if (!lua_islightuserdata(ls, -1) || + !lua_isstring(ls, -2) || + !lua_isstring(ls, -3) || + !lua_islightuserdata(ls, -4)) { falco_logger::log(LOG_ERR, "Invalid arguments passed to format_event()\n"); throw sinsp_exception("format_event error"); } sinsp_evt* evt = (sinsp_evt*)lua_topointer(ls, 1); - sinsp_evt_formatter* formatter = (sinsp_evt_formatter*)lua_topointer(ls, 2); + const char *rule = (char *) lua_tostring(ls, 2); + const char *level = (char *) lua_tostring(ls, 3); + sinsp_evt_formatter* formatter = (sinsp_evt_formatter*)lua_topointer(ls, 4); formatter->tostring(evt, &line); + // For JSON output, the formatter returned just the output + // string 
containing the format text and values. Use this to + // build a more detailed object containing the event time, + // rule, severity, full output, and fields. + if (s_json_output) { + Json::Value event; + Json::FastWriter writer; + + // Convert the time-as-nanoseconds to a more json-friendly ISO8601. + time_t evttime = evt->get_ts()/1000000000; + char time_sec[20]; // sizeof "YYYY-MM-DDTHH:MM:SS" + char time_ns[12]; // sizeof ".sssssssssZ" + string iso8601evttime; + + strftime(time_sec, sizeof(time_sec), "%FT%T", gmtime(&evttime)); + snprintf(time_ns, sizeof(time_ns), ".%09luZ", evt->get_ts() % 1000000000); + iso8601evttime = time_sec; + iso8601evttime += time_ns; + event["time"] = iso8601evttime; + event["rule"] = rule; + event["priority"] = level; + event["output"] = line; + + line = writer.write(event); + + // Json::FastWriter may add a trailing newline. If it + // does, remove it. + if (line[line.length()-1] == '\n') + { + line.resize(line.length()-1); + } + } + lua_pushstring(ls, line.c_str()); return 1; } diff --git a/userspace/falco/formats.h b/userspace/falco/formats.h index 73f69b0de57..6f369bf3e48 100644 --- a/userspace/falco/formats.h +++ b/userspace/falco/formats.h @@ -13,7 +13,7 @@ class sinsp_evt_formatter; class falco_formats { public: - static void init(sinsp* inspector, lua_State *ls); + static void init(sinsp* inspector, lua_State *ls, bool json_output); // formatter = falco.formatter(format_string) static int formatter(lua_State *ls); diff --git a/userspace/falco/lua/output.lua b/userspace/falco/lua/output.lua index 0bef1712ab3..245f5cb4891 100644 --- a/userspace/falco/lua/output.lua +++ b/userspace/falco/lua/output.lua @@ -6,10 +6,10 @@ mod.levels = levels local outputs = {} -function mod.stdout(evt, level, format) +function mod.stdout(evt, rule, level, format) format = "*%evt.time: "..levels[level+1].." "..format formatter = falco.formatter(format) - msg = falco.format_event(evt, formatter) + msg = falco.format_event(evt, rule, levels[level+1], formatter) print (msg) end @@ -26,26 +26,26 @@ function mod.file_validate(options) end -function mod.file(evt, level, format, options) +function mod.file(evt, rule, level, format, options) format = "%evt.time: "..levels[level+1].." 
"..format formatter = falco.formatter(format) - msg = falco.format_event(evt, formatter) + msg = falco.format_event(evt, rule, levels[level+1], formatter) file = io.open(options.filename, "a+") file:write(msg, "\n") file:close() end -function mod.syslog(evt, level, format) +function mod.syslog(evt, rule, level, format) formatter = falco.formatter(format) - msg = falco.format_event(evt, formatter) + msg = falco.format_event(evt, rule, levels[level+1], formatter) falco.syslog(level, msg) end -function mod.event(event, level, format) +function mod.event(event, rule, level, format) for index,o in ipairs(outputs) do - o.output(event, level, format, o.config) + o.output(event, rule, level, format, o.config) end end diff --git a/userspace/falco/lua/rule_loader.lua b/userspace/falco/lua/rule_loader.lua index 6f07e701a7d..8bb55edf864 100644 --- a/userspace/falco/lua/rule_loader.lua +++ b/userspace/falco/lua/rule_loader.lua @@ -256,7 +256,7 @@ function on_event(evt_, rule_id) rule_output_counts.by_name[rule.rule] = rule_output_counts.by_name[rule.rule] + 1 end - output.event(evt_, rule.level, rule.output) + output.event(evt_, rule.rule, rule.level, rule.output) end function print_stats() From 995e61210e1e6f4e36ee070d8a10aa1e52b3b2c9 Mon Sep 17 00:00:00 2001 From: Mark Stemm Date: Tue, 7 Jun 2016 13:35:27 -0700 Subject: [PATCH 19/20] Add regression tests for json output. Modify falco_test.py to look for a boolean multiplex attribute 'json_output'. If true, examine the lines of the output and for any line that begins with '{', parse it as json and ensure it has the 4 attributes we expect. Modify run_regression_tests to have a utility function prepare_multiplex_fileset that does the work of looping over files in a directory, along with detect, level, and json output arguments. The appropriate multiplex attributes are added for each file. Use that utility function to test json output for the positive and informational directories along with non-json output. The negative directory is only tested once. --- test/falco_test.py | 15 ++++++++++-- test/run_regression_tests.sh | 45 ++++++++++++++++-------------------- 2 files changed, 33 insertions(+), 27 deletions(-) diff --git a/test/falco_test.py b/test/falco_test.py index a2ff0847080..adb35767598 100644 --- a/test/falco_test.py +++ b/test/falco_test.py @@ -2,6 +2,7 @@ import os import re +import json from avocado import Test from avocado.utils import process @@ -17,6 +18,7 @@ def setUp(self): self.should_detect = self.params.get('detect', '*') self.trace_file = self.params.get('trace_file', '*') + self.json_output = self.params.get('json_output', '*') if self.should_detect: self.detect_level = self.params.get('detect_level', '*') @@ -35,8 +37,8 @@ def test(self): self.log.info("Trace file %s", self.trace_file) # Run the provided trace file though falco - cmd = '{}/userspace/falco/falco -r {}/../rules/falco_rules.yaml -c {}/../falco.yaml -e {}'.format( - self.falcodir, self.falcodir, self.falcodir, self.trace_file) + cmd = '{}/userspace/falco/falco -r {}/../rules/falco_rules.yaml -c {}/../falco.yaml -e {} -o json_output={}'.format( + self.falcodir, self.falcodir, self.falcodir, self.trace_file, self.json_output) self.falco_proc = process.SubProcess(cmd) @@ -71,6 +73,15 @@ def test(self): if not events_detected > 0: self.fail("Detected {} events at level {} when should have detected > 0".format(events_detected, self.detect_level)) + if self.json_output: + # Just verify that any lines starting with '{' are valid json objects. 
+ # Doesn't do any deep inspection of the contents. + for line in res.stdout.splitlines(): + if line.startswith('{'): + obj = json.loads(line) + for attr in ['time', 'rule', 'priority', 'output']: + if not attr in obj: + self.fail("Falco JSON object {} does not contain property \"{}\"".format(line, attr)) pass diff --git a/test/run_regression_tests.sh b/test/run_regression_tests.sh index 1057ab61f1d..b46646a1653 100755 --- a/test/run_regression_tests.sh +++ b/test/run_regression_tests.sh @@ -13,40 +13,35 @@ function download_trace_files() { done } -function prepare_multiplex_file() { - echo "trace_files: !mux" > $MULT_FILE +function prepare_multiplex_fileset() { - for trace in $SCRIPTDIR/traces-positive/*.scap ; do - [ -e "$trace" ] || continue - NAME=`basename $trace .scap` - cat << EOF >> $MULT_FILE - $NAME: - detect: True - detect_level: Warning - trace_file: $trace -EOF - done + dir=$1 + detect=$2 + detect_level=$3 + json_output=$4 - for trace in $SCRIPTDIR/traces-negative/*.scap ; do + for trace in $SCRIPTDIR/$dir/*.scap ; do [ -e "$trace" ] || continue NAME=`basename $trace .scap` cat << EOF >> $MULT_FILE - $NAME: - detect: False + $NAME-detect-$detect-json-$json_output: + detect: $detect + detect_level: $detect_level trace_file: $trace + json_output: $json_output EOF done +} - for trace in $SCRIPTDIR/traces-info/*.scap ; do - [ -e "$trace" ] || continue - NAME=`basename $trace .scap` - cat << EOF >> $MULT_FILE - $NAME: - detect: True - detect_level: Informational - trace_file: $trace -EOF - done +function prepare_multiplex_file() { + echo "trace_files: !mux" > $MULT_FILE + + prepare_multiplex_fileset traces-positive True Warning False + prepare_multiplex_fileset traces-negative False Warning True + prepare_multiplex_fileset traces-info True Informational False + + prepare_multiplex_fileset traces-positive True Warning True + prepare_multiplex_fileset traces-info True Informational True echo "Contents of $MULT_FILE:" cat $MULT_FILE From b8cd89757a03f8e297ea218487ee5685c0ed2857 Mon Sep 17 00:00:00 2001 From: Mark Stemm Date: Tue, 7 Jun 2016 14:20:06 -0700 Subject: [PATCH 20/20] Add release notes for 0.2.0. Noting changes since 0.1.0. --- CHANGELOG.md | 21 +++++++++++++++++++++ README.md | 2 +- 2 files changed, 22 insertions(+), 1 deletion(-) diff --git a/CHANGELOG.md b/CHANGELOG.md index 3ecd6b72579..af721f54084 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -2,6 +2,27 @@ This file documents all notable changes to Falco. The release numbering uses [semantic versioning](http://semver.org). +## v0.2.0 + +Released 2016-06-09 + +For full handling of setsid system calls and session id tracking using `proc.sname`, falco requires a sysdig version >= 0.10.0. + +### Major Changes + +- Add TravisCI regression tests. Testing involves a variety of positive, negative, and informational trace files with both plain and json output. [[#76](https://github.com/draios/falco/pull/76)] [[#83](https://github.com/draios/falco/pull/83)] +- Fairly big rework of ruleset to improve coverage, reduce false positives, and handle installation environments effectively [[#83](https://github.com/draios/falco/pull/83)] [[#87](https://github.com/draios/falco/pull/87)] +- Not directly a code change, but mentioning it here--the Wiki has now been populated with an initial set of articles, migrating content from the README and adding detail when necessary. 
[[#90](https://github.com/draios/falco/pull/90)] + +### Minor Changes + +- Improve JSON output to include the rule name, full output string, time, and severity [[#89](https://github.com/draios/falco/pull/89)] + +### Bug Fixes + +- Improve CMake quote handling [[#84](https://github.com/draios/falco/pull/84)] +- Remove unnecessary NULL check of a delete [[#85](https://github.com/draios/falco/pull/85)] + ## v0.1.0 Released 2016-05-17 diff --git a/README.md b/README.md index c91c7590682..a1e871b970c 100644 --- a/README.md +++ b/README.md @@ -2,7 +2,7 @@ ####Latest release -**v0.1.0** +**v0.2.0** Read the [change log](https://github.com/draios/falco/blob/dev/CHANGELOG.md) Dev Branch: [![Build Status](https://travis-ci.org/draios/falco.svg?branch=dev)](https://travis-ci.org/draios/falco)
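For readers picking up v0.2.0, the JSON output change (#89) and its regression check are easiest to see end to end in one sketch: `formats.cpp` converts the event's nanosecond timestamp to ISO 8601 and emits one JSON object per alert with `time`, `rule`, `priority` and `output` fields, and `falco_test.py` parses every `{`-prefixed output line and verifies those four keys. The field values below are illustrative, not captured from a real run:

```python
import json
import time

# Illustrative event timestamp in nanoseconds since the epoch.
ns = 1465340000123456789
secs, nanos = divmod(ns, 10**9)
iso8601 = time.strftime("%Y-%m-%dT%H:%M:%S", time.gmtime(secs)) + ".%09dZ" % nanos

# The same four fields formats.cpp now emits for every alert.
line = json.dumps({
    "time": iso8601,
    "rule": "write_rpm_database",
    "priority": "Warning",
    "output": "Rpm database opened for writing by a non-rpm program (command=... file=...)",
})

# ...and the check the regression test applies to each '{'-prefixed output line.
obj = json.loads(line)
for attr in ("time", "rule", "priority", "output"):
    assert attr in obj, attr
print(line)
```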