From e201baf5cd9dd42f15b857a12df67cc2909f371d Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?No=C3=A9mi=20V=C3=A1nyi?= Date: Fri, 28 Sep 2018 17:32:57 +0200 Subject: [PATCH] Initialize Journalbeat (#8277) This is the first PR to initialize Journalbeat with minimal functionality. The architecture mimics Filebeat so it can be merged into FB in the future. This means it has multiple inputs which can share configuration (`backoff`, `backoff_factor`, etc.). Inputs can have multiple readers; each reader reads from a journal specified in the list of `paths`. The readers are not going to implement the `Harvester` interface until the beat is merged into Filebeat, because that would unnecessarily overcomplicate event publishing and duplicate too much Filebeat code. Checkpointing is copied from Winlogbeat. Once the new registry file is merged, it will be migrated. Example configuration to read from the beginning of the local journal: ```yml journalbeat.inputs: - paths: [] seek: head ``` Features: * read from the local journal, a journal file or a journal directory * position tracking using checkpointing, as done in Winlogbeat * seek to "tail", "head", "cursor" * minimal E2E tests * fields.yml and documentation Vendored: * github.com/coreos/go-systemd/sdjournal --- .travis.yml | 12 +- NOTICE.txt | 30 + journalbeat/.gitignore | 9 + journalbeat/Makefile | 17 + journalbeat/README.md | 5 + journalbeat/_meta/beat.yml | 45 + journalbeat/_meta/fields.common.yml | 299 +++++ .../kibana/5.x/index-pattern/journalbeat.json | 6 + .../default/index-pattern/journalbeat.json | 16 + journalbeat/beater/journalbeat.go | 120 ++ journalbeat/checkpoint/checkpoint.go | 290 +++++ journalbeat/checkpoint/checkpoint_test.go | 218 ++++ journalbeat/checkpoint/file_unix.go | 26 + journalbeat/checkpoint/file_windows.go | 54 + journalbeat/cmd/root.go | 30 + journalbeat/config/config.go | 46 + journalbeat/config/config_test.go | 20 + journalbeat/docs/fields.asciidoc | 1085 ++++++++++++++++ journalbeat/docs/index.asciidoc | 5 + journalbeat/include/fields.go | 35 + journalbeat/input/config.go | 64 + journalbeat/input/input.go | 179 +++ journalbeat/journalbeat.reference.yml | 1155 +++++++++++++++++ journalbeat/journalbeat.yml | 153 +++ journalbeat/magefile.go | 115 ++ journalbeat/main.go | 30 + journalbeat/main_test.go | 44 + journalbeat/reader/fields.go | 70 + journalbeat/reader/journal.go | 264 ++++ .../tests/system/config/journalbeat.yml.j2 | 81 ++ journalbeat/tests/system/input/test.journal | Bin 0 -> 8388608 bytes journalbeat/tests/system/input/test.registry | 6 + journalbeat/tests/system/journalbeat.py | 13 + journalbeat/tests/system/test_base.py | 114 ++ vendor/github.com/coreos/go-systemd/LICENSE | 191 +++ vendor/github.com/coreos/go-systemd/NOTICE | 5 + .../coreos/go-systemd/sdjournal/functions.go | 66 + .../coreos/go-systemd/sdjournal/journal.go | 1120 ++++++++++++++++ .../coreos/go-systemd/sdjournal/read.go | 272 ++++ vendor/github.com/coreos/pkg/CONTRIBUTING.md | 71 + vendor/github.com/coreos/pkg/DCO | 36 + vendor/github.com/coreos/pkg/LICENSE | 202 +++ vendor/github.com/coreos/pkg/MAINTAINERS | 1 + vendor/github.com/coreos/pkg/NOTICE | 5 + vendor/github.com/coreos/pkg/README.md | 4 + vendor/github.com/coreos/pkg/build.sh | 3 + .../github.com/coreos/pkg/code-of-conduct.md | 61 + vendor/github.com/coreos/pkg/dlopen/dlopen.go | 82 ++ .../coreos/pkg/dlopen/dlopen_example.go | 56 + vendor/github.com/coreos/pkg/test.sh | 56 + 
vendor/vendor.json | 18 + 51 files changed, 6903 insertions(+), 2 deletions(-) create mode 100644 journalbeat/.gitignore create mode 100644 journalbeat/Makefile create mode 100644 journalbeat/README.md create mode 100644 journalbeat/_meta/beat.yml create mode 100644 journalbeat/_meta/fields.common.yml create mode 100644 journalbeat/_meta/kibana/5.x/index-pattern/journalbeat.json create mode 100644 journalbeat/_meta/kibana/default/index-pattern/journalbeat.json create mode 100644 journalbeat/beater/journalbeat.go create mode 100644 journalbeat/checkpoint/checkpoint.go create mode 100644 journalbeat/checkpoint/checkpoint_test.go create mode 100644 journalbeat/checkpoint/file_unix.go create mode 100644 journalbeat/checkpoint/file_windows.go create mode 100644 journalbeat/cmd/root.go create mode 100644 journalbeat/config/config.go create mode 100644 journalbeat/config/config_test.go create mode 100644 journalbeat/docs/fields.asciidoc create mode 100644 journalbeat/docs/index.asciidoc create mode 100644 journalbeat/include/fields.go create mode 100644 journalbeat/input/config.go create mode 100644 journalbeat/input/input.go create mode 100644 journalbeat/journalbeat.reference.yml create mode 100644 journalbeat/journalbeat.yml create mode 100644 journalbeat/magefile.go create mode 100644 journalbeat/main.go create mode 100644 journalbeat/main_test.go create mode 100644 journalbeat/reader/fields.go create mode 100644 journalbeat/reader/journal.go create mode 100644 journalbeat/tests/system/config/journalbeat.yml.j2 create mode 100644 journalbeat/tests/system/input/test.journal create mode 100644 journalbeat/tests/system/input/test.registry create mode 100644 journalbeat/tests/system/journalbeat.py create mode 100644 journalbeat/tests/system/test_base.py create mode 100644 vendor/github.com/coreos/go-systemd/LICENSE create mode 100644 vendor/github.com/coreos/go-systemd/NOTICE create mode 100644 vendor/github.com/coreos/go-systemd/sdjournal/functions.go create mode 100644 vendor/github.com/coreos/go-systemd/sdjournal/journal.go create mode 100644 vendor/github.com/coreos/go-systemd/sdjournal/read.go create mode 100644 vendor/github.com/coreos/pkg/CONTRIBUTING.md create mode 100644 vendor/github.com/coreos/pkg/DCO create mode 100644 vendor/github.com/coreos/pkg/LICENSE create mode 100644 vendor/github.com/coreos/pkg/MAINTAINERS create mode 100644 vendor/github.com/coreos/pkg/NOTICE create mode 100644 vendor/github.com/coreos/pkg/README.md create mode 100755 vendor/github.com/coreos/pkg/build.sh create mode 100644 vendor/github.com/coreos/pkg/code-of-conduct.md create mode 100644 vendor/github.com/coreos/pkg/dlopen/dlopen.go create mode 100644 vendor/github.com/coreos/pkg/dlopen/dlopen_example.go create mode 100755 vendor/github.com/coreos/pkg/test.sh diff --git a/.travis.yml b/.travis.yml index 419c1357ba32..5a09e2eff40a 100644 --- a/.travis.yml +++ b/.travis.yml @@ -109,6 +109,12 @@ jobs: go: $GO_VERSION stage: test + # Journalbeat + - os: linux + env: TARGETS="-C journalbeat testsuite" + go: $GO_VERSION + stage: test + # Generators - os: linux env: TARGETS="-C generator/metricbeat test" @@ -156,10 +162,12 @@ addons: apt: update: true packages: - - python-virtualenv + - libc6-dev-i386 - libpcap-dev - - xsltproc + - libsystemd-journal-dev - libxml2-utils + - python-virtualenv + - xsltproc before_install: - python --version diff --git a/NOTICE.txt b/NOTICE.txt index 
e6cabf112866..402d1ce82e06 100644 --- a/NOTICE.txt +++ b/NOTICE.txt @@ -231,6 +231,36 @@ COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. +-------------------------------------------------------------------- +Dependency: github.com/coreos/go-systemd +Revision: eee3db372b31153ca0b90702e165948699803fd0 +License type (autodetected): Apache-2.0 +./vendor/github.com/coreos/go-systemd/LICENSE: +-------------------------------------------------------------------- +Apache License 2.0 + +-------NOTICE----- +CoreOS Project +Copyright 2018 CoreOS, Inc + +This product includes software developed at CoreOS, Inc. +(http://www.coreos.com/). + +-------------------------------------------------------------------- +Dependency: github.com/coreos/pkg +Revision: 97fdf19511ea361ae1c100dd393cc47f8dcfa1e1 +License type (autodetected): Apache-2.0 +./vendor/github.com/coreos/pkg/LICENSE: +-------------------------------------------------------------------- +Apache License 2.0 + +-------NOTICE----- +CoreOS Project +Copyright 2014 CoreOS, Inc + +This product includes software developed at CoreOS, Inc. +(http://www.coreos.com/). + -------------------------------------------------------------------- Dependency: github.com/davecgh/go-spew Version: v1.1.0 diff --git a/journalbeat/.gitignore b/journalbeat/.gitignore new file mode 100644 index 000000000000..504279e50ee3 --- /dev/null +++ b/journalbeat/.gitignore @@ -0,0 +1,9 @@ +/.idea +/build +.DS_Store +.journalbeat_position +/journalbeat +/journalbeat.test +*.pyc +data/meta.json +/*.journal diff --git a/journalbeat/Makefile b/journalbeat/Makefile new file mode 100644 index 000000000000..c741612e790b --- /dev/null +++ b/journalbeat/Makefile @@ -0,0 +1,17 @@ +BEAT_NAME=journalbeat +BEAT_TITLE=Journalbeat +SYSTEM_TESTS=false +TEST_ENVIRONMENT=false +ES_BEATS?=.. +GOX_FLAGS=-cgo +GOX_OS=linux + +# Path to the libbeat Makefile +-include $(ES_BEATS)/libbeat/scripts/Makefile + +.PHONY: before-build +before-build: + +# Collects all dependencies and then calls update +.PHONY: collect +collect: diff --git a/journalbeat/README.md b/journalbeat/README.md new file mode 100644 index 000000000000..69fccb38f904 --- /dev/null +++ b/journalbeat/README.md @@ -0,0 +1,5 @@ +# Journalbeat + +Journalbeat is an open source data collector that reads and forwards journal entries from Linux systems running systemd. + +## Getting started diff --git a/journalbeat/_meta/beat.yml b/journalbeat/_meta/beat.yml new file mode 100644 index 000000000000..724686a91436 --- /dev/null +++ b/journalbeat/_meta/beat.yml @@ -0,0 +1,45 @@ +###################### Journalbeat Configuration Example ######################### + +# This file is an example configuration file highlighting only the most common +# options. The journalbeat.reference.yml file from the same directory contains all the +# supported options with more comments. You can use it as a reference. +# +# You can find the full configuration reference here: +# https://www.elastic.co/guide/en/beats/journalbeat/index.html + +# For more available modules and options, please see the journalbeat.reference.yml sample +# configuration file. + +#=========================== Journalbeat inputs ============================= + +journalbeat.inputs: + # Paths that should be crawled and fetched. Possible values are files and directories. 
+ # When setting a directory, all journals under it are merged. + # When empty, it starts to read from the local journal. +- paths: [] + + # The number of seconds to wait before trying to read again from journals. + #backoff: 1s + # Multiplier of the backoff value. + #backoff_factor: 2 + # The maximum number of seconds to wait before attempting to read again from journals. + #max_backoff: 60s + + # Position to start reading from the journal. Valid values: head, tail, cursor + seek: tail + +#========================= Journalbeat global options ============================ +#journalbeat: + # Name of the registry file. If a relative path is used, it is considered relative to the + # data path. + #registry_file: registry + + # The number of seconds to wait before trying to read again from journals. + #backoff: 1s + # Multiplier of the backoff value. + #backoff_factor: 2 + # The maximum number of seconds to wait before attempting to read again from journals. + #max_backoff: 60s + + # Position to start reading from all journals. Possible values: head, tail, cursor + #seek: head diff --git a/journalbeat/_meta/fields.common.yml b/journalbeat/_meta/fields.common.yml new file mode 100644 index 000000000000..23378ad11587 --- /dev/null +++ b/journalbeat/_meta/fields.common.yml @@ -0,0 +1,299 @@ +- key: common + title: "Common Journalbeat" + description: > + Contains common fields available in all event types. + fields: + - name: coredump + type: group + description: > + Fields used by the systemd-coredump kernel helper. + fields: + - name: unit + type: keyword + description: > + Annotations of messages containing coredumps from system units. + - name: user_unit + type: keyword + description: > + Annotations of messages containing coredumps from user units. + - name: object + type: group + description: > + Fields to log on behalf of a different program. + fields: + - name: audit + type: group + description: > + Audit fields of the event. + fields: + - name: loginuid + type: long + required: false + example: 1000 + description: > + The login UID of the object process. + - name: session + type: long + required: false + example: 3 + description: > + The audit session of the object process. + - name: cmd + type: keyword + required: false + example: "/lib/systemd/systemd --user" + description: > + The command line of the process. + - name: name + type: keyword + required: false + example: "/lib/systemd/systemd" + description: > + Name of the executable. + - name: executable + type: keyword + required: false + description: > + Path to the executable. + example: "/lib/systemd/systemd" + - name: uid + type: long + required: false + description: > + UID of the object process. + - name: gid + type: long + required: false + description: > + GID of the object process. + - name: pid + type: long + required: false + description: > + PID of the object process. + - name: systemd + type: group + description: > + Systemd fields of the event. + fields: + - name: owner_uid + type: long + required: false + description: > + The UID of the owner. + - name: session + type: keyword + required: false + description: > + The ID of the systemd session. + - name: unit + type: keyword + required: false + description: > + The name of the systemd unit. + - name: user_unit + type: keyword + required: false + description: > + The name of the systemd user unit. + - name: kernel + type: group + description: > + Kernel fields of the event. + fields: + - name: device + type: keyword + required: false + description: > + The kernel device name. 
+ - name: subsystem + type: keyword + required: false + description: > + The kernel subsystem name. + - name: device_symlinks + type: text + required: false + description: > + Additional symlink names pointing to the device node in /dev. + - name: device_node_path + type: text + required: false + description: > + The device node path of this device in /dev. + - name: device_name + type: text + required: false + description: > + The kernel device name as it shows up in the device tree below /sys. + - name: process + type: group + description: > + Fields of the process that logged the event. + fields: + - name: audit + type: group + description: > + Audit fields of the event. + fields: + - name: loginuid + type: long + required: false + example: 1000 + description: > + The login UID of the source process. + - name: session + type: long + required: false + example: 3 + description: > + The audit session of the source process. + - name: cmd + type: keyword + required: false + example: "/lib/systemd/systemd --user" + description: > + The command line of the process. + - name: name + type: keyword + required: false + example: "/lib/systemd/systemd" + description: > + Name of the executable. + - name: executable + type: keyword + required: false + description: > + Path to the executable. + example: "/lib/systemd/systemd" + - name: pid + type: long + required: false + example: 1 + description: > + The ID of the process which logged the message. + - name: gid + type: long + required: false + example: 1 + description: > + The ID of the group which runs the process. + - name: uid + type: long + required: false + example: 1 + description: > + The ID of the user which runs the process. + - name: systemd + type: group + description: > + Fields of systemd. + fields: + - name: invocation_id + type: keyword + required: false + example: "8450f1672de646c88cd133aadd4f2d70" + description: > + The invocation ID for the runtime cycle of the unit the message was generated in. + - name: cgroup + type: keyword + required: false + example: "/user.slice/user-1234.slice/session-2.scope" + description: > + The control group path in the systemd hierarchy. + - name: owner_uid + type: long + required: false + description: > + The owner UID of the systemd user unit or systemd session. + - name: session + type: keyword + required: false + description: > + The ID of the systemd session. + - name: slice + type: keyword + required: false + example: "user-1234.slice" + description: > + The systemd slice unit. + - name: unit + type: keyword + required: false + example: "nginx.service" + description: > + The name of the systemd unit. + - name: user_unit + type: keyword + required: false + example: "user-1234.slice" + description: > + The name of the systemd user unit. + - name: transport + type: keyword + required: true + example: "syslog" + description: > + How the log message was received by journald. + - name: host + type: group + description: > + Fields of the host. + fields: + - name: boot_id + type: text + required: false + example: "dd8c974asdf01dbe2ef26d7fasdf264c9" + description: > + The boot ID for the boot the log was generated in. + - name: code + type: group + description: > + Fields of the code generating the event. + fields: + - name: file + type: text + required: false + example: "../src/core/manager.c" + description: > + The name of the source file where the log is generated. 
+ - name: function + type: text + required: false + example: "job_log_status_message" + description: > + The name of the function which generated the log message. + - name: line + type: long + required: false + example: 123 + description: > + The line number of the code which generated the log message. + - name: syslog + type: group + description: > + Fields of the code generating the event. + fields: + - name: priority + type: long + required: false + example: 1 + description: > + The priority of the message. A syslog compatibility field. + - name: facility + type: long + required: false + example: 1 + description: > + The facility of the message. A syslog compatibility field. + - name: identifier + type: text + required: false + example: "su" + description: > + The identifier of the message. A syslog compatibility field. + - name: message + type: text + required: true + description: > + The logged message. diff --git a/journalbeat/_meta/kibana/5.x/index-pattern/journalbeat.json b/journalbeat/_meta/kibana/5.x/index-pattern/journalbeat.json new file mode 100644 index 000000000000..251aa05c6dbf --- /dev/null +++ b/journalbeat/_meta/kibana/5.x/index-pattern/journalbeat.json @@ -0,0 +1,6 @@ +{ + "fields": "[{\"count\": 0, \"analyzed\": false, \"aggregatable\": true, \"name\": \"beat.name\", \"searchable\": true, \"indexed\": true, \"doc_values\": true, \"type\": \"string\", \"scripted\": false}, {\"count\": 0, \"analyzed\": false, \"aggregatable\": true, \"name\": \"beat.hostname\", \"searchable\": true, \"indexed\": true, \"doc_values\": true, \"type\": \"string\", \"scripted\": false}, {\"count\": 0, \"analyzed\": false, \"aggregatable\": true, \"name\": \"beat.timezone\", \"searchable\": true, \"indexed\": true, \"doc_values\": true, \"type\": \"string\", \"scripted\": false}, {\"count\": 0, \"analyzed\": false, \"aggregatable\": true, \"name\": \"beat.version\", \"searchable\": true, \"indexed\": true, \"doc_values\": true, \"type\": \"string\", \"scripted\": false}, {\"count\": 0, \"analyzed\": false, \"aggregatable\": true, \"name\": \"@timestamp\", \"searchable\": true, \"indexed\": true, \"doc_values\": true, \"type\": \"date\", \"scripted\": false}, {\"count\": 0, \"analyzed\": false, \"aggregatable\": true, \"name\": \"tags\", \"searchable\": true, \"indexed\": true, \"doc_values\": true, \"type\": \"string\", \"scripted\": false}, {\"count\": 0, \"analyzed\": false, \"aggregatable\": true, \"name\": \"fields\", \"searchable\": true, \"indexed\": true, \"doc_values\": true, \"scripted\": false}, {\"count\": 0, \"analyzed\": false, \"aggregatable\": false, \"name\": \"error.message\", \"searchable\": true, \"indexed\": true, \"doc_values\": true, \"type\": \"string\", \"scripted\": false}, {\"count\": 0, \"analyzed\": false, \"aggregatable\": true, \"name\": \"error.code\", \"searchable\": true, \"indexed\": true, \"doc_values\": true, \"type\": \"number\", \"scripted\": false}, {\"count\": 0, \"analyzed\": false, \"aggregatable\": true, \"name\": \"error.type\", \"searchable\": true, \"indexed\": true, \"doc_values\": true, \"type\": \"string\", \"scripted\": false}, {\"count\": 0, \"analyzed\": false, \"aggregatable\": true, \"name\": \"meta.cloud.provider\", \"searchable\": true, \"indexed\": true, \"doc_values\": true, \"type\": \"string\", \"scripted\": false}, {\"count\": 0, \"analyzed\": false, \"aggregatable\": true, \"name\": \"meta.cloud.instance_id\", \"searchable\": true, \"indexed\": true, \"doc_values\": true, \"type\": \"string\", \"scripted\": false}, {\"count\": 0, 
\"analyzed\": false, \"aggregatable\": true, \"name\": \"meta.cloud.instance_name\", \"searchable\": true, \"indexed\": true, \"doc_values\": true, \"type\": \"string\", \"scripted\": false}, {\"count\": 0, \"analyzed\": false, \"aggregatable\": true, \"name\": \"meta.cloud.machine_type\", \"searchable\": true, \"indexed\": true, \"doc_values\": true, \"type\": \"string\", \"scripted\": false}, {\"count\": 0, \"analyzed\": false, \"aggregatable\": true, \"name\": \"meta.cloud.availability_zone\", \"searchable\": true, \"indexed\": true, \"doc_values\": true, \"type\": \"string\", \"scripted\": false}, {\"count\": 0, \"analyzed\": false, \"aggregatable\": true, \"name\": \"meta.cloud.project_id\", \"searchable\": true, \"indexed\": true, \"doc_values\": true, \"type\": \"string\", \"scripted\": false}, {\"count\": 0, \"analyzed\": false, \"aggregatable\": true, \"name\": \"meta.cloud.region\", \"searchable\": true, \"indexed\": true, \"doc_values\": true, \"type\": \"string\", \"scripted\": false}, {\"count\": 0, \"analyzed\": false, \"aggregatable\": true, \"name\": \"docker.container.id\", \"searchable\": true, \"indexed\": true, \"doc_values\": true, \"type\": \"string\", \"scripted\": false}, {\"count\": 0, \"analyzed\": false, \"aggregatable\": true, \"name\": \"docker.container.image\", \"searchable\": true, \"indexed\": true, \"doc_values\": true, \"type\": \"string\", \"scripted\": false}, {\"count\": 0, \"analyzed\": false, \"aggregatable\": true, \"name\": \"docker.container.name\", \"searchable\": true, \"indexed\": true, \"doc_values\": true, \"type\": \"string\", \"scripted\": false}, {\"count\": 0, \"analyzed\": false, \"aggregatable\": true, \"name\": \"docker.container.labels\", \"searchable\": true, \"indexed\": true, \"doc_values\": true, \"scripted\": false}, {\"count\": 0, \"analyzed\": false, \"aggregatable\": true, \"name\": \"kubernetes.pod.name\", \"searchable\": true, \"indexed\": true, \"doc_values\": true, \"type\": \"string\", \"scripted\": false}, {\"count\": 0, \"analyzed\": false, \"aggregatable\": true, \"name\": \"kubernetes.namespace\", \"searchable\": true, \"indexed\": true, \"doc_values\": true, \"type\": \"string\", \"scripted\": false}, {\"count\": 0, \"analyzed\": false, \"aggregatable\": true, \"name\": \"kubernetes.labels\", \"searchable\": true, \"indexed\": true, \"doc_values\": true, \"scripted\": false}, {\"count\": 0, \"analyzed\": false, \"aggregatable\": true, \"name\": \"kubernetes.annotations\", \"searchable\": true, \"indexed\": true, \"doc_values\": true, \"scripted\": false}, {\"count\": 0, \"analyzed\": false, \"aggregatable\": true, \"name\": \"kubernetes.container.name\", \"searchable\": true, \"indexed\": true, \"doc_values\": true, \"type\": \"string\", \"scripted\": false}, {\"count\": 0, \"analyzed\": false, \"aggregatable\": true, \"name\": \"kubernetes.container.image\", \"searchable\": true, \"indexed\": true, \"doc_values\": true, \"type\": \"string\", \"scripted\": false}, {\"count\": 0, \"analyzed\": false, \"aggregatable\": true, \"name\": \"counter\", \"searchable\": true, \"indexed\": true, \"doc_values\": true, \"type\": \"number\", \"scripted\": false}, {\"count\": 0, \"analyzed\": false, \"aggregatable\": false, \"name\": \"_id\", \"searchable\": false, \"indexed\": false, \"doc_values\": false, \"type\": \"string\", \"scripted\": false}, {\"count\": 0, \"analyzed\": false, \"aggregatable\": true, \"name\": \"_type\", \"searchable\": true, \"indexed\": false, \"doc_values\": false, \"type\": \"string\", \"scripted\": 
false}, {\"count\": 0, \"analyzed\": false, \"aggregatable\": false, \"name\": \"_index\", \"searchable\": false, \"indexed\": false, \"doc_values\": false, \"type\": \"string\", \"scripted\": false}, {\"count\": 0, \"analyzed\": false, \"aggregatable\": false, \"name\": \"_score\", \"searchable\": false, \"indexed\": false, \"doc_values\": false, \"type\": \"number\", \"scripted\": false}]", + "fieldFormatMap": "{\"@timestamp\": {\"id\": \"date\"}}", + "timeFieldName": "@timestamp", + "title": "journalbeat-*" +} \ No newline at end of file diff --git a/journalbeat/_meta/kibana/default/index-pattern/journalbeat.json b/journalbeat/_meta/kibana/default/index-pattern/journalbeat.json new file mode 100644 index 000000000000..55e70bd81d12 --- /dev/null +++ b/journalbeat/_meta/kibana/default/index-pattern/journalbeat.json @@ -0,0 +1,16 @@ +{ + "version": "7.0.0-alpha1", + "objects": [ + { + "attributes": { + "fields": "[{\"count\": 0, \"analyzed\": false, \"aggregatable\": true, \"name\": \"beat.name\", \"searchable\": true, \"indexed\": true, \"doc_values\": true, \"type\": \"string\", \"scripted\": false}, {\"count\": 0, \"analyzed\": false, \"aggregatable\": true, \"name\": \"beat.hostname\", \"searchable\": true, \"indexed\": true, \"doc_values\": true, \"type\": \"string\", \"scripted\": false}, {\"count\": 0, \"analyzed\": false, \"aggregatable\": true, \"name\": \"beat.timezone\", \"searchable\": true, \"indexed\": true, \"doc_values\": true, \"type\": \"string\", \"scripted\": false}, {\"count\": 0, \"analyzed\": false, \"aggregatable\": true, \"name\": \"beat.version\", \"searchable\": true, \"indexed\": true, \"doc_values\": true, \"type\": \"string\", \"scripted\": false}, {\"count\": 0, \"analyzed\": false, \"aggregatable\": true, \"name\": \"@timestamp\", \"searchable\": true, \"indexed\": true, \"doc_values\": true, \"type\": \"date\", \"scripted\": false}, {\"count\": 0, \"analyzed\": false, \"aggregatable\": true, \"name\": \"tags\", \"searchable\": true, \"indexed\": true, \"doc_values\": true, \"type\": \"string\", \"scripted\": false}, {\"count\": 0, \"analyzed\": false, \"aggregatable\": true, \"name\": \"fields\", \"searchable\": true, \"indexed\": true, \"doc_values\": true, \"scripted\": false}, {\"count\": 0, \"analyzed\": false, \"aggregatable\": false, \"name\": \"error.message\", \"searchable\": true, \"indexed\": true, \"doc_values\": true, \"type\": \"string\", \"scripted\": false}, {\"count\": 0, \"analyzed\": false, \"aggregatable\": true, \"name\": \"error.code\", \"searchable\": true, \"indexed\": true, \"doc_values\": true, \"type\": \"number\", \"scripted\": false}, {\"count\": 0, \"analyzed\": false, \"aggregatable\": true, \"name\": \"error.type\", \"searchable\": true, \"indexed\": true, \"doc_values\": true, \"type\": \"string\", \"scripted\": false}, {\"count\": 0, \"analyzed\": false, \"aggregatable\": true, \"name\": \"meta.cloud.provider\", \"searchable\": true, \"indexed\": true, \"doc_values\": true, \"type\": \"string\", \"scripted\": false}, {\"count\": 0, \"analyzed\": false, \"aggregatable\": true, \"name\": \"meta.cloud.instance_id\", \"searchable\": true, \"indexed\": true, \"doc_values\": true, \"type\": \"string\", \"scripted\": false}, {\"count\": 0, \"analyzed\": false, \"aggregatable\": true, \"name\": \"meta.cloud.instance_name\", \"searchable\": true, \"indexed\": true, \"doc_values\": true, \"type\": \"string\", \"scripted\": false}, {\"count\": 0, \"analyzed\": false, \"aggregatable\": true, \"name\": \"meta.cloud.machine_type\", 
\"searchable\": true, \"indexed\": true, \"doc_values\": true, \"type\": \"string\", \"scripted\": false}, {\"count\": 0, \"analyzed\": false, \"aggregatable\": true, \"name\": \"meta.cloud.availability_zone\", \"searchable\": true, \"indexed\": true, \"doc_values\": true, \"type\": \"string\", \"scripted\": false}, {\"count\": 0, \"analyzed\": false, \"aggregatable\": true, \"name\": \"meta.cloud.project_id\", \"searchable\": true, \"indexed\": true, \"doc_values\": true, \"type\": \"string\", \"scripted\": false}, {\"count\": 0, \"analyzed\": false, \"aggregatable\": true, \"name\": \"meta.cloud.region\", \"searchable\": true, \"indexed\": true, \"doc_values\": true, \"type\": \"string\", \"scripted\": false}, {\"count\": 0, \"analyzed\": false, \"aggregatable\": true, \"name\": \"docker.container.id\", \"searchable\": true, \"indexed\": true, \"doc_values\": true, \"type\": \"string\", \"scripted\": false}, {\"count\": 0, \"analyzed\": false, \"aggregatable\": true, \"name\": \"docker.container.image\", \"searchable\": true, \"indexed\": true, \"doc_values\": true, \"type\": \"string\", \"scripted\": false}, {\"count\": 0, \"analyzed\": false, \"aggregatable\": true, \"name\": \"docker.container.name\", \"searchable\": true, \"indexed\": true, \"doc_values\": true, \"type\": \"string\", \"scripted\": false}, {\"count\": 0, \"analyzed\": false, \"aggregatable\": true, \"name\": \"docker.container.labels\", \"searchable\": true, \"indexed\": true, \"doc_values\": true, \"scripted\": false}, {\"count\": 0, \"analyzed\": false, \"aggregatable\": true, \"name\": \"kubernetes.pod.name\", \"searchable\": true, \"indexed\": true, \"doc_values\": true, \"type\": \"string\", \"scripted\": false}, {\"count\": 0, \"analyzed\": false, \"aggregatable\": true, \"name\": \"kubernetes.namespace\", \"searchable\": true, \"indexed\": true, \"doc_values\": true, \"type\": \"string\", \"scripted\": false}, {\"count\": 0, \"analyzed\": false, \"aggregatable\": true, \"name\": \"kubernetes.labels\", \"searchable\": true, \"indexed\": true, \"doc_values\": true, \"scripted\": false}, {\"count\": 0, \"analyzed\": false, \"aggregatable\": true, \"name\": \"kubernetes.annotations\", \"searchable\": true, \"indexed\": true, \"doc_values\": true, \"scripted\": false}, {\"count\": 0, \"analyzed\": false, \"aggregatable\": true, \"name\": \"kubernetes.container.name\", \"searchable\": true, \"indexed\": true, \"doc_values\": true, \"type\": \"string\", \"scripted\": false}, {\"count\": 0, \"analyzed\": false, \"aggregatable\": true, \"name\": \"kubernetes.container.image\", \"searchable\": true, \"indexed\": true, \"doc_values\": true, \"type\": \"string\", \"scripted\": false}, {\"count\": 0, \"analyzed\": false, \"aggregatable\": true, \"name\": \"counter\", \"searchable\": true, \"indexed\": true, \"doc_values\": true, \"type\": \"number\", \"scripted\": false}, {\"count\": 0, \"analyzed\": false, \"aggregatable\": false, \"name\": \"_id\", \"searchable\": false, \"indexed\": false, \"doc_values\": false, \"type\": \"string\", \"scripted\": false}, {\"count\": 0, \"analyzed\": false, \"aggregatable\": true, \"name\": \"_type\", \"searchable\": true, \"indexed\": false, \"doc_values\": false, \"type\": \"string\", \"scripted\": false}, {\"count\": 0, \"analyzed\": false, \"aggregatable\": false, \"name\": \"_index\", \"searchable\": false, \"indexed\": false, \"doc_values\": false, \"type\": \"string\", \"scripted\": false}, {\"count\": 0, \"analyzed\": false, \"aggregatable\": false, \"name\": \"_score\", 
\"searchable\": false, \"indexed\": false, \"doc_values\": false, \"type\": \"number\", \"scripted\": false}]", + "fieldFormatMap": "{\"@timestamp\": {\"id\": \"date\"}}", + "timeFieldName": "@timestamp", + "title": "journalbeat-*" + }, + "version": 1, + "type": "index-pattern", + "id": "journalbeat-*" + } + ] +} \ No newline at end of file diff --git a/journalbeat/beater/journalbeat.go b/journalbeat/beater/journalbeat.go new file mode 100644 index 000000000000..77ae628acbf4 --- /dev/null +++ b/journalbeat/beater/journalbeat.go @@ -0,0 +1,120 @@ +// Licensed to Elasticsearch B.V. under one or more contributor +// license agreements. See the NOTICE file distributed with +// this work for additional information regarding copyright +// ownership. Elasticsearch B.V. licenses this file to you under +// the Apache License, Version 2.0 (the "License"); you may +// not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, +// software distributed under the License is distributed on an +// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY +// KIND, either express or implied. See the License for the +// specific language governing permissions and limitations +// under the License. + +package beater + +import ( + "fmt" + "sync" + "time" + + "github.com/elastic/beats/journalbeat/checkpoint" + "github.com/elastic/beats/journalbeat/input" + "github.com/elastic/beats/libbeat/beat" + "github.com/elastic/beats/libbeat/common" + "github.com/elastic/beats/libbeat/logp" + + "github.com/elastic/beats/journalbeat/config" +) + +// Journalbeat instance +type Journalbeat struct { + inputs []*input.Input + done chan struct{} + config config.Config + + pipeline beat.Pipeline + checkpoint *checkpoint.Checkpoint + logger *logp.Logger +} + +// New returns a new Journalbeat instance +func New(b *beat.Beat, cfg *common.Config) (beat.Beater, error) { + config := config.DefaultConfig + if err := cfg.Unpack(&config); err != nil { + return nil, fmt.Errorf("error reading config file: %v", err) + } + + done := make(chan struct{}) + cp, err := checkpoint.NewCheckpoint(config.RegistryFile, 10, 1*time.Second) + if err != nil { + return nil, err + } + + var inputs []*input.Input + for _, c := range config.Inputs { + i, err := input.New(c, b.Publisher, done, cp.States()) + if err != nil { + return nil, err + } + inputs = append(inputs, i) + } + + bt := &Journalbeat{ + inputs: inputs, + done: done, + config: config, + pipeline: b.Publisher, + checkpoint: cp, + logger: logp.NewLogger("journalbeat"), + } + + return bt, nil +} + +// Run sets up the ACK handler and starts inputs to read and forward events to outputs. +func (bt *Journalbeat) Run(b *beat.Beat) error { + bt.logger.Info("journalbeat is running! 
Hit CTRL-C to stop it.") + defer bt.logger.Info("journalbeat is stopping") + + err := bt.pipeline.SetACKHandler(beat.PipelineACKHandler{ + ACKLastEvents: func(data []interface{}) { + for _, datum := range data { + if st, ok := datum.(checkpoint.JournalState); ok { + bt.checkpoint.PersistState(st) + } + } + }, + }) + if err != nil { + return err + } + defer bt.checkpoint.Shutdown() + + var wg sync.WaitGroup + for _, i := range bt.inputs { + wg.Add(1) + go bt.runInput(i, &wg) + } + + wg.Wait() + + return nil +} + +func (bt *Journalbeat) runInput(i *input.Input, wg *sync.WaitGroup) { + defer wg.Done() + i.Run() +} + +// Stop stops the beat and its inputs. +func (bt *Journalbeat) Stop() { + close(bt.done) + for _, i := range bt.inputs { + i.Stop() + } +} diff --git a/journalbeat/checkpoint/checkpoint.go b/journalbeat/checkpoint/checkpoint.go new file mode 100644 index 000000000000..f2c3bfacdabc --- /dev/null +++ b/journalbeat/checkpoint/checkpoint.go @@ -0,0 +1,290 @@ +// Licensed to Elasticsearch B.V. under one or more contributor +// license agreements. See the NOTICE file distributed with +// this work for additional information regarding copyright +// ownership. Elasticsearch B.V. licenses this file to you under +// the Apache License, Version 2.0 (the "License"); you may +// not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, +// software distributed under the License is distributed on an +// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY +// KIND, either express or implied. See the License for the +// specific language governing permissions and limitations +// under the License. + +// Package checkpoint persists event log state information to disk so that +// event log monitoring can resume from the last read event in the case of a +// restart or unexpected interruption. +package checkpoint + +import ( + "fmt" + "io/ioutil" + "os" + "path/filepath" + "sort" + "sync" + "time" + + "gopkg.in/yaml.v2" + + "github.com/elastic/beats/libbeat/logp" +) + +// Checkpoint persists event log state information to disk. +type Checkpoint struct { + wg sync.WaitGroup // WaitGroup used to wait on the shutdown of the checkpoint worker. + done chan struct{} // Channel for shutting down the checkpoint worker. + once sync.Once // Used to guarantee shutdown happens once. + file string // File where the state is persisted. + fileLock sync.RWMutex // Lock that protects concurrent reads/writes to file. + numUpdates int // Number of updates received since last persisting to disk. + maxUpdates int // Maximum number of updates to buffer before persisting to disk. + flushInterval time.Duration // Maximum time interval that can pass before persisting to disk. + sort []string // Slice used for sorting states map (store to save on mallocs). + + lock sync.RWMutex + states map[string]JournalState + + save chan JournalState +} + +// PersistedState represents the format of the data persisted to disk. +type PersistedState struct { + UpdateTime time.Time `yaml:"update_time"` + States []JournalState `yaml:"journal_entries"` +} + +// JournalState represents the state of an individual event log. 
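+// States of this kind are fed back into the checkpoint by the beater's ACK +// handler once the outputs acknowledge the corresponding events; the Path +// field doubles as the key of the state in the checkpoint's state map.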
+type JournalState struct { + Path string `yaml:"path"` + Cursor string `yaml:"cursor"` + RealtimeTimestamp uint64 `yaml:"realtime_timestamp"` + MonotonicTimestamp uint64 `yaml:"monotonic_timestamp"` +} + +// NewCheckpoint creates and returns a new Checkpoint. This method loads state +// information from disk if it exists and starts a goroutine for persisting +// state information to disk. Shutdown should be called when finished to +// guarantee any in-memory state information is flushed to disk. +// +// file is the name of the file where event log state is persisted as YAML. +// maxUpdates is the maximum number of updates checkpoint will accept before +// triggering a flush to disk. interval is the maximum amount of time that can +// pass since the last flush before triggering a flush to disk (minimum value +// is 1s). +func NewCheckpoint(file string, maxUpdates int, interval time.Duration) (*Checkpoint, error) { + c := &Checkpoint{ + done: make(chan struct{}), + file: file, + maxUpdates: maxUpdates, + flushInterval: interval, + sort: make([]string, 0, 10), + states: make(map[string]JournalState), + save: make(chan JournalState, 1), + } + + // Minimum batch size. + if c.maxUpdates < 1 { + c.maxUpdates = 1 + } + + // Minimum flush interval. + if c.flushInterval < time.Second { + c.flushInterval = time.Second + } + + // Read existing state information: + ps, err := c.read() + if err != nil { + return nil, err + } + + if ps != nil { + for _, state := range ps.States { + c.states[state.Path] = state + } + } + + // Write the state file to verify we have permissions. + err = c.flush() + if err != nil { + return nil, err + } + + c.wg.Add(1) + go c.run() + return c, nil +} + +// run is the worker loop that reads incoming state information from the save +// channel and persists it when the number of changes reaches maxUpdates or +// the amount of time since the last disk write reaches flushInterval. +func (c *Checkpoint) run() { + defer c.wg.Done() + defer c.persist() + + flushTimer := time.NewTimer(c.flushInterval) + defer flushTimer.Stop() +loop: + for { + select { + case <-c.done: + break loop + case s := <-c.save: + c.lock.Lock() + c.states[s.Path] = s + c.lock.Unlock() + c.numUpdates++ + if c.numUpdates < c.maxUpdates { + continue + } + case <-flushTimer.C: + } + + c.persist() + flushTimer.Reset(c.flushInterval) + } +} + +// Shutdown stops the checkpoint worker (which persists any state to disk as +// it stops). This method blocks until the checkpoint worker shuts down. Calling +// this method more than once is safe and has no effect. +func (c *Checkpoint) Shutdown() { + c.once.Do(func() { + close(c.done) + c.wg.Wait() + }) +} + +// States returns the current in-memory event log state. This state information +// is bootstrapped with any data found on disk at creation time. +func (c *Checkpoint) States() map[string]JournalState { + c.lock.RLock() + defer c.lock.RUnlock() + + copy := make(map[string]JournalState) + for k, v := range c.states { + copy[k] = v + } + + return copy +} + +// Persist queues the given event log state information to be written to disk. +func (c *Checkpoint) Persist(path, cursor string, realTs, monotonicTs uint64) { + c.PersistState(JournalState{ + Path: path, + Cursor: cursor, + RealtimeTimestamp: realTs, + MonotonicTimestamp: monotonicTs, + }) +} + +// PersistState queues the given event log state to be written to disk. +func (c *Checkpoint) PersistState(st JournalState) { + c.save <- st +} + +// persist writes the current state to disk if the in-memory state is dirty. 
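+// It reports whether a write happened: false means there was nothing to +// persist or the flush failed, and on failure numUpdates stays non-zero, so +// the write is retried on the next save or timer trigger.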
+func (c *Checkpoint) persist() bool { + if c.numUpdates == 0 { + return false + } + + err := c.flush() + if err != nil { + return false + } + + logp.Debug("checkpoint", "Checkpoint saved to disk. numUpdates=%d", + c.numUpdates) + c.numUpdates = 0 + return true +} + +// flush writes the current state to disk. +func (c *Checkpoint) flush() error { + c.fileLock.Lock() + defer c.fileLock.Unlock() + + tempFile := c.file + ".new" + file, err := create(tempFile) + if os.IsNotExist(err) { + // Try to create directory if it does not exist. + if createDirErr := c.createDir(); createDirErr == nil { + file, err = create(tempFile) + } + } + + if err != nil { + return fmt.Errorf("Failed to flush state to disk. %v", err) + } + + // Sort persisted journal states by path. + c.sort = c.sort[:0] + for k := range c.states { + c.sort = append(c.sort, k) + } + sort.Strings(c.sort) + + ps := PersistedState{ + UpdateTime: time.Now().UTC(), + States: make([]JournalState, len(c.sort)), + } + for i, name := range c.sort { + ps.States[i] = c.states[name] + } + + data, err := yaml.Marshal(ps) + if err != nil { + file.Close() + return fmt.Errorf("Failed to flush state to disk. Could not marshal "+ + "data to YAML. %v", err) + } + + _, err = file.Write(data) + if err != nil { + file.Close() + return fmt.Errorf("Failed to flush state to disk. Could not write to "+ + "%s. %v", tempFile, err) + } + + file.Close() + err = os.Rename(tempFile, c.file) + return err +} + +// read loads the persisted state from disk. If the file does not exist then +// the method returns nil and no error. +func (c *Checkpoint) read() (*PersistedState, error) { + c.fileLock.RLock() + defer c.fileLock.RUnlock() + + contents, err := ioutil.ReadFile(c.file) + if err != nil { + if os.IsNotExist(err) { + err = nil + } + return nil, err + } + + ps := &PersistedState{} + err = yaml.Unmarshal(contents, ps) + if err != nil { + return nil, err + } + + return ps, nil +} + +// createDir creates the directory in which the state file will reside if the +// directory does not already exist. +func (c *Checkpoint) createDir() error { + dir := filepath.Dir(c.file) + logp.Info("Creating %s if it does not exist.", dir) + return os.MkdirAll(dir, os.FileMode(0750)) +} diff --git a/journalbeat/checkpoint/checkpoint_test.go b/journalbeat/checkpoint/checkpoint_test.go new file mode 100644 index 000000000000..a5edeb83eadb --- /dev/null +++ b/journalbeat/checkpoint/checkpoint_test.go @@ -0,0 +1,218 @@ +// Licensed to Elasticsearch B.V. under one or more contributor +// license agreements. See the NOTICE file distributed with +// this work for additional information regarding copyright +// ownership. Elasticsearch B.V. licenses this file to you under +// the Apache License, Version 2.0 (the "License"); you may +// not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, +// software distributed under the License is distributed on an +// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY +// KIND, either express or implied. See the License for the +// specific language governing permissions and limitations +// under the License. 
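+ +// For reference, the registry verified by these tests is the YAML written by +// Checkpoint.flush above; it has the following shape (the path, cursor and +// timestamp values here are illustrative only): +// +// update_time: 2018-09-28T15:32:57.000000004Z +// journal_entries: +// - path: /var/log/journal +// cursor: s=75a9f...;i=1ac6;b=dd8c974... +// realtime_timestamp: 1538148764587786 +// monotonic_timestamp: 189748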
+ +// +build !integration + +package checkpoint + +import ( + "io/ioutil" + "os" + "path/filepath" + "runtime" + "testing" + "time" + + "github.com/stretchr/testify/assert" +) + +func eventually(t *testing.T, predicate func() (bool, error), timeout time.Duration) { + const minInterval = time.Millisecond * 5 + const maxInterval = time.Millisecond * 500 + + checkInterval := timeout / 100 + if checkInterval < minInterval { + checkInterval = minInterval + } + if checkInterval > maxInterval { + checkInterval = maxInterval + } + for deadline, first := time.Now().Add(timeout), true; first || time.Now().Before(deadline); first = false { + ok, err := predicate() + if err != nil { + t.Fatal("predicate failed with error:", err) + return + } + if ok { + return + } + time.Sleep(checkInterval) + } + t.Fatal("predicate is not true after", timeout) +} + +// Test that a write is triggered when the maximum number of updates is reached. +func TestWriteMaxUpdates(t *testing.T) { + dir, err := ioutil.TempDir("", "wlb-checkpoint-test") + if err != nil { + t.Fatal(err) + } + defer func() { + err := os.RemoveAll(dir) + if err != nil { + t.Fatal(err) + } + }() + + file := filepath.Join(dir, "some", "new", "dir", ".winlogbeat.yml") + if !assert.False(t, fileExists(file), "%s should not exist", file) { + return + } + + cp, err := NewCheckpoint(file, 2, time.Hour) + if err != nil { + t.Fatal(err) + } + defer cp.Shutdown() + + // Send update - it's not written to disk but it's in memory. + cp.Persist("", "", 123456, 123456) + found := false + eventually(t, func() (bool, error) { + _, found = cp.States()[""] + return found, nil + }, time.Second*15) + assert.True(t, found) + + ps, err := cp.read() + if err != nil { + t.Fatal("read failed", err) + } + assert.Len(t, ps.States, 0) + + // Send update - it is written to disk. + cp.Persist("", "", 123456, 123456) + eventually(t, func() (bool, error) { + ps, err = cp.read() + return ps != nil && len(ps.States) > 0, err + }, time.Second*15) + + if assert.Len(t, ps.States, 1, "state not written, could be a flush timing issue, retry") { + assert.Equal(t, "", ps.States[0].Path) + assert.Equal(t, "", ps.States[0].Cursor) + } +} + +// Test that a write is triggered when the maximum time period since the last +// write is reached. +func TestWriteTimedFlush(t *testing.T) { + dir, err := ioutil.TempDir("", "wlb-checkpoint-test") + if err != nil { + t.Fatal(err) + } + defer func() { + err := os.RemoveAll(dir) + if err != nil { + t.Fatal(err) + } + }() + + file := filepath.Join(dir, ".winlogbeat.yml") + if !assert.False(t, fileExists(file), "%s should not exist", file) { + return + } + + cp, err := NewCheckpoint(file, 100, time.Second) + if err != nil { + t.Fatal(err) + } + defer cp.Shutdown() + + // Send update then wait longer than the flush interval and it should be + // on disk. + cp.Persist("", "cursor", 123456, 123456) + eventually(t, func() (bool, error) { + ps, err := cp.read() + return ps != nil && len(ps.States) > 0, err + }, time.Second*15) + + ps, err := cp.read() + if err != nil { + t.Fatal("read failed", err) + } + if assert.Len(t, ps.States, 1) { + assert.Equal(t, "", ps.States[0].Path) + assert.Equal(t, "cursor", ps.States[0].Cursor) + } +} + +// Test that createDir creates the directory with 0750 permissions. 
+func TestCreateDir(t *testing.T) { + dir, err := ioutil.TempDir("", "wlb-checkpoint-test") + if err != nil { + t.Fatal(err) + } + defer func() { + err := os.RemoveAll(dir) + if err != nil { + t.Fatal(err) + } + }() + + stateDir := filepath.Join(dir, "state", "dir", "does", "not", "exists") + file := filepath.Join(stateDir, ".winlogbeat.yml") + cp := &Checkpoint{file: file} + + if !assert.False(t, fileExists(file), "%s should not exist", file) { + return + } + if err = cp.createDir(); err != nil { + t.Fatal("createDir", err) + } + if !assert.True(t, fileExists(stateDir), "%s should exist", file) { + return + } + + // mkdir on Windows does not pass the POSIX mode to the CreateDirectory + // syscall so doesn't test the mode. + if runtime.GOOS != "windows" { + fileInfo, err := os.Stat(stateDir) + if assert.NoError(t, err) { + assert.Equal(t, true, fileInfo.IsDir()) + assert.Equal(t, os.FileMode(0750), fileInfo.Mode().Perm()) + } + } +} + +// Test createDir when the directory already exists to verify that no error is +// returned. +func TestCreateDirAlreadyExists(t *testing.T) { + dir, err := ioutil.TempDir("", "wlb-checkpoint-test") + if err != nil { + t.Fatal(err) + } + defer func() { + err := os.RemoveAll(dir) + if err != nil { + t.Fatal(err) + } + }() + + file := filepath.Join(dir, ".winlogbeat.yml") + cp := &Checkpoint{file: file} + + if !assert.True(t, fileExists(dir), "%s should exist", file) { + return + } + assert.NoError(t, cp.createDir()) +} + +// fileExists returns true if the specified file exists. +func fileExists(file string) bool { + _, err := os.Stat(file) + return !os.IsNotExist(err) +} diff --git a/journalbeat/checkpoint/file_unix.go b/journalbeat/checkpoint/file_unix.go new file mode 100644 index 000000000000..ee3201d61005 --- /dev/null +++ b/journalbeat/checkpoint/file_unix.go @@ -0,0 +1,26 @@ +// Licensed to Elasticsearch B.V. under one or more contributor +// license agreements. See the NOTICE file distributed with +// this work for additional information regarding copyright +// ownership. Elasticsearch B.V. licenses this file to you under +// the Apache License, Version 2.0 (the "License"); you may +// not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, +// software distributed under the License is distributed on an +// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY +// KIND, either express or implied. See the License for the +// specific language governing permissions and limitations +// under the License. + +// +build !windows + +package checkpoint + +import "os" + +func create(path string) (*os.File, error) { + return os.OpenFile(path, os.O_RDWR|os.O_CREATE|os.O_TRUNC|os.O_SYNC, 0600) +} diff --git a/journalbeat/checkpoint/file_windows.go b/journalbeat/checkpoint/file_windows.go new file mode 100644 index 000000000000..6644d7510960 --- /dev/null +++ b/journalbeat/checkpoint/file_windows.go @@ -0,0 +1,54 @@ +// Licensed to Elasticsearch B.V. under one or more contributor +// license agreements. See the NOTICE file distributed with +// this work for additional information regarding copyright +// ownership. Elasticsearch B.V. licenses this file to you under +// the Apache License, Version 2.0 (the "License"); you may +// not use this file except in compliance with the License. 
+// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, +// software distributed under the License is distributed on an +// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY +// KIND, either express or implied. See the License for the +// specific language governing permissions and limitations +// under the License. + +package checkpoint + +import ( + "os" + "syscall" +) + +const ( + _FILE_FLAG_WRITE_THROUGH = 0x80000000 +) + +func create(path string) (*os.File, error) { + return createWriteThroughFile(path) +} + +// createWriteThroughFile creates a file whose write operations do not go +// through any intermediary cache, they go directly to disk. +func createWriteThroughFile(path string) (*os.File, error) { + if len(path) == 0 { + return nil, syscall.ERROR_FILE_NOT_FOUND + } + pathp, err := syscall.UTF16PtrFromString(path) + if err != nil { + return nil, err + } + + h, err := syscall.CreateFile( + pathp, // Path + syscall.GENERIC_READ|syscall.GENERIC_WRITE, // Access Mode + uint32(syscall.FILE_SHARE_READ|syscall.FILE_SHARE_WRITE), // Share Mode + nil, // Security Attributes + syscall.CREATE_ALWAYS, // Create Mode + uint32(syscall.FILE_ATTRIBUTE_NORMAL|_FILE_FLAG_WRITE_THROUGH), // Flags and Attributes + 0) // Template File + + return os.NewFile(uintptr(h), path), err +} diff --git a/journalbeat/cmd/root.go b/journalbeat/cmd/root.go new file mode 100644 index 000000000000..d1afa4fdfcce --- /dev/null +++ b/journalbeat/cmd/root.go @@ -0,0 +1,30 @@ +// Licensed to Elasticsearch B.V. under one or more contributor +// license agreements. See the NOTICE file distributed with +// this work for additional information regarding copyright +// ownership. Elasticsearch B.V. licenses this file to you under +// the Apache License, Version 2.0 (the "License"); you may +// not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, +// software distributed under the License is distributed on an +// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY +// KIND, either express or implied. See the License for the +// specific language governing permissions and limitations +// under the License. + +package cmd + +import ( + "github.com/elastic/beats/journalbeat/beater" + + cmd "github.com/elastic/beats/libbeat/cmd" +) + +// Name of this beat +var Name = "journalbeat" + +// RootCmd to handle beats cli +var RootCmd = cmd.GenRootCmd(Name, "", beater.New) diff --git a/journalbeat/config/config.go b/journalbeat/config/config.go new file mode 100644 index 000000000000..d502abe608a2 --- /dev/null +++ b/journalbeat/config/config.go @@ -0,0 +1,46 @@ +// Licensed to Elasticsearch B.V. under one or more contributor +// license agreements. See the NOTICE file distributed with +// this work for additional information regarding copyright +// ownership. Elasticsearch B.V. licenses this file to you under +// the Apache License, Version 2.0 (the "License"); you may +// not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, +// software distributed under the License is distributed on an +// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY +// KIND, either express or implied. 
See the License for the +// specific language governing permissions and limitations +// under the License. + +// Config is put into a different package to prevent cyclic imports in case +// it is needed in several locations. + +package config + +import ( + "time" + + "github.com/elastic/beats/libbeat/common" +) + +// Config stores the configuration of Journalbeat +type Config struct { + Inputs []*common.Config `config:"inputs"` + RegistryFile string `config:"registry_file"` + Backoff time.Duration `config:"backoff" validate:"min=0,nonzero"` + BackoffFactor int `config:"backoff_factor" validate:"min=1"` + MaxBackoff time.Duration `config:"max_backoff" validate:"min=0,nonzero"` + Seek string `config:"seek"` +} + +// DefaultConfig is the default configuration of a Journalbeat instance +var DefaultConfig = Config{ + RegistryFile: "registry", + Backoff: 1 * time.Second, + BackoffFactor: 2, + MaxBackoff: 60 * time.Second, + Seek: "tail", +} diff --git a/journalbeat/config/config_test.go b/journalbeat/config/config_test.go new file mode 100644 index 000000000000..f4373cc528be --- /dev/null +++ b/journalbeat/config/config_test.go @@ -0,0 +1,20 @@ +// Licensed to Elasticsearch B.V. under one or more contributor +// license agreements. See the NOTICE file distributed with +// this work for additional information regarding copyright +// ownership. Elasticsearch B.V. licenses this file to you under +// the Apache License, Version 2.0 (the "License"); you may +// not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, +// software distributed under the License is distributed on an +// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY +// KIND, either express or implied. See the License for the +// specific language governing permissions and limitations +// under the License. + +// +build !integration + +package config diff --git a/journalbeat/docs/fields.asciidoc b/journalbeat/docs/fields.asciidoc new file mode 100644 index 000000000000..107dfafd7033 --- /dev/null +++ b/journalbeat/docs/fields.asciidoc @@ -0,0 +1,1085 @@ + +//// +This file is generated! See _meta/fields.yml and scripts/generate_field_docs.py +//// + +[[exported-fields]] += Exported fields + +[partintro] + +-- +This document describes the fields that are exported by Journalbeat. They are +grouped in the following categories: + +* <> +* <> +* <> +* <> +* <> +* <> + +-- +[[exported-fields-beat]] +== Beat fields + +Contains common beat fields available in all event types. + + + +*`beat.name`*:: ++ +-- +The name of the Beat sending the log messages. If the Beat name is set in the configuration file, then that value is used. If it is not set, the hostname is used. To set the Beat name, use the `name` option in the configuration file. + + +-- + +*`beat.hostname`*:: ++ +-- +The hostname as returned by the operating system on which the Beat is running. + + +-- + +*`beat.timezone`*:: ++ +-- +The timezone as returned by the operating system on which the Beat is running. + + +-- + +*`beat.version`*:: ++ +-- +The version of the beat that generated this event. + + +-- + +*`@timestamp`*:: ++ +-- +type: date + +example: August 26th 2016, 12:35:53.332 + +format: date + +required: True + +The timestamp when the event log record was generated. + + +-- + +*`tags`*:: ++ +-- +Arbitrary tags that can be set per Beat and per transaction type. 
+
+
+--
+
+*`fields`*::
++
+--
+type: object
+
+Contains user configurable fields.
+
+
+--
+
+[float]
+== error fields
+
+Error fields containing additional info in case of errors.
+
+
+
+*`error.message`*::
++
+--
+type: text
+
+Error message.
+
+
+--
+
+*`error.code`*::
++
+--
+type: long
+
+Error code.
+
+
+--
+
+*`error.type`*::
++
+--
+type: keyword
+
+Error type.
+
+
+--
+
+[[exported-fields-cloud]]
+== Cloud provider metadata fields
+
+Metadata from cloud providers added by the add_cloud_metadata processor.
+
+
+
+*`meta.cloud.provider`*::
++
+--
+example: ec2
+
+Name of the cloud provider. Possible values are ec2, gce, or digitalocean.
+
+
+--
+
+*`meta.cloud.instance_id`*::
++
+--
+Instance ID of the host machine.
+
+
+--
+
+*`meta.cloud.instance_name`*::
++
+--
+Instance name of the host machine.
+
+
+--
+
+*`meta.cloud.machine_type`*::
++
+--
+example: t2.medium
+
+Machine type of the host machine.
+
+
+--
+
+*`meta.cloud.availability_zone`*::
++
+--
+example: us-east-1c
+
+Availability zone in which this host is running.
+
+
+--
+
+*`meta.cloud.project_id`*::
++
+--
+example: project-x
+
+Name of the project in Google Cloud.
+
+
+--
+
+*`meta.cloud.region`*::
++
+--
+Region in which this host is running.
+
+
+--
+
+[[exported-fields-common]]
+== Common Journalbeat fields
+
+Contains common fields available in all event types.
+
+
+
+[float]
+== coredump fields
+
+Fields used by the systemd-coredump kernel helper.
+
+
+
+*`coredump.unit`*::
++
+--
+type: keyword
+
+Annotations of messages containing coredumps from system units.
+
+
+--
+
+*`coredump.user_unit`*::
++
+--
+type: keyword
+
+Annotations of messages containing coredumps from user units.
+
+
+--
+
+[float]
+== object fields
+
+Fields to log on behalf of a different program.
+
+
+
+[float]
+== audit fields
+
+Audit fields of the event.
+
+
+
+*`object.audit.loginuid`*::
++
+--
+type: long
+
+example: 1000
+
+required: False
+
+The login UID of the object process.
+
+
+--
+
+*`object.audit.session`*::
++
+--
+type: long
+
+example: 3
+
+required: False
+
+The audit session of the object process.
+
+
+--
+
+*`object.cmd`*::
++
+--
+type: keyword
+
+example: /lib/systemd/systemd --user
+
+required: False
+
+The command line of the process.
+
+
+--
+
+*`object.name`*::
++
+--
+type: keyword
+
+example: /lib/systemd/systemd
+
+required: False
+
+Name of the executable.
+
+
+--
+
+*`object.executable`*::
++
+--
+type: keyword
+
+example: /lib/systemd/systemd
+
+required: False
+
+Path to the executable.
+
+
+--
+
+*`object.uid`*::
++
+--
+type: long
+
+required: False
+
+UID of the object process.
+
+
+--
+
+*`object.gid`*::
++
+--
+type: long
+
+required: False
+
+GID of the object process.
+
+
+--
+
+*`object.pid`*::
++
+--
+type: long
+
+required: False
+
+PID of the object process.
+
+
+--
+
+[float]
+== systemd fields
+
+Systemd fields of the event.
+
+
+
+*`object.systemd.owner_uid`*::
++
+--
+type: long
+
+required: False
+
+The UID of the owner.
+
+
+--
+
+*`object.systemd.session`*::
++
+--
+type: keyword
+
+required: False
+
+The ID of the systemd session.
+
+
+--
+
+*`object.systemd.unit`*::
++
+--
+type: keyword
+
+required: False
+
+The name of the systemd unit.
+
+
+--
+
+*`object.systemd.user_unit`*::
++
+--
+type: keyword
+
+required: False
+
+The name of the systemd user unit.
+
+
+--
+
+[float]
+== kernel fields
+
+Kernel fields of the event.
+
+
+
+*`kernel.device`*::
++
+--
+type: keyword
+
+required: False
+
+The kernel device name.
+
+
+--
+
+*`kernel.subsystem`*::
++
+--
+type: keyword
+
+required: False
+
+The kernel subsystem name.
+
+
+--
+
+*`kernel.device_symlinks`*::
++
+--
+type: text
+
+required: False
+
+Additional symlink names pointing to the device node in /dev.
+
+
+--
+
+*`kernel.device_node_path`*::
++
+--
+type: text
+
+required: False
+
+The device node path of this device in /dev.
+
+
+--
+
+*`kernel.device_name`*::
++
+--
+type: text
+
+required: False
+
+The kernel device name as it shows up in the device tree below /sys.
+
+
+--
+
+[float]
+== process fields
+
+Fields of the process that logged the message.
+
+
+
+[float]
+== audit fields
+
+Audit fields of the event.
+
+
+
+*`process.audit.loginuid`*::
++
+--
+type: long
+
+example: 1000
+
+required: False
+
+The login UID of the source process.
+
+
+--
+
+*`process.audit.session`*::
++
+--
+type: long
+
+example: 3
+
+required: False
+
+The audit session of the source process.
+
+
+--
+
+*`process.cmd`*::
++
+--
+type: keyword
+
+example: /lib/systemd/systemd --user
+
+required: False
+
+The command line of the process.
+
+
+--
+
+*`process.name`*::
++
+--
+type: keyword
+
+example: /lib/systemd/systemd
+
+required: False
+
+Name of the executable.
+
+
+--
+
+*`process.executable`*::
++
+--
+type: keyword
+
+example: /lib/systemd/systemd
+
+required: False
+
+Path to the executable.
+
+
+--
+
+*`process.pid`*::
++
+--
+type: long
+
+example: 1
+
+required: False
+
+The ID of the process which logged the message.
+
+
+--
+
+*`process.gid`*::
++
+--
+type: long
+
+example: 1
+
+required: False
+
+The ID of the group which runs the process.
+
+
+--
+
+*`process.uid`*::
++
+--
+type: long
+
+example: 1
+
+required: False
+
+The ID of the user which runs the process.
+
+
+--
+
+[float]
+== systemd fields
+
+Fields of systemd.
+
+
+
+*`systemd.invocation_id`*::
++
+--
+type: keyword
+
+example: 8450f1672de646c88cd133aadd4f2d70
+
+required: False
+
+The invocation ID for the runtime cycle of the unit the message was generated in.
+
+
+--
+
+*`systemd.cgroup`*::
++
+--
+type: keyword
+
+example: /user.slice/user-1234.slice/session-2.scope
+
+required: False
+
+The control group path in the systemd hierarchy.
+
+
+--
+
+*`systemd.owner_uid`*::
++
+--
+type: long
+
+required: False
+
+The owner UID of the systemd user unit or systemd session.
+
+
+--
+
+*`systemd.session`*::
++
+--
+type: keyword
+
+required: False
+
+The ID of the systemd session.
+
+
+--
+
+*`systemd.slice`*::
++
+--
+type: keyword
+
+example: user-1234.slice
+
+required: False
+
+The systemd slice unit.
+
+
+--
+
+*`systemd.unit`*::
++
+--
+type: keyword
+
+example: nginx.service
+
+required: False
+
+The name of the systemd unit.
+
+
+--
+
+*`systemd.user_unit`*::
++
+--
+type: keyword
+
+example: user-1234.slice
+
+required: False
+
+The name of the systemd user unit.
+
+
+--
+
+*`systemd.transport`*::
++
+--
+type: keyword
+
+example: syslog
+
+required: True
+
+How the log message was received by journald.
+
+
+--
+
+[float]
+== host fields
+
+Fields of the host.
+
+
+
+*`host.boot_id`*::
++
+--
+type: text
+
+example: dd8c974asdf01dbe2ef26d7fasdf264c9
+
+required: False
+
+The boot ID for the boot the log was generated in.
+
+
+--
+
+[float]
+== code fields
+
+Fields of the code generating the event.
+
+
+
+*`code.file`*::
++
+--
+type: text
+
+example: ../src/core/manager.c
+
+required: False
+
+The name of the source file where the log is generated.
+
+
+--
+
+*`code.function`*::
++
+--
+type: text
+
+example: job_log_status_message
+
+required: False
+
+The name of the function which generated the log message.
+
+
+--
+
+*`code.line`*::
++
+--
+type: long
+
+example: 123
+
+required: False
+
+The line number of the code which generated the log message.
+
+
+--
+
+[float]
+== syslog fields
+
+Syslog compatibility fields of the event.
+
+
+
+*`syslog.priority`*::
++
+--
+type: long
+
+example: 1
+
+required: False
+
+The priority of the message. A syslog compatibility field.
+
+
+--
+
+*`syslog.facility`*::
++
+--
+type: long
+
+example: 1
+
+required: False
+
+The facility of the message. A syslog compatibility field.
+
+
+--
+
+*`syslog.identifier`*::
++
+--
+type: text
+
+example: su
+
+required: False
+
+The identifier of the message. A syslog compatibility field.
+
+
+--
+
+*`message`*::
++
+--
+type: text
+
+required: True
+
+The logged message.
+
+
+--
+
+[[exported-fields-docker-processor]]
+== Docker fields
+
+Docker metadata added by the add_docker_metadata processor.
+
+
+
+
+*`docker.container.id`*::
++
+--
+type: keyword
+
+Unique container id.
+
+
+--
+
+*`docker.container.image`*::
++
+--
+type: keyword
+
+Name of the image the container was built on.
+
+
+--
+
+*`docker.container.name`*::
++
+--
+type: keyword
+
+Container name.
+
+
+--
+
+*`docker.container.labels`*::
++
+--
+type: object
+
+Image labels.
+
+
+--
+
+[[exported-fields-host-processor]]
+== Host fields
+
+Info collected for the host machine.
+
+
+
+
+*`host.name`*::
++
+--
+type: keyword
+
+Hostname.
+
+
+--
+
+*`host.id`*::
++
+--
+type: keyword
+
+Unique host id.
+
+
+--
+
+*`host.architecture`*::
++
+--
+type: keyword
+
+Host architecture (e.g. x86_64, arm, ppc, mips).
+
+
+--
+
+*`host.os.platform`*::
++
+--
+type: keyword
+
+OS platform (e.g. centos, ubuntu, windows).
+
+
+--
+
+*`host.os.version`*::
++
+--
+type: keyword
+
+OS version.
+
+
+--
+
+*`host.os.family`*::
++
+--
+type: keyword
+
+OS family (e.g. redhat, debian, freebsd, windows).
+
+
+--
+
+*`host.os.kernel`*::
++
+--
+type: keyword
+
+The operating system's kernel version.
+
+
+--
+
+*`host.ip`*::
++
+--
+type: ip
+
+List of IP-addresses.
+
+
+--
+
+*`host.mac`*::
++
+--
+type: keyword
+
+List of hardware-addresses, usually MAC-addresses.
+
+
+--
+
+[[exported-fields-kubernetes-processor]]
+== Kubernetes fields
+
+Kubernetes metadata added by the kubernetes processor.
+
+
+
+
+*`kubernetes.pod.name`*::
++
+--
+type: keyword
+
+Kubernetes pod name.
+
+
+--
+
+*`kubernetes.pod.uid`*::
++
+--
+type: keyword
+
+Kubernetes Pod UID.
+
+
+--
+
+*`kubernetes.namespace`*::
++
+--
+type: keyword
+
+Kubernetes namespace.
+
+
+--
+
+*`kubernetes.node.name`*::
++
+--
+type: keyword
+
+Kubernetes node name.
+
+
+--
+
+*`kubernetes.labels`*::
++
+--
+type: object
+
+Kubernetes labels map.
+
+
+--
+
+*`kubernetes.annotations`*::
++
+--
+type: object
+
+Kubernetes annotations map.
+
+
+--
+
+*`kubernetes.container.name`*::
++
+--
+type: keyword
+
+Kubernetes container name.
+
+
+--
+
+*`kubernetes.container.image`*::
++
+--
+type: keyword
+
+Kubernetes container image.
+
+
+--
+
diff --git a/journalbeat/docs/index.asciidoc b/journalbeat/docs/index.asciidoc
new file mode 100644
index 000000000000..348b79e498ee
--- /dev/null
+++ b/journalbeat/docs/index.asciidoc
@@ -0,0 +1,5 @@
+= Journalbeat Docs
+
+Welcome to the Journalbeat documentation.
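+
+As a quick, illustrative sketch (the journal directory path below is an
+assumption, not a shipped default), a configuration that reads a journal
+directory and resumes from the position saved in the registry might look
+like this:
+
+["source","yaml"]
+----
+journalbeat.inputs:
+- paths: ["/var/log/journal"]
+  seek: cursor
+----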
+ + diff --git a/journalbeat/include/fields.go b/journalbeat/include/fields.go new file mode 100644 index 000000000000..0a8db9a3bddd --- /dev/null +++ b/journalbeat/include/fields.go @@ -0,0 +1,35 @@ +// Licensed to Elasticsearch B.V. under one or more contributor +// license agreements. See the NOTICE file distributed with +// this work for additional information regarding copyright +// ownership. Elasticsearch B.V. licenses this file to you under +// the Apache License, Version 2.0 (the "License"); you may +// not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, +// software distributed under the License is distributed on an +// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY +// KIND, either express or implied. See the License for the +// specific language governing permissions and limitations +// under the License. + +// Code generated by beats/dev-tools/cmd/asset/asset.go - DO NOT EDIT. + +package include + +import ( + "github.com/elastic/beats/libbeat/asset" +) + +func init() { + if err := asset.SetFields("journalbeat", "fields.yml", Asset); err != nil { + panic(err) + } +} + +// Asset returns asset data +func Asset() string { + return "eJzsW11z27rRvvev2PHN+3ZGom3ZcXJ80ambtCdumzbTk3OtAwFLCjEJ8ACgbfXXdwCCJCiBEvXhZDqTXMQmAe4+u9hPAJ7CI67ugMqikOIMwHCT4x2cv3cv4G+yUoLkCyTm/AyAoaaKl4ZLcQd/PAMAeC+FIVxoTwJSjjnTQJ4Iz8kiR+ACSJ4DPqEwYFYl6uQM/LQ7R2IKghRoQShkVVG6l+Dm3kGmZNW8ibCv//21ZlppZLBYgV5pgwWbNgThEZXAHJaYl6gS/2EIIYRRCW4C2jWMR1w9S8WC94NgAO6FkIbYEQ0yhQK1JhlaFTldcZG1smpIlSw8YsdaJ5uINKr594BlGYegGkBy8RWpOXidjIRcZiAFLHBJ8tSiIcB4mqKyVlIqmSlS7FooUrGISkIgOxQCcG9JNDYr09pKk2DSOu+Qfy4zLirO1mjWMHIpsrUBhb9XXCG7g5TkGtdG8YUUpfW9q8vLy7WxrTIAfFliDQZ+ffhgxTBL9GtkdUlR6yQqgUatufP7UwpwvT96t5QNnF0StPGiYCMcYhh0C/j8IueLCx81mp8wnVrzPx9nSlYIGwGJYJBzgY0Mg+Dt/6+IfiTsf5KihYovSCtjY/Ym2m7sKMxbkHwmZmkDg0UyhGa80G3k5JsmsmbYB6Ed4WcNhOyVIPw8HkL5ShA+j4fgV+i4aP2L98wD47V8FjaPnipgjwhroZlY5vtH4U3vOgmwDlcT7TyIOMK12uPV4YkgLDUALYYBdJHy6PtAbMqlfrVUV5/fu1pi+MTpq4VvqxJfZdeMHNdIHKgWtba+AZCW1wCWGuhcr4qci0e9gcjgizkWzj1j3L4nOXg+jrmGUnJhbMntc16jNclcy3TB8GkQsZ00L4lZvgrkL2toLKPa0rluBnYijFU3pwK3aWdANNj6cSmfNVSlhRfo1ChEWGAun8FWCn3n9Bnre3vnj14mWOCNXkbLStFIMQ3/M73MkAQ/epnd6H/0MsdX8Z1HjreBzv/8ysPzktOldc8MmRvwe0gn7nqOA+uCpoeqKqG32+5RHeJxQF25uA1nvHnaP0fJtCGxKw1x8SSp2yKcR/RymEu/u3lzmV7dvp0xvL25pe/eUXZ1fU0IYzfpjL293CMqdfCsGlOpnM5UJQwvEOiK5m0EsGV4aKLwTDRkKFARgwy4iMTg9XR7VCCzq5vonFN0v06vZtc3/tnnhuks0VSWuFdYFkbJ3Nu4q8x8rdNE+iVHRRRdrjbli/W/p9sM+NI0ub3Evd4ZgVTDLedwKj95izCiAW7R5Mf2TZ1VrFnCHivfwrTfrTXDe55cjAEqMi5eEo3qaT+Yuzv3Qw41Xle1I1r5ELhRROhSqv2AG1XFceuVzmU2Eu5H+exg2pYjDGwKKfKn+vTra31mx/o5ZCn14Qc2XjmWxq4MspDSxHLH6Oav0wxj7+hPb2+IZunlFVvgDNPZLXub2hez2xv60x5rbGGFOcM9N5qMp4buWJLhsZqzNBoWrum3VWHYnw2pM+WRMvUAXSbJhVb0gkqFFwURJEOV0EN9pO5lLDR4XqLCVpE80OOm46SVoCYW1w+Q56tczHOZzbUhptJz7wsHCtQA88VYZwprrrYpkm2PTlE+zq7HI3ctmaiKBaqeeY0CH5SUucy+m1mXikvFzerblt4N1wZ+oxm49/qwbW9JDF/w3E5z6CN2TKgb/7bgG65HgucMheEpR3UKN9TVPtV7y/oAGRr8/pOzAeDRhDuAy2812V629ZEzfyFmgcR012H+XD+NuP5iv9v3DkxPPksgCTZUtmAPQ5hFCBoFa9ww8HydwEMwy33Gu+1ujaZpIqgUKc8qVbdYNsBP7Hs7SAw8kbyyX7p7No4mN/ZRSBMSm7S1gufk53+RjlUPx8SOuVe/2cffWjrSSTyMK9lUWsNxt+JabK5yMpUSdeXkzunKJpj5s4M2LbTAA92pSggusgga25H+R4oRaJqZr4nmCVXQUW0B4yc2ZuXM2S1+mFe4bqJ8j9P5n6wo2pCiPO/5JyMGt/lnKlVBTG9eG2Luq6zSBma3Zgmzy6vbCVzN7q7f3L25Tq6vZ+O06yDZUkV0Gco5iEIqFevXf2tCGZLp7Vzu1YIbRdTKza21RYkNBc7eS1T1QhHB3IPrH0i/ErJ6WmNcR4eeHnvXruqHeazzGADaxirX3LQ+ZQNUzWwNASol1b4lwl/sR00EDG6Wke4gj
ItUWs+mRLv45fjoXSVDP/DDUNbakoNqaIO1XFDnw1BK30ndEom0jKtyzA7CTuq1mTR3NnNZsS5HvbePUCr5xBlaMQ1hxJB42vrkR+trfrT3qbZr1YUgwtjcTZg3JJtNSqkGs5idmrivkobsumMj3eG94X5+H2ECn6XW3Bquy0kaiEJLcAIZxQlIBYxn3JBcUiQiGcTGhTZEUOwa1gEsD35isG9kkwgUhC65WHfdGIfdmanlEeb1cVz8hHlgZ62ezSwpkPGq2M79U03Cmdh+zH2Z4yq2eZDyWgSVniLRZnpFdwTSgBC4jMi7bMd1DYfrLs1tMTkXG9tVbaH4kenLeNPzn1gsP0uZ5Vh72jB3hdnOVPtvN2eXfN7RmaSPzn+8p39oniPE6zGwbbENv3mO1OZs5+b1mPVZvZTKzOsM0JX3RNClVA2/aevlA9e1W1gQzQ9DcdznBFTJqBOGbZfwBP+9wo4g8Ei/E7ArYuljL46hXThyTXXqAdhCYlHx3EBsI7mDMvLAdAuS9y3P+N2WjldOFphvXm7p1RKwvZ7YgeXBaaLm0xqt32/0JvuxfooQebDFQGCofn+uH3o627Tvd1rm1r3OIbs8fk0++rYi1nSfxNLrABExcqLokhukplInkKFHDv4fkyyBl3e389ubCRBVTKAs6QQKXuo/RA6ZdFLmxNiS/jgk//oFGkIeA0VhpJ5AtaiEqSbwzAWTzwMg+h3P4Rg8nSiPlBQ839wG2pdFTcYLqZAtiZkAwwUnYgKpQlxotkPa3hXDA5F8ifSb/6eb21aDeuCbJ6Z85J2lf3BtbDh9+DwljCnUGiMn9AWhxwnWsFkSxZ6Jwo7ZBCpdkTxfwaf79yGGJoo9VgsrvkHdxbK/h+8ibLvxtgjvV9QdUQgj2fak3H20M/z1QMNeQbCU7ATJKdBAKVkdWaOsYmfRh3L6LBn8+vBhk5G7b1mSUWe441h1FDeZ2f7vpBp0dzDjKhyb2scxqqlBQcpNTqT707FTsQtIxnmeslwK+NJe5bSN7QkKxijfmu5/AwAA//8V2TBe"
+}
diff --git a/journalbeat/input/config.go b/journalbeat/input/config.go
new file mode 100644
index 000000000000..bc38217696fd
--- /dev/null
+++ b/journalbeat/input/config.go
@@ -0,0 +1,64 @@
+// Licensed to Elasticsearch B.V. under one or more contributor
+// license agreements. See the NOTICE file distributed with
+// this work for additional information regarding copyright
+// ownership. Elasticsearch B.V. licenses this file to you under
+// the Apache License, Version 2.0 (the "License"); you may
+// not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied. See the License for the
+// specific language governing permissions and limitations
+// under the License.
+
+package input
+
+import (
+	"fmt"
+	"time"
+)
+
+// Config stores the options of an input.
+type Config struct {
+	// Paths stores the paths to the journal files to be read.
+	Paths []string `config:"paths"`
+	// Backoff is the current interval to wait before
+	// attempting to read again from the journal.
+	Backoff time.Duration `config:"backoff" validate:"min=0,nonzero"`
+	// BackoffFactor is the multiplier of Backoff.
+	BackoffFactor int `config:"backoff_factor" validate:"min=1"`
+	// MaxBackoff is the limit of the backoff time.
+	MaxBackoff time.Duration `config:"max_backoff" validate:"min=0,nonzero"`
+	// Seek is the method to read from journals.
+	Seek string `config:"seek"`
+}
+
+var (
+	// DefaultConfig holds the default configuration of an input.
+	DefaultConfig = Config{
+		Backoff:       1 * time.Second,
+		BackoffFactor: 2,
+		MaxBackoff:    60 * time.Second,
+		Seek:          "tail",
+	}
+)
+
+// Validate checks the configuration of the input.
+func (c *Config) Validate() error {
+	correctSeek := false
+	for _, s := range []string{"cursor", "head", "tail"} {
+		if c.Seek == s {
+			correctSeek = true
+		}
+	}
+
+	if !correctSeek {
+		return fmt.Errorf("incorrect value for seek: %s. possible values: cursor, head, tail", c.Seek)
+	}
+
+	return nil
+}
diff --git a/journalbeat/input/input.go b/journalbeat/input/input.go
new file mode 100644
index 000000000000..fd0255c531b6
--- /dev/null
+++ b/journalbeat/input/input.go
@@ -0,0 +1,179 @@
+// Licensed to Elasticsearch B.V. under one or more contributor
+// license agreements. See the NOTICE file distributed with
+// this work for additional information regarding copyright
+// ownership. Elasticsearch B.V. licenses this file to you under
+// the Apache License, Version 2.0 (the "License"); you may
+// not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied. See the License for the
+// specific language governing permissions and limitations
+// under the License.
+
+package input
+
+import (
+	"fmt"
+	"sync"
+
+	uuid "github.com/satori/go.uuid"
+
+	"github.com/elastic/beats/journalbeat/checkpoint"
+	"github.com/elastic/beats/journalbeat/reader"
+	"github.com/elastic/beats/libbeat/beat"
+	"github.com/elastic/beats/libbeat/common"
+	"github.com/elastic/beats/libbeat/logp"
+)
+
+// Input manages readers and forwards entries from journals.
+type Input struct {
+	readers  []*reader.Reader
+	done     chan struct{}
+	config   Config
+	pipeline beat.Pipeline
+	states   map[string]checkpoint.JournalState
+	id       uuid.UUID
+	logger   *logp.Logger
+}
+
+// New returns a new Input.
+func New(
+	c *common.Config,
+	pipeline beat.Pipeline,
+	done chan struct{},
+	states map[string]checkpoint.JournalState,
+) (*Input, error) {
+	config := DefaultConfig
+	if err := c.Unpack(&config); err != nil {
+		return nil, err
+	}
+
+	id := uuid.NewV4()
+	logger := logp.NewLogger("input").With("id", id)
+
+	var readers []*reader.Reader
+	if len(config.Paths) == 0 {
+		cfg := reader.Config{
+			Path:          reader.LocalSystemJournalID, // used to identify the state in the registry
+			Backoff:       config.Backoff,
+			MaxBackoff:    config.MaxBackoff,
+			BackoffFactor: config.BackoffFactor,
+			Seek:          config.Seek,
+		}
+
+		state := states[reader.LocalSystemJournalID]
+		r, err := reader.NewLocal(cfg, done, state, logger)
+		if err != nil {
+			return nil, fmt.Errorf("error creating reader for local journal: %v", err)
+		}
+		readers = append(readers, r)
+	}
+
+	for _, p := range config.Paths {
+		cfg := reader.Config{
+			Path:          p,
+			Backoff:       config.Backoff,
+			MaxBackoff:    config.MaxBackoff,
+			BackoffFactor: config.BackoffFactor,
+			Seek:          config.Seek,
+		}
+		state := states[p]
+		r, err := reader.New(cfg, done, state, logger)
+		if err != nil {
+			return nil, fmt.Errorf("error creating reader for journal: %v", err)
+		}
+		readers = append(readers, r)
+	}
+
+	logger.Debugf("New input is created for paths %v", config.Paths)
+
+	return &Input{
+		readers:  readers,
+		done:     done,
+		config:   config,
+		pipeline: pipeline,
+		states:   states,
+		id:       id,
+		logger:   logger,
+	}, nil
+}
+
+// Run connects to the output, collects entries from the readers
+// and then publishes the events.
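+//
+// A minimal usage sketch (illustrative only; it assumes a connected
+// beat.Pipeline, a done channel, and checkpoint states loaded by the caller):
+//
+//	input, err := New(cfg, pipeline, done, states)
+//	if err != nil {
+//		return err
+//	}
+//	go input.Run()
+//	// ... later, on shutdown:
+//	input.Stop()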
+func (i *Input) Run() {
+	client, err := i.pipeline.ConnectWith(beat.ClientConfig{
+		PublishMode:   beat.GuaranteedSend,
+		EventMetadata: common.EventMetadata{},
+		Meta:          nil,
+		Processor:     nil,
+		ACKCount: func(n int) {
+			i.logger.Infof("journalbeat successfully published %d events", n)
+		},
+	})
+	if err != nil {
+		i.logger.Errorf("Error connecting to output: %v", err)
+		return
+	}
+	defer client.Close()
+
+	i.publishAll(client)
+}
+
+func (i *Input) publishAll(client beat.Client) {
+	out := make(chan *beat.Event)
+	defer close(out)
+
+	var wg sync.WaitGroup
+	merge := func(in chan *beat.Event) {
+		wg.Add(1)
+
+		go func(c chan *beat.Event) {
+			defer wg.Done()
+			for {
+				select {
+				case <-i.done:
+					return
+				case v, ok := <-c:
+					if !ok {
+						return
+					}
+					out <- v
+				}
+			}
+		}(in)
+	}
+
+	// merge channels of readers into a single output channel
+	for _, r := range i.readers {
+		c := r.Follow()
+		merge(c)
+	}
+
+loop:
+	for {
+		select {
+		case <-i.done:
+			break loop
+		case e := <-out:
+			client.Publish(*e)
+		}
+	}
+	wg.Wait()
+}
+
+// Stop stops all readers of the input.
+func (i *Input) Stop() {
+	for _, r := range i.readers {
+		r.Close()
+	}
+}
+
+// Wait waits until all readers are done.
+func (i *Input) Wait() {
+	i.Stop()
+}
diff --git a/journalbeat/journalbeat.reference.yml b/journalbeat/journalbeat.reference.yml
new file mode 100644
index 000000000000..718208cb8119
--- /dev/null
+++ b/journalbeat/journalbeat.reference.yml
@@ -0,0 +1,1155 @@
+###################### Journalbeat Configuration Example #########################
+
+# This file is the full reference configuration file: it documents all the
+# supported options with comments. The journalbeat.yml file from the same
+# directory highlights only the most common options; you can use this file
+# as a reference for the rest.
+#
+# You can find the full configuration reference here:
+# https://www.elastic.co/guide/en/beats/journalbeat/index.html
+
+#=========================== Journalbeat inputs =============================
+
+journalbeat.inputs:
+  # Paths that should be crawled and fetched. Possible values are files and
+  # directories. When setting a directory, all journals under it are merged.
+  # When empty, Journalbeat starts reading from the local journal.
+- paths: []
+
+  # The number of seconds to wait before trying to read again from journals.
+  #backoff: 1s
+  # Multiplier of the backoff value.
+  #backoff_factor: 2
+  # The maximum number of seconds to wait before attempting to read again from journals.
+  #max_backoff: 60s
+
+  # Position to start reading from the journal. Valid values: head, tail, cursor
+  seek: tail
+
+#========================= Journalbeat global options ============================
+#journalbeat:
+  # Name of the registry file. If a relative path is used, it is considered relative to the
+  # data path.
+  #registry_file: registry
+
+  # The number of seconds to wait before trying to read again from journals.
+  #backoff: 1s
+  # Multiplier of the backoff value.
+  #backoff_factor: 2
+  # The maximum number of seconds to wait before attempting to read again from journals.
+  #max_backoff: 60s
+
+  # Position to start reading from all journals. Possible values: head, tail, cursor
+  #seek: head
+
+#================================ General ======================================
+
+# The name of the shipper that publishes the network data. It can be used to group
+# all the transactions sent by a single shipper in the web interface.
+# If this option is not defined, the hostname is used.
+#name:
+
+# The tags of the shipper are included in their own field with each
+# transaction published. Tags make it easy to group servers by different
+# logical properties.
+#tags: ["service-X", "web-tier"]
+
+# Optional fields that you can specify to add additional information to the
+# output. Fields can be scalar values, arrays, dictionaries, or any nested
+# combination of these.
+#fields:
+#  env: staging
+
+# If this option is set to true, the custom fields are stored as top-level
+# fields in the output document instead of being grouped under a fields
+# sub-dictionary. Default is false.
+#fields_under_root: false
+
+# Internal queue configuration for buffering events to be published.
+#queue:
+  # Queue type by name (default 'mem')
+  # The memory queue will present all available events (up to the outputs
+  # bulk_max_size) to the output, the moment the output is ready to serve
+  # another batch of events.
+  #mem:
+    # Max number of events the queue can buffer.
+    #events: 4096
+
+    # Hints the minimum number of events stored in the queue,
+    # before providing a batch of events to the outputs.
+    # The default value is set to 2048.
+    # A value of 0 ensures events are immediately available
+    # to be sent to the outputs.
+    #flush.min_events: 2048
+
+    # Maximum duration after which events are available to the outputs,
+    # if the number of events stored in the queue is < flush.min_events.
+    #flush.timeout: 1s
+
+  # The spool queue will store events in a local spool file, before
+  # forwarding the events to the outputs.
+  #
+  # Beta: spooling to disk is currently a beta feature. Use with care.
+  #
+  # The spool file is a circular buffer, which blocks once the file/buffer is full.
+  # Events are put into a write buffer and flushed once the write buffer
+  # is full or the flush_timeout is triggered.
+  # Once ACKed by the output, events are removed immediately from the queue,
+  # making space for new events to be persisted.
+  #spool:
+    # The file namespace configures the file path and the file creation settings.
+    # Once the file exists, the `size`, `page_size` and `prealloc` settings
+    # will have no more effect.
+    #file:
+      # Location of spool file. The default value is ${path.data}/spool.dat.
+      #path: "${path.data}/spool.dat"
+
+      # Configure file permissions if file is created. The default value is 0600.
+      #permissions: 0600
+
+      # File size hint. The spool blocks, once this limit is reached. The default value is 100 MiB.
+      #size: 100MiB
+
+      # The file's page size. A file is split into multiple pages of the same size. The default value is 4KiB.
+      #page_size: 4KiB
+
+      # If prealloc is set, the required space for the file is reserved using
+      # truncate. The default value is true.
+      #prealloc: true
+
+    # Spool writer settings
+    # Events are serialized into a write buffer. The write buffer is flushed if:
+    # - The buffer limit has been reached.
+    # - The configured limit of buffered events is reached.
+    # - The flush timeout is triggered.
+    #write:
+      # Sets the write buffer size.
+      #buffer_size: 1MiB
+
+      # Maximum duration after which events are flushed, if the write buffer
+      # is not full yet. The default value is 1s.
+      #flush.timeout: 1s
+
+      # Number of maximum buffered events. The write buffer is flushed once the
+      # limit is reached.
+      #flush.events: 16384
+
+      # Configure the on-disk event encoding. The encoding can be changed
+      # between restarts.
+      # Valid encodings are: json, ubjson, and cbor.
+      #codec: cbor
+    #read:
+      # Reader flush timeout, waiting for more events to become available, so
+      # to fill a complete batch, as required by the outputs.
+      # If flush_timeout is 0, all available events are forwarded to the
+      # outputs immediately.
+      # The default value is 0s.
+      #flush.timeout: 0s
+
+# Sets the maximum number of CPUs that can be executing simultaneously. The
+# default is the number of logical CPUs available in the system.
+#max_procs:
+
+#================================ Processors ===================================
+
+# Processors are used to reduce the number of fields in the exported event or to
+# enhance the event with external metadata. This section defines a list of
+# processors that are applied one by one and the first one receives the initial
+# event:
+#
+#   event -> filter1 -> event1 -> filter2 -> event2 ...
+#
+# The supported processors are drop_fields, drop_event, include_fields, and
+# add_cloud_metadata.
+#
+# For example, you can use the following processors to keep the fields that
+# contain CPU load percentages, but remove the fields that contain CPU ticks
+# values:
+#
+#processors:
+#- include_fields:
+#    fields: ["cpu"]
+#- drop_fields:
+#    fields: ["cpu.user", "cpu.system"]
+#
+# The following example drops the events that have the HTTP response code 200:
+#
+#processors:
+#- drop_event:
+#    when:
+#      equals:
+#        http.code: 200
+#
+# The following example renames the field a to b:
+#
+#processors:
+#- rename:
+#    fields:
+#      - from: "a"
+#        to: "b"
+#
+# The following example tokenizes the string into fields:
+#
+#processors:
+#- dissect:
+#    tokenizer: "%{key1} - %{key2}"
+#    field: "message"
+#    target_prefix: "dissect"
+#
+# The following example enriches each event with metadata from the cloud
+# provider about the host machine. It works on EC2, GCE, DigitalOcean,
+# Tencent Cloud, and Alibaba Cloud.
+#
+#processors:
+#- add_cloud_metadata: ~
+#
+# The following example enriches each event with the machine's local time zone
+# offset from UTC.
+#
+#processors:
+#- add_locale:
+#    format: offset
+#
+# The following example enriches each event with docker metadata; it matches
+# given fields to an existing container id and adds info from that container:
+#
+#processors:
+#- add_docker_metadata:
+#    host: "unix:///var/run/docker.sock"
+#    match_fields: ["system.process.cgroup.id"]
+#    match_pids: ["process.pid", "process.ppid"]
+#    match_source: true
+#    match_source_index: 4
+#    match_short_id: false
+#    cleanup_timeout: 60
+#    # To connect to Docker over TLS you must specify a client and CA certificate.
+#    #ssl:
+#    #  certificate_authority: "/etc/pki/root/ca.pem"
+#    #  certificate: "/etc/pki/client/cert.pem"
+#    #  key: "/etc/pki/client/cert.key"
+#
+# The following example enriches each event with docker metadata; it matches
+# the container id from the log path available in the `source` field (by default it expects
+# it to be /var/lib/docker/containers/*/*.log).
+#
+#processors:
+#- add_docker_metadata: ~
+#
+# The following example enriches each event with host metadata.
+#
+#processors:
+#- add_host_metadata:
+#    netinfo.enabled: false
+#
+
+#============================= Elastic Cloud ==================================
+
+# These settings simplify using journalbeat with the Elastic Cloud (https://cloud.elastic.co/).
+
+# The cloud.id setting overwrites the `output.elasticsearch.hosts` and
+# `setup.kibana.host` options.
+# You can find the `cloud.id` in the Elastic Cloud web UI.
+#cloud.id:
+
+# The cloud.auth setting overwrites the `output.elasticsearch.username` and
+# `output.elasticsearch.password` settings. The format is `<user>:<pass>`.
+#cloud.auth:
+
+#================================ Outputs ======================================
+
+# Configure what output to use when sending the data collected by the beat.
+
+#-------------------------- Elasticsearch output -------------------------------
+output.elasticsearch:
+  # Boolean flag to enable or disable the output module.
+  #enabled: true
+
+  # Array of hosts to connect to.
+  # Scheme and port can be left out and will be set to the default (http and 9200)
+  # In case you specify an additional path, the scheme is required: http://localhost:9200/path
+  # IPv6 addresses should always be defined as: https://[2001:db8::1]:9200
+  hosts: ["localhost:9200"]
+
+  # Set gzip compression level.
+  #compression_level: 0
+
+  # Configure escaping html symbols in strings.
+  #escape_html: true
+
+  # Optional protocol and basic auth credentials.
+  #protocol: "https"
+  #username: "elastic"
+  #password: "changeme"
+
+  # Dictionary of HTTP parameters to pass within the url with index operations.
+  #parameters:
+    #param1: value1
+    #param2: value2
+
+  # Number of workers per Elasticsearch host.
+  #worker: 1
+
+  # Optional index name. The default is "journalbeat" plus date
+  # and generates [journalbeat-]YYYY.MM.DD keys.
+  # In case you modify this pattern you must update setup.template.name and setup.template.pattern accordingly.
+  #index: "journalbeat-%{[beat.version]}-%{+yyyy.MM.dd}"
+
+  # Optional ingest node pipeline. By default no pipeline will be used.
+  #pipeline: ""
+
+  # Optional HTTP Path
+  #path: "/elasticsearch"
+
+  # Custom HTTP headers to add to each request
+  #headers:
+  #  X-My-Header: Contents of the header
+
+  # Proxy server url
+  #proxy_url: http://proxy:3128
+
+  # The number of times a particular Elasticsearch index operation is attempted. If
+  # the indexing operation doesn't succeed after this many retries, the events are
+  # dropped. The default is 3.
+  #max_retries: 3
+
+  # The maximum number of events to bulk in a single Elasticsearch bulk API index request.
+  # The default is 50.
+  #bulk_max_size: 50
+
+  # The number of seconds to wait before trying to reconnect to Elasticsearch
+  # after a network error. After waiting backoff.init seconds, the Beat
+  # tries to reconnect. If the attempt fails, the backoff timer is increased
+  # exponentially up to backoff.max. After a successful connection, the backoff
+  # timer is reset. The default is 1s.
+  #backoff.init: 1s
+
+  # The maximum number of seconds to wait before attempting to connect to
+  # Elasticsearch after a network error. The default is 60s.
+  #backoff.max: 60s
+
+  # Configure http request timeout before failing a request to Elasticsearch.
+  #timeout: 90
+
+  # Use SSL settings for HTTPS.
+  #ssl.enabled: true
+
+  # Configure SSL verification mode. If `none` is configured, all server hosts
+  # and certificates will be accepted. In this mode, SSL based connections are
+  # susceptible to man-in-the-middle attacks. Use only for testing. Default is
+  # `full`.
+  #ssl.verification_mode: full
+
+  # List of supported/valid TLS versions. By default all TLS versions 1.0 up to
+  # 1.2 are enabled.
+  #ssl.supported_protocols: [TLSv1.0, TLSv1.1, TLSv1.2]
+
+  # SSL configuration. It is off by default.
+  # List of root certificates for HTTPS server verifications
+  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]
+
+  # Certificate for SSL client authentication
+  #ssl.certificate: "/etc/pki/client/cert.pem"
+
+  # Client Certificate Key
+  #ssl.key: "/etc/pki/client/cert.key"
+
+  # Optional passphrase for decrypting the Certificate Key.
+  #ssl.key_passphrase: ''
+
+  # Configure cipher suites to be used for SSL connections
+  #ssl.cipher_suites: []
+
+  # Configure curve types for ECDHE based cipher suites
+  #ssl.curve_types: []
+
+  # Configure what types of renegotiation are supported. Valid options are
+  # never, once, and freely. Default is never.
+  #ssl.renegotiation: never
+
+
+#----------------------------- Logstash output ---------------------------------
+#output.logstash:
+  # Boolean flag to enable or disable the output module.
+  #enabled: true
+
+  # The Logstash hosts
+  #hosts: ["localhost:5044"]
+
+  # Number of workers per Logstash host.
+  #worker: 1
+
+  # Set gzip compression level.
+  #compression_level: 3
+
+  # Configure escaping html symbols in strings.
+  #escape_html: true
+
+  # Optional maximum time to live for a connection to Logstash, after which the
+  # connection will be re-established. A value of `0s` (the default) will
+  # disable this feature.
+  #
+  # Not yet supported for async connections (i.e. with the "pipelining" option set)
+  #ttl: 30s
+
+  # Optionally load balance the events between the Logstash hosts. Default is false.
+  #loadbalance: false
+
+  # Number of batches to be sent asynchronously to Logstash while processing
+  # new batches.
+  #pipelining: 2
+
+  # If enabled, only a subset of events in a batch of events is transferred per
+  # transaction. The number of events to be sent increases up to `bulk_max_size`
+  # if no error is encountered.
+  #slow_start: false
+
+  # The number of seconds to wait before trying to reconnect to Logstash
+  # after a network error. After waiting backoff.init seconds, the Beat
+  # tries to reconnect. If the attempt fails, the backoff timer is increased
+  # exponentially up to backoff.max. After a successful connection, the backoff
+  # timer is reset. The default is 1s.
+  #backoff.init: 1s
+
+  # The maximum number of seconds to wait before attempting to connect to
+  # Logstash after a network error. The default is 60s.
+  #backoff.max: 60s
+
+  # Optional index name. The default index name is set to journalbeat
+  # in all lowercase.
+  #index: 'journalbeat'
+
+  # SOCKS5 proxy server URL
+  #proxy_url: socks5://user:password@socks5-server:2233
+
+  # Resolve names locally when using a proxy server. Defaults to false.
+  #proxy_use_local_resolver: false
+
+  # Enable SSL support. SSL is automatically enabled if any SSL setting is set.
+  #ssl.enabled: true
+
+  # Configure SSL verification mode. If `none` is configured, all server hosts
+  # and certificates will be accepted. In this mode, SSL based connections are
+  # susceptible to man-in-the-middle attacks. Use only for testing. Default is
+  # `full`.
+  #ssl.verification_mode: full
+
+  # List of supported/valid TLS versions. By default all TLS versions 1.0 up to
+  # 1.2 are enabled.
+  #ssl.supported_protocols: [TLSv1.0, TLSv1.1, TLSv1.2]
+
+  # Optional SSL configuration options. SSL is off by default.
+  # List of root certificates for HTTPS server verifications
+  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]
+
+  # Certificate for SSL client authentication
+  #ssl.certificate: "/etc/pki/client/cert.pem"
+
+  # Client Certificate Key
+  #ssl.key: "/etc/pki/client/cert.key"
+
+  # Optional passphrase for decrypting the Certificate Key.
+  #ssl.key_passphrase: ''
+
+  # Configure cipher suites to be used for SSL connections
+  #ssl.cipher_suites: []
+
+  # Configure curve types for ECDHE based cipher suites
+  #ssl.curve_types: []
+
+  # Configure what types of renegotiation are supported. Valid options are
+  # never, once, and freely. Default is never.
+  #ssl.renegotiation: never
+
+  # The number of times to retry publishing an event after a publishing failure.
+  # After the specified number of retries, the events are typically dropped.
+  # Some Beats, such as Filebeat and Winlogbeat, ignore the max_retries setting
+  # and retry until all events are published. Set max_retries to a value less
+  # than 0 to retry until all events are published. The default is 3.
+  #max_retries: 3
+
+  # The maximum number of events to bulk in a single Logstash request. The
+  # default is 2048.
+  #bulk_max_size: 2048
+
+  # The number of seconds to wait for responses from the Logstash server before
+  # timing out. The default is 30s.
+  #timeout: 30s
+
+#------------------------------- Kafka output ----------------------------------
+#output.kafka:
+  # Boolean flag to enable or disable the output module.
+  #enabled: true
+
+  # The list of Kafka broker addresses from where to fetch the cluster metadata.
+  # The cluster metadata contain the actual Kafka brokers events are published
+  # to.
+  #hosts: ["localhost:9092"]
+
+  # The Kafka topic used for produced events. The setting can be a format string
+  # using any event field. To set the topic from document type use `%{[type]}`.
+  #topic: beats
+
+  # The Kafka event key setting. Use format string to create a unique event key.
+  # By default no event key will be generated.
+  #key: ''
+
+  # The Kafka event partitioning strategy. Default hashing strategy is `hash`
+  # using the `output.kafka.key` setting or randomly distributes events if
+  # `output.kafka.key` is not configured.
+  #partition.hash:
+    # If enabled, events will only be published to partitions with reachable
+    # leaders. Default is false.
+    #reachable_only: false
+
+    # Configure alternative event field names used to compute the hash value.
+    # If empty, the `output.kafka.key` setting will be used.
+    # Default value is an empty list.
+    #hash: []
+
+  # Authentication details. Password is required if username is set.
+  #username: ''
+  #password: ''
+
+  # Kafka version journalbeat is assumed to run against. Defaults to "1.0.0".
+  #version: '1.0.0'
+
+  # Configure JSON encoding
+  #codec.json:
+    # Pretty print json event
+    #pretty: false
+
+    # Configure escaping html symbols in strings.
+    #escape_html: true
+
+  # Metadata update configuration. The metadata contains leader information
+  # used to decide which broker to use when publishing.
+  #metadata:
+    # Max metadata request retry attempts when cluster is in middle of leader
+    # election. Defaults to 3 retries.
+    #retry.max: 3
+
+    # Waiting time between retries during leader elections. Default is 250ms.
+    #retry.backoff: 250ms
+
+    # Refresh metadata interval. Defaults to every 10 minutes.
+    #refresh_frequency: 10m
+
+  # The number of concurrent load-balanced Kafka output workers.
+  #worker: 1
+
+  # The number of times to retry publishing an event after a publishing failure.
+  # After the specified number of retries, the events are typically dropped.
+  # Some Beats, such as Filebeat, ignore the max_retries setting and retry until
+  # all events are published. Set max_retries to a value less than 0 to retry
+  # until all events are published. The default is 3.
+  #max_retries: 3
+
+  # The maximum number of events to bulk in a single Kafka request. The default
+  # is 2048.
+  #bulk_max_size: 2048
+
+  # The number of seconds to wait for responses from the Kafka brokers before
+  # timing out. The default is 30s.
+  #timeout: 30s
+
+  # The maximum duration a broker will wait for the number of required ACKs. The
+  # default is 10s.
+  #broker_timeout: 10s
+
+  # The number of messages buffered for each Kafka broker. The default is 256.
+  #channel_buffer_size: 256
+
+  # The keep-alive period for an active network connection. If 0s, keep-alives
+  # are disabled. The default is 0 seconds.
+  #keep_alive: 0
+
+  # Sets the output compression codec. Must be one of none, snappy and gzip. The
+  # default is gzip.
+  #compression: gzip
+
+  # Set the compression level. Currently only gzip provides a compression level
+  # between 0 and 9. The default value is chosen by the compression algorithm.
+  #compression_level: 4
+
+  # The maximum permitted size of JSON-encoded messages. Bigger messages will be
+  # dropped. The default value is 1000000 (bytes). This value should be equal to
+  # or less than the broker's message.max.bytes.
+  #max_message_bytes: 1000000
+
+  # The ACK reliability level required from broker. 0=no response, 1=wait for
+  # local commit, -1=wait for all replicas to commit. The default is 1. Note:
+  # If set to 0, no ACKs are returned by Kafka. Messages might be lost silently
+  # on error.
+  #required_acks: 1
+
+  # The configurable ClientID used for logging, debugging, and auditing
+  # purposes. The default is "beats".
+  #client_id: beats
+
+  # Enable SSL support. SSL is automatically enabled if any SSL setting is set.
+  #ssl.enabled: true
+
+  # Optional SSL configuration options. SSL is off by default.
+  # List of root certificates for HTTPS server verifications
+  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]
+
+  # Configure SSL verification mode. If `none` is configured, all server hosts
+  # and certificates will be accepted. In this mode, SSL based connections are
+  # susceptible to man-in-the-middle attacks. Use only for testing. Default is
+  # `full`.
+  #ssl.verification_mode: full
+
+  # List of supported/valid TLS versions. By default all TLS versions 1.0 up to
+  # 1.2 are enabled.
+  #ssl.supported_protocols: [TLSv1.0, TLSv1.1, TLSv1.2]
+
+  # Certificate for SSL client authentication
+  #ssl.certificate: "/etc/pki/client/cert.pem"
+
+  # Client Certificate Key
+  #ssl.key: "/etc/pki/client/cert.key"
+
+  # Optional passphrase for decrypting the Certificate Key.
+  #ssl.key_passphrase: ''
+
+  # Configure cipher suites to be used for SSL connections
+  #ssl.cipher_suites: []
+
+  # Configure curve types for ECDHE based cipher suites
+  #ssl.curve_types: []
+
+  # Configure what types of renegotiation are supported. Valid options are
+  # never, once, and freely. Default is never.
+  #ssl.renegotiation: never
+
+#------------------------------- Redis output ----------------------------------
+#output.redis:
+  # Boolean flag to enable or disable the output module.
+  #enabled: true
+
+  # Configure JSON encoding
+  #codec.json:
+    # Pretty print json event
+    #pretty: false
+
+    # Configure escaping html symbols in strings.
+    #escape_html: true
+
+  # The list of Redis servers to connect to. If load balancing is enabled, the
+  # events are distributed to the servers in the list. If one server becomes
+  # unreachable, the events are distributed to the reachable servers only.
+  #hosts: ["localhost:6379"]
+
+  # The Redis port to use if hosts does not contain a port number. The default
+  # is 6379.
+  #port: 6379
+
+  # The name of the Redis list or channel the events are published to. The
+  # default is journalbeat.
+  #key: journalbeat
+
+  # The password to authenticate with. The default is no authentication.
+  #password:
+
+  # The Redis database number where the events are published. The default is 0.
+  #db: 0
+
+  # The Redis data type to use for publishing events. If the data type is list,
+  # the Redis RPUSH command is used. If the data type is channel, the Redis
+  # PUBLISH command is used. The default value is list.
+  #datatype: list
+
+  # The number of workers to use for each host configured to publish events to
+  # Redis. Use this setting along with the loadbalance option. For example, if
+  # you have 2 hosts and 3 workers, in total 6 workers are started (3 for each
+  # host).
+  #worker: 1
+
+  # If set to true and multiple hosts or workers are configured, the output
+  # plugin load balances published events onto all Redis hosts. If set to false,
+  # the output plugin sends all events to only one host (determined at random)
+  # and will switch to another host if the currently selected one becomes
+  # unreachable. The default value is true.
+  #loadbalance: true
+
+  # The Redis connection timeout in seconds. The default is 5 seconds.
+  #timeout: 5s
+
+  # The number of times to retry publishing an event after a publishing failure.
+  # After the specified number of retries, the events are typically dropped.
+  # Some Beats, such as Filebeat, ignore the max_retries setting and retry until
+  # all events are published. Set max_retries to a value less than 0 to retry
+  # until all events are published. The default is 3.
+  #max_retries: 3
+
+  # The number of seconds to wait before trying to reconnect to Redis
+  # after a network error. After waiting backoff.init seconds, the Beat
+  # tries to reconnect. If the attempt fails, the backoff timer is increased
+  # exponentially up to backoff.max. After a successful connection, the backoff
+  # timer is reset. The default is 1s.
+  #backoff.init: 1s
+
+  # The maximum number of seconds to wait before attempting to connect to
+  # Redis after a network error. The default is 60s.
+  #backoff.max: 60s
+
+  # The maximum number of events to bulk in a single Redis request or pipeline.
+  # The default is 2048.
+  #bulk_max_size: 2048
+
+  # The URL of the SOCKS5 proxy to use when connecting to the Redis servers. The
+  # value must be a URL with a scheme of socks5://.
+  #proxy_url:
+
+  # This option determines whether Redis hostnames are resolved locally when
+  # using a proxy. The default value is false, which means that name resolution
+  # occurs on the proxy server.
+  #proxy_use_local_resolver: false
+
+  # Enable SSL support. SSL is automatically enabled if any SSL setting is set.
+  #ssl.enabled: true
+
+  # Configure SSL verification mode. If `none` is configured, all server hosts
+  # and certificates will be accepted. In this mode, SSL based connections are
+  # susceptible to man-in-the-middle attacks. Use only for testing. Default is
+  # `full`.
+  #ssl.verification_mode: full
+
+  # List of supported/valid TLS versions. By default all TLS versions 1.0 up to
+  # 1.2 are enabled.
+  #ssl.supported_protocols: [TLSv1.0, TLSv1.1, TLSv1.2]
+
+  # Optional SSL configuration options. SSL is off by default.
+  # List of root certificates for HTTPS server verifications
+  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]
+
+  # Certificate for SSL client authentication
+  #ssl.certificate: "/etc/pki/client/cert.pem"
+
+  # Client Certificate Key
+  #ssl.key: "/etc/pki/client/cert.key"
+
+  # Optional passphrase for decrypting the Certificate Key.
+  #ssl.key_passphrase: ''
+
+  # Configure cipher suites to be used for SSL connections
+  #ssl.cipher_suites: []
+
+  # Configure curve types for ECDHE based cipher suites
+  #ssl.curve_types: []
+
+  # Configure what types of renegotiation are supported. Valid options are
+  # never, once, and freely. Default is never.
+  #ssl.renegotiation: never
+
+#------------------------------- File output -----------------------------------
+#output.file:
+  # Boolean flag to enable or disable the output module.
+  #enabled: true
+
+  # Configure JSON encoding
+  #codec.json:
+    # Pretty print json event
+    #pretty: false
+
+    # Configure escaping html symbols in strings.
+    #escape_html: true
+
+  # Path to the directory where the generated files will be saved. This option
+  # is mandatory.
+  #path: "/tmp/journalbeat"
+
+  # Name of the generated files. The default is `journalbeat` and it generates
+  # files: `journalbeat`, `journalbeat.1`, `journalbeat.2`, etc.
+  #filename: journalbeat
+
+  # Maximum size in kilobytes of each file. When this size is reached, and on
+  # every journalbeat restart, the files are rotated. The default value is 10240
+  # kB.
+  #rotate_every_kb: 10240
+
+  # Maximum number of files under path. When this number of files is reached,
+  # the oldest file is deleted and the rest are shifted from last to first. The
+  # default is 7 files.
+  #number_of_files: 7
+
+  # Permissions to use for file creation. The default is 0600.
+  #permissions: 0600
+
+
+#----------------------------- Console output ---------------------------------
+#output.console:
+  # Boolean flag to enable or disable the output module.
+  #enabled: true
+
+  # Configure JSON encoding
+  #codec.json:
+    # Pretty print json event
+    #pretty: false
+
+    # Configure escaping html symbols in strings.
+    #escape_html: true
+
+#================================= Paths ======================================
+
+# The home path for the journalbeat installation. This is the default base path
+# for all other path settings and for miscellaneous files that come with the
+# distribution (for example, the sample dashboards).
+# If not set by a CLI flag or in the configuration file, the default for the
+# home path is the location of the binary.
+#path.home:
+
+# The configuration path for the journalbeat installation. This is the default
+# base path for configuration files, including the main YAML configuration file
+# and the Elasticsearch template file. If not set by a CLI flag or in the
+# configuration file, the default for the configuration path is the home path.
+#path.config: ${path.home}
+
+# The data path for the journalbeat installation. This is the default base path
+# for all the files in which journalbeat needs to store its data. If not set by a
+# CLI flag or in the configuration file, the default for the data path is a data
+# subdirectory inside the home path.
+#path.data: ${path.home}/data
+
+# The logs path for a journalbeat installation. This is the default location for
+# the Beat's log files. If not set by a CLI flag or in the configuration file,
+# the default for the logs path is a logs subdirectory inside the home path.
+#path.logs: ${path.home}/logs
+
+#================================ Keystore ==========================================
+# Location of the Keystore containing the keys and their sensitive values.
+#keystore.path: "${path.config}/beats.keystore"
+
+#============================== Dashboards =====================================
+# These settings control loading the sample dashboards to the Kibana index. Loading
+# the dashboards is disabled by default and can be enabled either by setting the
+# options here, or by using the `-setup` CLI flag or the `setup` command.
+#setup.dashboards.enabled: false
+
+# The directory from where to read the dashboards. The default is the `kibana`
+# folder in the home path.
+#setup.dashboards.directory: ${path.home}/kibana
+
+# The URL from where to download the dashboards archive. It is used instead of
+# the directory if it has a value.
+#setup.dashboards.url:
+
+# The file archive (zip file) from where to read the dashboards. It is used instead
+# of the directory when it has a value.
+#setup.dashboards.file:
+
+# In case the archive contains the dashboards from multiple Beats, this lets you
+# select which one to load. You can load all the dashboards in the archive by
+# setting this to the empty string.
+#setup.dashboards.beat: journalbeat
+
+# The name of the Kibana index to use for setting the configuration. Default is ".kibana"
+#setup.dashboards.kibana_index: .kibana
+
+# The Elasticsearch index name. This overwrites the index name defined in the
+# dashboards and index pattern. Example: testbeat-*
+#setup.dashboards.index:
+
+# Always use the Kibana API for loading the dashboards instead of autodetecting
+# how to install the dashboards by first querying Elasticsearch.
+#setup.dashboards.always_kibana: false
+
+# If true and Kibana is not reachable at the time when dashboards are loaded,
+# it will retry to reconnect to Kibana instead of exiting with an error.
+#setup.dashboards.retry.enabled: false
+
+# Duration interval between Kibana connection retries.
+#setup.dashboards.retry.interval: 1s
+
+# Maximum number of retries before exiting with an error, 0 for unlimited retrying.
+#setup.dashboards.retry.maximum: 0
+
+
+#============================== Template =====================================
+
+# A template is used to set the mapping in Elasticsearch.
+# By default template loading is enabled and the template is loaded.
+# These settings can be adjusted to load your own template or overwrite existing ones.
+
+# Set to false to disable template loading.
+#setup.template.enabled: true
+
+# Template name. By default the template name is "journalbeat-%{[beat.version]}"
+# The template name and pattern have to be set in case the Elasticsearch index pattern is modified.
+#setup.template.name: "journalbeat-%{[beat.version]}"
+
+# Template pattern. By default the template pattern is "journalbeat-%{[beat.version]}-*" to apply to the default index settings.
+# The first part is the version of the beat and then -* is used to match all daily indices.
+# The template name and pattern have to be set in case the Elasticsearch index pattern is modified.
+#setup.template.pattern: "journalbeat-%{[beat.version]}-*"
+
+# Path to fields.yml file to generate the template
+#setup.template.fields: "${path.config}/fields.yml"
+
+# A list of fields to be added to the template and Kibana index pattern. Also
+# specify setup.template.overwrite: true to overwrite the existing template.
+# This setting is experimental.
+#setup.template.append_fields:
+#- name: field_name
+#  type: field_type
+
+# Enable json template loading. If this is enabled, the fields.yml is ignored.
+#setup.template.json.enabled: false
+
+# Path to the json template file
+#setup.template.json.path: "${path.config}/template.json"
+
+# Name under which the template is stored in Elasticsearch
+#setup.template.json.name: ""
+
+# Overwrite existing template
+#setup.template.overwrite: false
+
+# Elasticsearch template settings
+setup.template.settings:
+
+  # A dictionary of settings to place into the settings.index dictionary
+  # of the Elasticsearch template. For more details, please check
+  # https://www.elastic.co/guide/en/elasticsearch/reference/current/mapping.html
+  #index:
+    #number_of_shards: 1
+    #codec: best_compression
+    #number_of_routing_shards: 30
+
+  # A dictionary of settings for the _source field. For more details, please check
+  # https://www.elastic.co/guide/en/elasticsearch/reference/current/mapping-source-field.html
+  #_source:
+    #enabled: false
+
+#============================== Kibana =====================================
+
+# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
+# This requires a Kibana endpoint configuration.
+setup.kibana:
+
+  # Kibana Host
+  # Scheme and port can be left out and will be set to the default (http and 5601)
+  # In case you specify an additional path, the scheme is required: http://localhost:5601/path
+  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
+  #host: "localhost:5601"
+
+  # Optional protocol and basic auth credentials.
+  #protocol: "https"
+  #username: "elastic"
+  #password: "changeme"
+
+  # Optional HTTP Path
+  #path: ""
+
+  # Use SSL settings for HTTPS. Default is true.
+  #ssl.enabled: true
+
+  # Configure SSL verification mode. If `none` is configured, all server hosts
+  # and certificates will be accepted. In this mode, SSL based connections are
+  # susceptible to man-in-the-middle attacks. Use only for testing. Default is
+  # `full`.
+  #ssl.verification_mode: full
+
+  # List of supported/valid TLS versions. By default all TLS versions 1.0 up to
+  # 1.2 are enabled.
+  #ssl.supported_protocols: [TLSv1.0, TLSv1.1, TLSv1.2]
+
+  # SSL configuration. It is off by default.
+  # List of root certificates for HTTPS server verifications
+  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]
+
+  # Certificate for SSL client authentication
+  #ssl.certificate: "/etc/pki/client/cert.pem"
+
+  # Client Certificate Key
+  #ssl.key: "/etc/pki/client/cert.key"
+
+  # Optional passphrase for decrypting the Certificate Key.
+  #ssl.key_passphrase: ''
+
+  # Configure cipher suites to be used for SSL connections
+  #ssl.cipher_suites: []
+
+  # Configure curve types for ECDHE based cipher suites
+  #ssl.curve_types: []
+
+
+
+#================================ Logging ======================================
+# There are four options for the log output: file, stderr, syslog, eventlog
+# The file output is the default.
+
+# Sets log level. The default log level is info.
+# Available log levels are: error, warning, info, debug
+#logging.level: info
+
+# Enable debug output for selected components. To enable all selectors use ["*"]
+# Other available selectors are "beat", "publish", "service"
+# Multiple selectors can be chained.
+#logging.selectors: [ ]
+
+# Send all logging output to syslog. The default is false.
+#logging.to_syslog: false
+
+# Send all logging output to Windows Event Logs. The default is false.
+#logging.to_eventlog: false
+
+# If enabled, journalbeat periodically logs its internal metrics that have changed
+# in the last period. For each metric that changed, the delta from the value at
+# the beginning of the period is logged. Also, the total values for
+# all non-zero internal metrics are logged on shutdown. The default is true.
+#logging.metrics.enabled: true
+
+# The period after which to log the internal metrics. The default is 30s.
+#logging.metrics.period: 30s
+
+# Logging to rotating files. Set logging.to_files to false to disable logging to
+# files.
+logging.to_files: true
+logging.files:
+  # Configure the path where the logs are written. The default is the logs directory
+  # under the home path (the binary location).
+  #path: /var/log/journalbeat
+
+  # The name of the files where the logs are written to.
+  #name: journalbeat
+
+  # Configure log file size limit. If the limit is reached, the log file will be
+  # automatically rotated.
+  #rotateeverybytes: 10485760 # = 10MB
+
+  # Number of rotated log files to keep. Oldest files will be deleted first.
+  #keepfiles: 7
+
+  # The permissions mask to apply when rotating log files. The default value is 0600.
+  # Must be a valid Unix-style file permissions mask expressed in octal notation.
+  #permissions: 0600
+
+# Set to true to log messages in JSON format.
+#logging.json: false
+
+
+#============================== Xpack Monitoring =====================================
+# journalbeat can export internal metrics to a central Elasticsearch monitoring cluster.
+# This requires xpack monitoring to be enabled in Elasticsearch.
+# The reporting is disabled by default.
+
+# Set to true to enable the monitoring reporter.
+#xpack.monitoring.enabled: false
+
+# Uncomment to send the metrics to Elasticsearch. Most settings from the
+# Elasticsearch output are accepted here as well. Any setting that is not set is
+# automatically inherited from the Elasticsearch output configuration, so if you
+# have the Elasticsearch output configured, you can simply uncomment the
+# following line, and leave the rest commented out.
+#xpack.monitoring.elasticsearch:
+
+  # Array of hosts to connect to.
+  # Scheme and port can be left out and will be set to the default (http and 9200)
+  # In case you specify an additional path, the scheme is required: http://localhost:9200/path
+  # IPv6 addresses should always be defined as: https://[2001:db8::1]:9200
+  #hosts: ["localhost:9200"]
+
+  # Set gzip compression level.
+  #compression_level: 0
+
+  # Optional protocol and basic auth credentials.
+  #protocol: "https"
+  #username: "beats_system"
+  #password: "changeme"
+
+  # Dictionary of HTTP parameters to pass within the url with index operations.
+  #parameters:
+    #param1: value1
+    #param2: value2
+
+  # Custom HTTP headers to add to each request
+  #headers:
+  #  X-My-Header: Contents of the header
+
+  # Proxy server url
+  #proxy_url: http://proxy:3128
+
+  # The number of times a particular Elasticsearch index operation is attempted. If
+  # the indexing operation doesn't succeed after this many retries, the events are
+  # dropped. The default is 3.
+  #max_retries: 3
+
+  # The maximum number of events to bulk in a single Elasticsearch bulk API index request.
+  # The default is 50.
+  #bulk_max_size: 50
+
+  # The number of seconds to wait before trying to reconnect to Elasticsearch
+  # after a network error. After waiting backoff.init seconds, the Beat
+  # tries to reconnect. If the attempt fails, the backoff timer is increased
+  # exponentially up to backoff.max. After a successful connection, the backoff
+  # timer is reset. The default is 1s.
+  #backoff.init: 1s
+
+  # The maximum number of seconds to wait before attempting to connect to
+  # Elasticsearch after a network error. The default is 60s.
+  #backoff.max: 60s
+
+  # Configure HTTP request timeout before failing a request to Elasticsearch.
+  #timeout: 90
+
+  # Use SSL settings for HTTPS.
+  #ssl.enabled: true
+
+  # Configure SSL verification mode. If `none` is configured, all server hosts
+  # and certificates will be accepted. In this mode, SSL based connections are
+  # susceptible to man-in-the-middle attacks. Use only for testing. Default is
+  # `full`.
+  #ssl.verification_mode: full
+
+  # List of supported/valid TLS versions. By default all TLS versions 1.0 up to
+  # 1.2 are enabled.
+  #ssl.supported_protocols: [TLSv1.0, TLSv1.1, TLSv1.2]
+
+  # SSL configuration. By default it is off.
+  # List of root certificates for HTTPS server verifications
+  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]
+
+  # Certificate for SSL client authentication
+  #ssl.certificate: "/etc/pki/client/cert.pem"
+
+  # Client Certificate Key
+  #ssl.key: "/etc/pki/client/cert.key"
+
+  # Optional passphrase for decrypting the Certificate Key.
+  #ssl.key_passphrase: ''
+
+  # Configure cipher suites to be used for SSL connections
+  #ssl.cipher_suites: []
+
+  # Configure curve types for ECDHE based cipher suites
+  #ssl.curve_types: []
+
+  # Configure what types of renegotiation are supported. Valid options are
+  # never, once, and freely. Default is never.
+  #ssl.renegotiation: never
+
+  #metrics.period: 10s
+  #state.period: 1m
+
+#================================ HTTP Endpoint ======================================
+# Each beat can expose internal metrics through an HTTP endpoint. For security
+# reasons the endpoint is disabled by default. This feature is currently experimental.
+# Stats can be accessed through http://localhost:5066/stats. For pretty JSON output
+# append ?pretty to the URL.
+
+# Defines if the HTTP endpoint is enabled.
+#http.enabled: false
+
+# The HTTP endpoint will bind to this hostname or IP address. It is recommended to use only localhost.
+#http.host: localhost
+
+# Port on which the HTTP endpoint will bind. Default is 5066.
+#http.port: 5066
+
+#============================= Process Security ================================
+
+# Enable or disable seccomp system call filtering on Linux. Default is enabled.
+#seccomp.enabled: true
diff --git a/journalbeat/journalbeat.yml b/journalbeat/journalbeat.yml
new file mode 100644
index 000000000000..b1df03705bc6
--- /dev/null
+++ b/journalbeat/journalbeat.yml
@@ -0,0 +1,153 @@
+###################### Journalbeat Configuration Example #########################
+
+# This file is an example configuration file highlighting only the most common
+# options. The journalbeat.reference.yml file from the same directory contains all the
+# supported options with more comments. You can use it as a reference.
+#
+# You can find the full configuration reference here:
+# https://www.elastic.co/guide/en/beats/journalbeat/index.html
+
+# For more available modules and options, please see the journalbeat.reference.yml sample
+# configuration file.
+
+#=========================== Journalbeat inputs =============================
+
+journalbeat.inputs:
+  # Paths that should be crawled and fetched. Possible values are files and directories.
+  # When setting a directory, all journals under it are merged.
+  # When empty, reading starts from the local journal.
+- paths: []
+
+  # The number of seconds to wait before trying to read again from journals.
+  #backoff: 1s
+  # Multiplier of backoff value.
+  #backoff_factor: 2
+  # The maximum number of seconds to wait before attempting to read again from journals.
+  #max_backoff: 60s
+
+  # Position to start reading from the journal. Valid values: head, tail, cursor
+  seek: tail
+
+#========================= Journalbeat global options ============================
+#journalbeat:
+  # Name of the registry file. If a relative path is used, it is considered relative to the
+  # data path.
+  #registry_file: registry
+
+  # The number of seconds to wait before trying to read again from journals.
+  #backoff: 1s
+  # Multiplier of backoff value.
+  #backoff_factor: 2
+  # The maximum number of seconds to wait before attempting to read again from journals.
+  #max_backoff: 60s
+
+  # Position to start reading from all journals. Possible values: head, tail, cursor
+  #seek: head
+
+#================================ General =====================================
+
+# The name of the shipper that publishes the network data. It can be used to group
+# all the transactions sent by a single shipper in the web interface.
+#name:
+
+# The tags of the shipper are included in their own field with each
+# transaction published.
+#tags: ["service-X", "web-tier"]
+
+# Optional fields that you can specify to add additional information to the
+# output.
+#fields:
+#  env: staging
+
+
+#============================== Dashboards =====================================
+# These settings control loading the sample dashboards to the Kibana index. Loading
+# the dashboards is disabled by default and can be enabled either by setting the
+# options here, or by using the `-setup` CLI flag or the `setup` command.
+#setup.dashboards.enabled: false
+
+# The URL from where to download the dashboards archive. By default this URL
+# has a value which is computed based on the Beat name and version. For released
+# versions, this URL points to the dashboard archive on the artifacts.elastic.co
+# website.
+#setup.dashboards.url:
+
+#============================== Kibana =====================================
+
+# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
+# This requires a Kibana endpoint configuration.
+setup.kibana:
+
+  # Kibana Host
+  # Scheme and port can be left out and will be set to the default (http and 5601)
+  # In case you specify an additional path, the scheme is required: http://localhost:5601/path
+  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
+  #host: "localhost:5601"
+
+#============================= Elastic Cloud ==================================
+
+# These settings simplify using journalbeat with the Elastic Cloud (https://cloud.elastic.co/).
+
+# The cloud.id setting overwrites the `output.elasticsearch.hosts` and
+# `setup.kibana.host` options.
+# You can find the `cloud.id` in the Elastic Cloud web UI.
+#cloud.id:
+
+# The cloud.auth setting overwrites the `output.elasticsearch.username` and
+# `output.elasticsearch.password` settings. The format is `<user>:<pass>`.
+#cloud.auth:
+
+#================================ Outputs =====================================
+
+# Configure what output to use when sending the data collected by the beat.
+
+#-------------------------- Elasticsearch output ------------------------------
+output.elasticsearch:
+  # Array of hosts to connect to.
+  hosts: ["localhost:9200"]
+
+  # Optional protocol and basic auth credentials.
+  #protocol: "https"
+  #username: "elastic"
+  #password: "changeme"
+
+#----------------------------- Logstash output --------------------------------
+#output.logstash:
+  # The Logstash hosts
+  #hosts: ["localhost:5044"]
+
+  # Optional SSL. By default it is off.
+  # List of root certificates for HTTPS server verifications
+  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]
+
+  # Certificate for SSL client authentication
+  #ssl.certificate: "/etc/pki/client/cert.pem"
+
+  # Client Certificate Key
+  #ssl.key: "/etc/pki/client/cert.key"
+
+#================================ Logging =====================================
+
+# Sets log level. The default log level is info.
+# Available log levels are: error, warning, info, debug
+#logging.level: debug
+
+# At debug level, you can selectively enable logging only for some components.
+# To enable all selectors use ["*"]. Examples of other selectors are "beat",
+# "publish", "service".
+#logging.selectors: ["*"]
+
+#============================== Xpack Monitoring ===============================
+# journalbeat can export internal metrics to a central Elasticsearch monitoring
+# cluster. This requires xpack monitoring to be enabled in Elasticsearch. The
+# reporting is disabled by default.
+
+# Set to true to enable the monitoring reporter.
+#xpack.monitoring.enabled: false
+
+# Uncomment to send the metrics to Elasticsearch. Most settings from the
+# Elasticsearch output are accepted here as well. Any setting that is not set is
+# automatically inherited from the Elasticsearch output configuration, so if you
+# have the Elasticsearch output configured, you can simply uncomment the
+# following line.
+#xpack.monitoring.elasticsearch:
diff --git a/journalbeat/magefile.go b/journalbeat/magefile.go
new file mode 100644
index 000000000000..8dfa8c877547
--- /dev/null
+++ b/journalbeat/magefile.go
@@ -0,0 +1,115 @@
+// Licensed to Elasticsearch B.V. under one or more contributor
+// license agreements. See the NOTICE file distributed with
+// this work for additional information regarding copyright
+// ownership. Elasticsearch B.V. licenses this file to you under
+// the Apache License, Version 2.0 (the "License"); you may
+// not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+//     http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied. See the License for the
+// specific language governing permissions and limitations
+// under the License.
+
+// +build mage
+
+package main
+
+import (
+	"context"
+	"fmt"
+	"time"
+
+	"github.com/magefile/mage/mg"
+	"github.com/magefile/mage/sh"
+
+	"github.com/elastic/beats/dev-tools/mage"
+)
+
+func init() {
+	mage.BeatDescription = "Journalbeat ships systemd journal entries to Elasticsearch or Logstash."
+
+	// TODO filter platforms
+}
+
+// Build builds the Beat binary.
+func Build() error {
+	return mage.Build(mage.DefaultBuildArgs())
+}
+
+// GolangCrossBuild builds the Beat binary inside of the golang-builder.
+// Do not use directly, use crossBuild instead.
+func GolangCrossBuild() error {
+	return mage.GolangCrossBuild(mage.DefaultGolangCrossBuildArgs())
+}
+
+// BuildGoDaemon builds the go-daemon binary (use crossBuildGoDaemon).
+func BuildGoDaemon() error {
+	return mage.BuildGoDaemon()
+}
+
+// CrossBuild cross-builds the beat for all target platforms.
+func CrossBuild() error {
+	return mage.CrossBuild()
+}
+
+// CrossBuildXPack cross-builds the beat with XPack for all target platforms.
+func CrossBuildXPack() error {
+	return mage.CrossBuildXPack()
+}
+
+// CrossBuildGoDaemon cross-builds the go-daemon binary using Docker.
+func CrossBuildGoDaemon() error {
+	return mage.CrossBuildGoDaemon()
+}
+
+// Clean cleans all generated files and build artifacts.
+func Clean() error {
+	return mage.Clean()
+}
+
+// Package packages the Beat for distribution.
+// Use SNAPSHOT=true to build snapshots.
+// Use PLATFORMS to control the target platforms.
+func Package() {
+	start := time.Now()
+	defer func() { fmt.Println("package ran for", time.Since(start)) }()
+
+	mage.UseElasticBeatPackaging()
+	mg.Deps(Update)
+	mg.Deps(CrossBuild, CrossBuildXPack, CrossBuildGoDaemon)
+	mg.SerialDeps(mage.Package, TestPackages)
+}
+
+// TestPackages tests the generated packages (i.e. file modes, owners, groups).
+func TestPackages() error {
+	return mage.TestPackages()
+}
+
+// Update updates the generated files (aka make update).
+func Update() error {
+	return sh.Run("make", "update")
+}
+
+// Fields generates a fields.yml for the Beat.
+func Fields() error {
+	return mage.GenerateFieldsYAML()
+}
+
+// GoTestUnit executes the Go unit tests.
+// Use TEST_COVERAGE=true to enable code coverage profiling.
+// Use RACE_DETECTOR=true to enable the race detector.
+func GoTestUnit(ctx context.Context) error {
+	return mage.GoTest(ctx, mage.DefaultGoTestUnitArgs())
+}
+
+// GoTestIntegration executes the Go integration tests.
+// Use TEST_COVERAGE=true to enable code coverage profiling.
+// Use RACE_DETECTOR=true to enable the race detector.
+func GoTestIntegration(ctx context.Context) error {
+	return mage.GoTest(ctx, mage.DefaultGoTestIntegrationArgs())
+}
diff --git a/journalbeat/main.go b/journalbeat/main.go
new file mode 100644
index 000000000000..6bfbba776b2d
--- /dev/null
+++ b/journalbeat/main.go
@@ -0,0 +1,30 @@
+// Licensed to Elasticsearch B.V. under one or more contributor
+// license agreements. See the NOTICE file distributed with
+// this work for additional information regarding copyright
+// ownership. Elasticsearch B.V. licenses this file to you under
+// the Apache License, Version 2.0 (the "License"); you may
+// not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+//     http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied. See the License for the
+// specific language governing permissions and limitations
+// under the License.
+
+package main
+
+import (
+	"os"
+
+	"github.com/elastic/beats/journalbeat/cmd"
+)
+
+func main() {
+	if err := cmd.RootCmd.Execute(); err != nil {
+		os.Exit(1)
+	}
+}
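+
+// Illustration only (assumed workflow, mirroring the other Beats; not part of
+// the build): after `mage build`, the binary can be run in the foreground with
+//
+//	./journalbeat -e -c journalbeat.yml
+//
+// and packages can be cross-built with `PLATFORMS=linux/amd64 SNAPSHOT=true mage package`,
+// per the Package target documented in the magefile above.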
diff --git a/journalbeat/main_test.go b/journalbeat/main_test.go
new file mode 100644
index 000000000000..c6eb420ed2c8
--- /dev/null
+++ b/journalbeat/main_test.go
@@ -0,0 +1,44 @@
+// Licensed to Elasticsearch B.V. under one or more contributor
+// license agreements. See the NOTICE file distributed with
+// this work for additional information regarding copyright
+// ownership. Elasticsearch B.V. licenses this file to you under
+// the Apache License, Version 2.0 (the "License"); you may
+// not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+//     http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied. See the License for the
+// specific language governing permissions and limitations
+// under the License.
+
+package main
+
+// This file is mandatory, as otherwise the journalbeat.test binary is not generated correctly.
+
+import (
+	"flag"
+	"testing"
+
+	"github.com/elastic/beats/journalbeat/cmd"
+)
+
+var systemTest *bool
+
+func init() {
+	systemTest = flag.Bool("systemTest", false, "Set to true when running system tests")
+
+	cmd.RootCmd.PersistentFlags().AddGoFlag(flag.CommandLine.Lookup("systemTest"))
+	cmd.RootCmd.PersistentFlags().AddGoFlag(flag.CommandLine.Lookup("test.coverprofile"))
+}
+
+// TestSystem is started when the test binary is started. It only calls main.
+func TestSystem(t *testing.T) {
+	if *systemTest {
+		main()
+	}
+}
diff --git a/journalbeat/reader/fields.go b/journalbeat/reader/fields.go
new file mode 100644
index 000000000000..23a82a0f0103
--- /dev/null
+++ b/journalbeat/reader/fields.go
@@ -0,0 +1,70 @@
+// Licensed to Elasticsearch B.V. under one or more contributor
+// license agreements. See the NOTICE file distributed with
+// this work for additional information regarding copyright
+// ownership. Elasticsearch B.V. licenses this file to you under
+// the Apache License, Version 2.0 (the "License"); you may
+// not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+//     http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied. See the License for the
+// specific language governing permissions and limitations
+// under the License.
+
+package reader
+
+import "github.com/coreos/go-systemd/sdjournal"
+
+var (
+	journaldEventFields = map[string]string{
+		"COREDUMP_UNIT":            "coredump.unit",
+		"COREDUMP_USER_UNIT":       "coredump.user_unit",
+		"OBJECT_AUDIT_LOGINUID":    "object.audit.login_uid",
+		"OBJECT_AUDIT_SESSION":     "object.audit.session",
+		"OBJECT_CMDLINE":           "object.cmd",
+		"OBJECT_COMM":              "object.name",
+		"OBJECT_EXE":               "object.executable",
+		"OBJECT_GID":               "object.gid",
+		"OBJECT_PID":               "object.pid",
+		"OBJECT_SYSTEMD_OWNER_UID": "object.systemd.owner_uid",
+		"OBJECT_SYSTEMD_SESSION":   "object.systemd.session",
+		"OBJECT_SYSTEMD_UNIT":      "object.systemd.unit",
+		"OBJECT_SYSTEMD_USER_UNIT": "object.systemd.user_unit",
+		"OBJECT_UID":               "object.uid",
+		"_KERNEL_DEVICE":           "kernel.device",
+		"_KERNEL_SUBSYSTEM":        "kernel.subsystem",
+		"_SYSTEMD_INVOCATION_ID":   "systemd.invocation_id",
+		"_UDEV_DEVLINK":            "kernel.device_symlinks", // TODO aggregate multiple elements
+		"_UDEV_DEVNODE":            "kernel.device_node_path",
+		"_UDEV_SYSNAME":            "kernel.device_name",
+		sdjournal.SD_JOURNAL_FIELD_AUDIT_LOGINUID:    "process.audit.login_uid",
+		sdjournal.SD_JOURNAL_FIELD_AUDIT_SESSION:     "process.audit.session",
+		sdjournal.SD_JOURNAL_FIELD_BOOT_ID:           "host.boot_id",
+		sdjournal.SD_JOURNAL_FIELD_CMDLINE:           "process.cmd",
+		sdjournal.SD_JOURNAL_FIELD_CODE_FILE:         "code.file",
+		sdjournal.SD_JOURNAL_FIELD_CODE_FUNC:         "code.func",
+		sdjournal.SD_JOURNAL_FIELD_CODE_LINE:         "code.line",
+		sdjournal.SD_JOURNAL_FIELD_COMM:              "process.name",
+		sdjournal.SD_JOURNAL_FIELD_EXE:               "process.executable",
+		sdjournal.SD_JOURNAL_FIELD_GID:               "process.gid",
+		sdjournal.SD_JOURNAL_FIELD_HOSTNAME:          "host.name",
+		sdjournal.SD_JOURNAL_FIELD_MACHINE_ID:        "host.id",
+		sdjournal.SD_JOURNAL_FIELD_PID:               "process.pid",
+		sdjournal.SD_JOURNAL_FIELD_PRIORITY:          "syslog.priority",
+		sdjournal.SD_JOURNAL_FIELD_SYSLOG_FACILITY:   "syslog.facility",
+		sdjournal.SD_JOURNAL_FIELD_SYSLOG_IDENTIFIER: "syslog.identifier",
+		sdjournal.SD_JOURNAL_FIELD_SYSTEMD_CGROUP:    "systemd.cgroup",
+		sdjournal.SD_JOURNAL_FIELD_SYSTEMD_OWNER_UID: "systemd.owner_uid",
+		sdjournal.SD_JOURNAL_FIELD_SYSTEMD_SESSION:   "systemd.session",
+		sdjournal.SD_JOURNAL_FIELD_SYSTEMD_SLICE:     "systemd.slice",
+		sdjournal.SD_JOURNAL_FIELD_SYSTEMD_UNIT:      "systemd.unit",
+		sdjournal.SD_JOURNAL_FIELD_SYSTEMD_USER_UNIT: "systemd.user_unit",
+		sdjournal.SD_JOURNAL_FIELD_TRANSPORT:         "systemd.transport",
+		sdjournal.SD_JOURNAL_FIELD_UID:               "process.uid",
+		sdjournal.SD_JOURNAL_FIELD_MESSAGE:           "message",
+	}
+)
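+
+// Mapping illustration (hypothetical entry values): a journal entry carrying
+// "_SYSTEMD_UNIT=sshd.service" and "MESSAGE=Accepted publickey" is published
+// with the event fields "systemd.unit": "sshd.service" and
+// "message": "Accepted publickey"; journal fields without a mapping are dropped.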
diff --git a/journalbeat/reader/journal.go b/journalbeat/reader/journal.go
new file mode 100644
index 000000000000..22aeb56a9246
--- /dev/null
+++ b/journalbeat/reader/journal.go
@@ -0,0 +1,264 @@
+// Licensed to Elasticsearch B.V. under one or more contributor
+// license agreements. See the NOTICE file distributed with
+// this work for additional information regarding copyright
+// ownership. Elasticsearch B.V. licenses this file to you under
+// the Apache License, Version 2.0 (the "License"); you may
+// not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+//     http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied. See the License for the
+// specific language governing permissions and limitations
+// under the License.
+
+package reader
+
+import (
+	"io"
+	"os"
+	"time"
+
+	"github.com/coreos/go-systemd/sdjournal"
+	"github.com/pkg/errors"
+
+	"github.com/elastic/beats/journalbeat/checkpoint"
+	"github.com/elastic/beats/libbeat/beat"
+	"github.com/elastic/beats/libbeat/common"
+	"github.com/elastic/beats/libbeat/logp"
+)
+
+const (
+	// LocalSystemJournalID is the ID of the local system journal.
+	LocalSystemJournalID = "LOCAL_SYSTEM_JOURNAL"
+)
+
+// Config stores the options of a reader.
+type Config struct {
+	// Path is the path to the journal file.
+	Path string
+	// Seek specifies the seeking strategy.
+	// Possible values: head, tail, cursor.
+	Seek string
+	// MaxBackoff is the limit of the backoff time.
+	MaxBackoff time.Duration
+	// Backoff is the current interval to wait before
+	// attempting to read again from the journal.
+	Backoff time.Duration
+	// BackoffFactor is the multiplier of Backoff.
+	BackoffFactor int
+}
+
+// Reader reads entries from journal(s).
+type Reader struct {
+	journal *sdjournal.Journal
+	config  Config
+	changes chan int
+	done    chan struct{}
+	logger  *logp.Logger
+}
+
+// New creates a new journal reader and moves the read position to the configured position.
+func New(c Config, done chan struct{}, state checkpoint.JournalState, logger *logp.Logger) (*Reader, error) {
+	f, err := os.Stat(c.Path)
+	if err != nil {
+		return nil, errors.Wrap(err, "failed to open file")
+	}
+
+	logger = logger.With("path", c.Path)
+
+	var j *sdjournal.Journal
+	if f.IsDir() {
+		j, err = sdjournal.NewJournalFromDir(c.Path)
+		if err != nil {
+			return nil, errors.Wrap(err, "failed to open journal directory")
+		}
+	} else {
+		j, err = sdjournal.NewJournalFromFiles(c.Path)
+		if err != nil {
+			return nil, errors.Wrap(err, "failed to open journal file")
+		}
+	}
+
+	r := &Reader{
+		journal: j,
+		changes: make(chan int),
+		config:  c,
+		done:    done,
+		logger:  logger,
+	}
+	r.seek(state.Cursor)
+
+	r.logger.Debug("New journal is opened for reading")
+
+	return r, nil
+}
+
+// NewLocal creates a reader to read from the local journal and moves the read
+// position to the configured position.
+func NewLocal(c Config, done chan struct{}, state checkpoint.JournalState, logger *logp.Logger) (*Reader, error) {
+	j, err := sdjournal.NewJournal()
+	if err != nil {
+		return nil, errors.Wrap(err, "failed to open local journal")
+	}
+
+	logger = logger.With("path", "local")
+	logger.Debug("New local journal is opened for reading")
+
+	r := &Reader{
+		journal: j,
+		changes: make(chan int),
+		config:  c,
+		done:    done,
+		logger:  logger,
+	}
+	r.seek(state.Cursor)
+	return r, nil
+}
+
+// seek seeks to the position determined by the configuration and cursor state.
+func (r *Reader) seek(cursor string) {
+	if r.config.Seek == "cursor" {
+		if cursor == "" {
+			r.journal.SeekHead()
+			r.logger.Debug("Seeking method set to cursor, but no state is saved for reader. Starting to read from the beginning")
+			return
+		}
+		r.journal.SeekCursor(cursor)
+		_, err := r.journal.Next()
+		if err != nil {
+			r.logger.Error("Error while seeking to cursor")
+		}
+		r.logger.Debug("Seeked to position defined in cursor")
+	} else if r.config.Seek == "tail" {
+		r.journal.SeekTail()
+		r.logger.Debug("Tailing the journal file")
+	} else if r.config.Seek == "head" {
+		r.journal.SeekHead()
+		r.logger.Debug("Reading from the beginning of the journal file")
+	}
+}
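+
+// Cursor illustration (hypothetical saved state): with `seek: cursor` and a
+// registry entry for this journal, SeekCursor moves just before the
+// already-published entry and the Next() call above consumes it, so publishing
+// resumes with the first entry written after the saved cursor.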
+
+// Follow reads entries from journals.
+func (r *Reader) Follow() chan *beat.Event {
+	out := make(chan *beat.Event)
+	go r.readEntriesFromJournal(out)
+
+	return out
+}
+
+func (r *Reader) readEntriesFromJournal(entries chan *beat.Event) {
+	defer close(entries)
+
+process:
+	for {
+		select {
+		case <-r.done:
+			return
+		default:
+			err := r.readUntilNotNull(entries)
+			if err != nil {
+				r.logger.Errorf("Unexpected error while reading from journal: %v", err)
+			}
+		}
+
+		for {
+			go r.stopOrWait()
+
+			select {
+			case <-r.done:
+				return
+			case e := <-r.changes:
+				switch e {
+				case sdjournal.SD_JOURNAL_NOP:
+					r.wait()
+				case sdjournal.SD_JOURNAL_APPEND, sdjournal.SD_JOURNAL_INVALIDATE:
+					continue process
+				default:
+					if e < 0 {
+						//r.logger.Errorf("Unexpected error: %v", syscall.Errno(-e))
+					}
+					r.wait()
+				}
+			}
+		}
+	}
+}
+
+func (r *Reader) readUntilNotNull(entries chan<- *beat.Event) error {
+	n, err := r.journal.Next()
+	if err != nil && err != io.EOF {
+		return err
+	}
+
+	for n != 0 {
+		entry, err := r.journal.GetEntry()
+		if err != nil {
+			return err
+		}
+		event := r.toEvent(entry)
+		entries <- event
+
+		n, err = r.journal.Next()
+		if err != nil && err != io.EOF {
+			return err
+		}
+	}
+	return nil
+}
+
+// toEvent creates a beat.Event from journal entries.
+func (r *Reader) toEvent(entry *sdjournal.JournalEntry) *beat.Event {
+	fields := common.MapStr{}
+	for journalKey, eventKey := range journaldEventFields {
+		if entry.Fields[journalKey] != "" {
+			fields.Put(eventKey, entry.Fields[journalKey])
+		}
+	}
+
+	state := checkpoint.JournalState{
+		Path:               r.config.Path,
+		Cursor:             entry.Cursor,
+		RealtimeTimestamp:  entry.RealtimeTimestamp,
+		MonotonicTimestamp: entry.MonotonicTimestamp,
+	}
+
+	event := beat.Event{
+		Timestamp: time.Now(),
+		Fields:    fields,
+		Private:   state,
+	}
+	return &event
+}
+
+// stopOrWait waits for a journal event.
+func (r *Reader) stopOrWait() {
+	select {
+	case <-r.done:
+	case r.changes <- r.journal.Wait(100 * time.Millisecond):
+	}
+}
+
+func (r *Reader) wait() {
+	select {
+	case <-r.done:
+		return
+	case <-time.After(r.config.Backoff):
+	}
+
+	if r.config.Backoff < r.config.MaxBackoff {
+		r.config.Backoff = r.config.Backoff * time.Duration(r.config.BackoffFactor)
+		if r.config.Backoff > r.config.MaxBackoff {
+			r.config.Backoff = r.config.MaxBackoff
+		}
+		r.logger.Debugf("Increasing backoff time to: %v factor: %v", r.config.Backoff, r.config.BackoffFactor)
+	}
+}
+
+// Close closes the underlying journal reader.
+func (r *Reader) Close() {
+	r.journal.Close()
+}
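+
+// Backoff illustration (assuming the defaults documented in the reference
+// config: backoff=1s, backoff_factor=2, max_backoff=60s): successive idle
+// waits in wait() are 1s, 2s, 4s, 8s, 16s, 32s and then 60s (64s capped to
+// max_backoff), after which the interval stays at 60s.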
diff --git a/journalbeat/tests/system/config/journalbeat.yml.j2 b/journalbeat/tests/system/config/journalbeat.yml.j2
new file mode 100644
index 000000000000..e2cefdcb6d02
--- /dev/null
+++ b/journalbeat/tests/system/config/journalbeat.yml.j2
@@ -0,0 +1,81 @@
+################### Beat Configuration #########################
+journalbeat.inputs:
+- paths: [{{ journal_path }}]
+  seek: {{ seek_method }}
+
+journalbeat.registry: {{ registry_file }}
+
+############################# Output ##########################################
+
+# Configure what outputs to use when sending the data collected by the beat.
+# You can enable one or multiple outputs by setting the enabled option to true.
+output:
+
+  ### File as output
+  file:
+    # Enabling file output
+    enabled: true
+
+    # Path to the directory where to save the generated files. The option is mandatory.
+    path: {{ output_file_path|default(beat.working_dir + "/output") }}
+
+    # Name of the generated files. The default is `journalbeat` and it generates
+    # files: `journalbeat`, `journalbeat.1`, `journalbeat.2`, etc.
+    filename: {{ output_file_filename|default("journalbeat") }}
+
+    # Maximum size in kilobytes of each file. When this size is reached, the files are
+    # rotated. The default value is 10 MB.
+    #rotate_every_kb: 10000
+
+    # Maximum number of files under path. When this number of files is reached, the
+    # oldest file is deleted and the rest are shifted from last to first. The default
+    # is 7 files.
+    #number_of_files: 7
+
+
+
+############################# Beat #########################################
+
+# The name of the shipper that publishes the network data. It can be used to group
+# all the transactions sent by a single shipper in the web interface.
+# If this option is not defined, the hostname is used.
+#name:
+
+# The tags of the shipper are included in their own field with each
+# transaction published. Tags make it easy to group servers by different
+# logical properties.
+#tags: ["service-X", "web-tier"]
+
+
+
+############################# Logging #########################################
+
+#logging:
+  # Send all logging output to syslog. On Windows default is false, otherwise
+  # default is true.
+  #to_syslog: true
+
+  # Write all logging output to files. Beats automatically rotate files if a
+  # configurable limit is reached.
+  #to_files: false
+
+  # Enable debug output for selected components.
+  #selectors: []
+
+  # Set log level
+  #level: error
+
+  #files:
+    # The directory where the log files will be written to.
+    #path: /var/log/journalbeat
+
+    # The name of the files where the logs are written to.
+    #name: journalbeat
+
+    # Configure log file size limit. If the limit is reached, the log file will be
+    # automatically rotated.
+    #rotateeverybytes: 10485760 # = 10MB
+
+    # Number of rotated log files to keep. Oldest files will be deleted first.
+    #keepfiles: 7
diff --git a/journalbeat/tests/system/input/test.journal b/journalbeat/tests/system/input/test.journal
new file mode 100644
index 0000000000000000000000000000000000000000..887d49179056bdc21618db72fa1d6ddd2653418b
GIT binary patch
literal 8388608
[8 MB base85-encoded binary payload of the bundled test journal omitted]
z0|yQqIB?*=fddB)95`^`z<~n?4jede;J|?c2M!!KaNxj!0|yQqIB?*=fddB)95`^` zz<~n?4jede;J|?c2M!!KaNxj!0|yQqIB?*=fddB)95`^`z<~n?4jede;J|?c2M!!K zaNxj!0|yQqIB?*=fddB)95`^`z<~n?4jede;J|?c2M!!KaNxj!0|yQqIB?*=fddB) z95`^`z<~n?4jede;J|?c2M!!KaNxj!0|yQqIB?*=fddB)95`^`z<~n?4jede;J|?c z2M!!KaNxj!0|yQqIB?*=fddB)95`^`z<~n?4jede;J|?c2M!!KaNxj!0|yQqIB?*= zfddB)95`^`z<~n?4jede;J|?c2M!!KaNxj!0|yQqIB?*=fddB)95`^`z<~n?4jede z;J|?c2M!!KaNxj!0|yQqIB?*=fddB)95`^`z<~n?4jede;J|?c2M!!KaNxj!0|yQq zIB?*=fddB)95`^`z<~n?4jede;J|?c2M!!KaNxj!0|yQqIB?*=fddB)95`^`z<~n? z4jede;J|?c2M!!KaNxj!0|yQqIB?*=fddB)95`^`z<~n?4jede;J|?c2M!!KaNxj! z0|yQqIB?*=fddB)95`^`z<~n?4jede;J|?c2M!!KaNxj!0|yQqIB?*=fddB)95`^` zz<~n?4jede;J|?c2M!!KaNxj!0|yQqIB?*=fddB)95`^`z<~n?4jede;J|?c2M!!K zaNxj!0|yQqIB?*=fddB)95`^`z<~n?4jede;J|?c2M!!KaNxj!0|yQqIB?*=fddB) z95`^`z<~n?4jede;J|?c2M!!KaNxj!0|yQqIB?*=fddB)95`^`z<~n?4jede;J|?c z2M!!KaNxj!0|yQqIB?*=fddB)95`^`z<~n?4jede;J|?c2M!!KaNxj!0|yQqIB?*= zfddB)95`^`z<~n?4jede;J|?c2M!!KaNxj!0|yQqIB?*=fddB)95`^`z<~n?4jede z;J|?c2M!!KaNxj!0|yQqIB?*=fddB)95`^`z<~n?4jede;J|?c2M!!KaNxj!0|yQq zIB?*=fddB)95`^`z<~n?4jede;J|?c2M!!KaNxj!0|yQqIB?*=fddB)95`^`z<~n? z4jede;J|?c2M!!KaNxj!0|yQqIB?*=fddB)95`^`z<~n?4jede;J|?c2M!!KaNxj! z0|yQqIB?*=fddB)95`^`z<~n?4jede;J|?c2M!!KaNxj!0|yQqIB?*=fddB)95`^` zz<~n?4jede;J|?c2M!!KaNxj!0|yQqIB?*=fddB)95`^`z<~n?4jede;J|?c2M!!K zaNxj!0|yQqIB?*=fddB)95`^`z<~n?4jede;J|?c2M!!KaNxj!0|yQqIB?*=fddB) z95`^`z<~n?4jede;J|?c2M!!KaNxj!0|yQqIB?*=fddB)95`^`z<~n?4jede;J|?c z2M!!KaNxj!0|yQqIB?*=fddB)95`^`z<~n?4jede;J|?c2M!!KaNxj!0|yQqIB?*= zfddB)95`^`z<~n?4jede;J|?c2M!!KaNxj!0|yQqIB?*=fddB)95`^`z<~n?4jede z;J|?c2M!!KaNxj!0|yQqIB?*=fddB)95`^`z<~n?4jede;J|?c2M!!KaNxj!0|yQq zIB?*=fddB)95`^`z<~n?4jede;J|?c2M!!KaNxj!0|yQqIB?*=fddB)95`^`z<~n? z4jede;J|?c2M!!KaNxj!0|yQqIB?*=fddB)95`^`z<~n?4jede;J|?c2M!!KaNxj! z0|yQqIB?*=fddB)95`^`z<~n?4jede;J|?c2M!!KaNxj!0|yQqIB?*=fddB)95`^` zz<~n?4jede;J|?c2M!!KaNxj!0|yQqIB?*=fddB)95`^`z<~n?4jede;J|?c2M!!K zaNxj!0|yQqIB?*=fddB)95`^`z<~n?4jede;J|?c2M!!KaNxj!0|yQqIB?*=fddB) z95`^`z<~n?4jede;J|?c2M!!KaNxj!0|yQqIB?*=fddB)95`^`z<~n?4jede;J|?c z2M!!KaNxj!0|yQqIB?*=fddB)95`^`z<~n?4jede;J|?c2M!!KaNxj!0|yQqIB?*= zfddB)95`^`z<~n?4jede;J|?c2M!!KaNxj!0|yQqIB?*=fddB)95`^`z<~n?4jede z;J|?c2M!!KaNxj!0|yQqIB?*=fddB)95`^`z<~n?4jede;J|?c2M!!KaNxj!0|yQq zIB?*=fddB)95`^`z<~n?4jede;J|?c2M!!KaNxj!0|yQqIB?*=fddB)95`^`z<~n? z4jede;J|?c2M!!KaNxj!0|yQqIB?*=fddB)95`^`z<~n?4jede;J|?c2M!!KaNxj! z0|yQqIB?*=fddB)95`^`z<~n?4jede;J|?c2M!!KaNxj!0|yQqIB?*=fddB)95`^` zz<~n?4jede;J|?c2M!!KaNxj!0|yQqIB?*=fddB)95`^`z<~n?4jede;J|?c2M!!K zaNxj!0|yQqIB?*=fddB)95`^`z<~n?4jede;J|?c2M!!KaNxj!0|yQqIB?*=fddB) z95`^`z<~n?4jede;J|?c2M!!KaNxj!0|yQqIB?*=fddB)95`^`z<~n?4jede;J|?c z2M!!KaNxj!0|yQqIB?*=fddB)95`^`z<~n?4jede;J|?c2M!!KaNxj!0|yQqIB?*= zfddB)95`^`z<~n?4jede;J|?c2M!!KaNxj!0|yQqIB?*=fddB)95`^`z<~n?4jede z;J|?c2M!!KaNxj!0|yQqIB?*=fddB)95`^`z<~n?4jede;J|?c2M!!KaNxj!0|yQq zIB?*=fddB)95`^`z<~n?4jede;J|?c2M!!KaNxj!0|yQqIB?*=fddB)95`^`z<~n? z4jede;J|?c2M!!KaNxj!0|yQqIB?*=fddB)95`^`z<~n?4jede;J|?c2M!!KaNxj! z0|yQqIB?*=fddB)95`^`z<~n?4jede;J|?c2M!!KaNxj!0|yQqIB?*=fddB)95`^` zz<~n?4jede;J|?c2M!!KaNxj!0|yQqIB?*=fddB)95`^`z<~n?4jede;J|?c2M!!K zaNxj!0|yQqIB?*=fddB)95`^`z<~n?4jede;J|?c2M!!KaNxj!0|yQqIB?*=fddB) z95`^`z<~n?4jede;J|?c2M!!KaNxj!0|yQqIB?*=fddB)95`^`z<~n?4jede;J|?c z2M!!KaNxj!0|yQqIB?*=fddB)95`^`z<~n?4jede;J|?c2M!!KaNxj!0|yQqIB?*= zfddB)95`^`z<~n?4jede;J|?c2M!!KaNxj!0|yQqIB?*=fddB)95`^`z<~n?4jede z;J|?c2M!!KaNxj!0|yQqIB?*=fddB)95`^`z<~n?4jede;J|?c2M!!KaNxj!0|yQq zIB?*=fddB)95`^`z<~n?4jede;J|?c2M!!KaNxj!0|yQqIB?*=fddB)95`^`z<~n? 
z4jede;J|?c2M!!KaNxj!0|yQqIB?*=fddB)95`^`z<~n?4jede;J|?c2M!!KaNxj! z0|yQqIB?*=fddB)95`^`z<~n?4jede;J|?c2M!!KaNxj!0|yQqIB?*=fddB)95`^` zz<~n?4jede;J|?c2M!!KaNxj!0|yQqIB?*=fddB)95`^`z<~n?4jede;J|?c2M!!K zaNxj!0|yQqIB?*=fddB)95`^`z<~n?4jede;J|?c2M!!KaNxj!0|yQqIB?*=fddB) z95`^`z<~n?4jede;J|?c2M!!KaNxj!0|yQqIB?*=fddB)95`^`z<~n?4jede;J|?c z2M!!KaNxj!0|yQqIB?*=fddB)95`^`z<~n?4jede;J|?c2M!!KaNxj!0|yQqIB?*= zfddB)95`^`z<~n?4jede;J|?c2M!!KaNxj!0|yQqIB?*=fddB)95`^`z<~n?4jede z;J|?c2M!!KaNxj!0|yQqIB?*=fddB)95`^`z<~n?4jede;J|?c2M!!KaNxj!0|yQq zIB?*=fddB)95`^`z<~n?4jede;J|?c2M!!KaNxj!0|yQqIB?*=fddB)95`^`z<~n? z4jede;J|?c2M!!KaNxj!0|yQqIB?*=fddB)95`^`z<~n?4jede;J|?c2M!!KaNxj! z0|yQqIB?*=fddB)95`^`z<~n?4jede;J|?c2M!!KaNxj!0|yQqIB?*=fddB)95`^` zz<~n?4jede;J|?c2M!!KaNxj!0|yQqIB?*=fddB)95`^`z<~n?4jede;J|?c2M!!K zaNxj!0|yQqIB?*=fddB)95`^`z<~n?4jede;J|?c2M!!KaNxj!0|yQqIB?*=fddB) z95`^`z<~n?4jede;J|?c2M!!KaNxj!0|yQqIB?*=fddB)95`^`z<~n?4jede;J|?c z2M!!KaNxj!0|yQqIB?*=fddB)95`^`z<~n?4jede;J|?c2M!!KaNxj!0|yQqIB?*= zfddB)95`^`z<~n?4jede;J|?c2M!!KaNxj!0|yQqIB?*=fddB)95`^`z<~n?4jede z;J|?c2M!!KaNxj!0|yQqIB?*=fddB)95`^`z<~n?4jede;J|?c2M!!KaNxj!0|yQq zIB?*=fddB)95`^`z<~n?4jede;J|?c2M!!KaNxj!0|yQqIB?*=fddB)95`^`z<~n? z4jede;J|?c2M!!KaNxj!0|yQqIB?*=fddB)95`^`z<~n?4jede;J|?c2M!!KaNxj! z0|yQqIB?*=fddB)95`^`z<~n?4jede;J|?c2M!!KaNxj!0|yQqIB?*=fddB)95`^` zz<~n?4jede;J|?c2M!!KaNxj!0|yQqIB?*=fddB)95`^`z<~n?4jede;J|?c2M!!K zaNxj!0|yQqIB?*=fddB)95`^`z<~n?4jede;J|?c2M!!KaNxj!0|yQqIB?*=fddB) z95`^`z<~n?4jede;J|?c2M!!KaNxj!0|yQqIB?*=fddB)95`^`z<~n?4jede;J|?c z2M!!KaNxj!0|yQqIB?*=fddB)95`^`z<~n?4jede;J|?c2M!!KaNxj!0|yQqIB?*= zfddB)95`^`z<~n?4jede;J|?c2M!!KaNxj!0|yQqIB?*=fddB)95`^`z<~n?4jede z;J|?c2M!!KaNxj!0|yQqIB?*=fddB)95`^`z<~n?4jede;J|?c2M!!KaNxj!0|yQq zIB?*=fddB)95`^`z<~n?4jede;J|?c2M!!KaNxj!0|yQqIB?*=fddB)95`^`z<~n? z4jede;J|?c2M!!KaNxj!0|yQqIB?*=fddB)95`^`z<~n?4jede;J|?c2M!!KaNxj! z0|yQqIB?*=fddB)95`^`z<~n?4jede;J|?c2M!!KaNxj!0|yQqIB?*=fddB)95`^` zz<~n?4jede;J|?c2M!!KaNxj!0|yQqIB?*=fddB)95`^`z<~n?4jede;J|?c2M!!K zaNxj!0|yQqIB?*=fddB)95`^`z<~n?4jede;J|?c2M!!KaNxj!0|yQqIB?*=fddB) z95`^`z<~n?4jede;J|?c2M!!KaNxj!0|yQqIB?*=fddB)95`^`z<~n?4jede;J|?c z2M!!KaNxj!0|yQqIB?*=fddB)95`^`z<~n?4jede;J|?c2M!!KaNxj!0|yQqIB?*= zfddB)95`^`z<~n?4jede;J|?c2M!!KaNxj!0|yQqIB?*=fddB)95`^`z<~n?4jede z;J|?c2M!!KaNxj!0|yQqIB?*=fddB)95`^`z<~n?4jede;J|?c2M!!KaNxj!0|yQq zIB?*=fddB)95`^`z<~n?4jede;J|?c2M!!KaNxj!0|yQqIB?*=fddB)95`^`z<~n? z4jede;J|?c2M!!KaNxj!0|yQqIB?*=fddB)95`^`z<~n?4jede;J|?c2M!!KaNxj! z0|yQqIB?*=fddB)95`^`z<~n?4jede;J|?c2M!!KaNxj!0|yQqIB?*=fddB)95`^` zz<~n?4jede;J|?c2M!!KaNxj!0|yQqIB?*=fddB)95`^`z<~n?4jede;J|?c2M!!K zaNxj!0|yQqIB?*=fddB)95`^`z<~n?4jede;J|?c2M!!KaNxj!0|yQqIB?*=fddB) z95`^`z<~n?4jede;J|?c2M!!KaNxj!0|yQqIB?*=fddB)95`^`z<~n?4jede;J|?c z2M!!KaNxj!0|yQqIB?*=fddB)95`^`z<~n?4jede;J|?c2M!!KaNxj!0|yQqIB?*= zfddB)95`^`z<~n?4jede;J|?c2M!!KaNxj!0|yQqIB?*=fddB)95`^`z<~n?4jede z;J|?c2M!!KaNxj!0|yQqIB?*=fddB)95`^`z<~n?4jede;J|?c2M!!KaNxj!0|yQq zIB?*=fddB)95`^`z<~n?4jede;J|?c2M!!KaNxj!0|yQqIB?*=fddB)95`^`z<~n? z4jede;J|?c2M!!KaNxj!0|yQqIB?*=fddB)95`^`z<~n?4jede;J|?c2M!!KaNxj! 
z0|yQqIB?*=fddB)95`^`z<~n?4jede;J|?c2M!!KaNxj!0|yQqIB?*=fddB)95`^` zz<~n?4jede;J|?c2M!!KaNxj!0|yQqIB?*=fddB)95`^`z<~n?4jede;J|?c2M!!K zaNxj!0|yQqIB?*=fddB)95`^`z<~n?4jede;J|?c2M!!KaNxj!0|yQqIB?*=fddB) z95`^`z<~n?4jede;J|?c2M!!KaNxj!0|yQqIB?*=fddB)95`^`z<~n?4jede;J|?c z2M!!KaNxj!0|yQqIB?*=fddB)95`^`z<~n?4jede;J|?c2M!!KaNxj!0|yQqIB?*= zfddB)95`^`z<~n?4jede;J|?c2M!!KaNxj!0|yQqIB?*=fddB)95`^`z<~n?4jede z;J|?c2M!!KaNxj!0|yQqIB?*=fddB)95`^`z<~n?4jede;J|?c2M!!KaNxj!0|yQq zIB?*=fddB)95`^`z<~n?4jede;J|?c2M!!KaNxj!0|yQqIB?*=fddB)95`^`z<~n? z4jede;J|?c2M!!KaNxj!0|yQqIB?*=fddB)95`^`z<~n?4jede;J|?c2M!!KaNxj! z0|yQqIB?*=fddB)95`^`z<~n?4jede;J|?c2M!!KaNxj!0|yQqIB?*=fddB)95`^` zz<~n?4jede;J|?c2M!!KaNxj!0|yQqIB?*=fddB)95`^`z<~n?4jede;J|?c2M!!K zaNxj!0|yQqIB?*=fddB)95`^`z<~n?4jede;J|?c2M!!KaNxj!0|yQqIB?*=fddB) z95`^`z<~n?4jede;J|?c2M!!KaNxj!0|yQqIB?*=fddB)95`^`z<~n?4jede;J|?c z2M!!KaNxj!0|yQqIB?*=fddB)95`^`z<~n?4jede;J|?c2M!!KaNxj!0|yQqIB?*= zfddB)95`^`z<~n?4jede;J|?c2M!!KaNxj!0|yQqIB?*=fddB)95`^`z<~n?4jede z;J|?c2M!!KaNxj!0|yQqIB?*=fddB)95`^`z<~n?4jede;J|?c2M!!KaNxj!0|yQq zIB?*=fddB)95`^`z<~n?4jede;J|?c2M!!KaNxj!0|yQqIB?*=fddB)95`^`z<~n? z4jede;J|?c2M!!KaNxj!0|yQqIB?*=fddB)95`^`z<~n?4jede;J|?c2M!!KaNxj! z0|yQqIB?*=fddB)95`^`z<~n?4jede;J|?c2M!!KaNxj!0|yQqIB?*=fddB)95`^` zz<~n?4jede;J|?c2M!!KaNxj!0|yQqIB?*=fddB)95`^`z<~n?4jede;J|?c2M!!K zaNxj!0|yQqIB?*=fddB)95`^`z<~n?4jede;J|?c2M!!KaNxj!0|yQqIB?*=fddB) z95`^`z<~n?4jede;J|?c2M!!KaNxj!0|yQqIB?*=fddB)95`^`z<~n?4jede;J|?c z2M!!KaNxj!0|yQqIB?*=fddB)95`^`z<~n?4jede;J|?c2M!!KaNxj!0|yQqIB?*= zfddB)95`^`z<~n?4jede;J|?c2M!!KaNxj!0|yQqIB?*=fddB)95`^`z<~n?4jede z;J|?c2M!!KaNxj!0|yQqIB?*=fddB)95`^`z<~n?4jede;J|?c2M!!KaNxj!0|yQq zIB?*=fddB)95`^`z<~n?4jede;J|?c2M!!KaNxj!0|yQqIB?*=fddB)95`^`z<~n? z4jede;J|?c2M!!KaNxj!0|yQqIB?*=fddB)95`^`z<~n?4jede;J|?c2M!!KaNxj! z0|yQqIB?*=fddB)95`^`z<~n?4jede;J|?c2M!!KaNxj!0|yQqIB?*=fddB)95`^` zz<~n?4jede;J|?c2M!!KaNxj!0|yQqIB?*=fddB)95`^`z<~n?4jede;J|?c2M!!K zaNxj!0|yQqIB?*=fddB)95`^`z<~n?4jede;J|?c2M!!KaNxj!0|yQqIB?*=fddB) z95`^`z<~n?4jede;J|?c2M!!KaNxj!0|yQqIB?*=fddB)95`^`z<~n?4jede;J|?c z2M!!KaNxj!0|yQqIB?*=fddB)95`^`z<~n?4jede;J|?c2M!!KaNxj!0|yQqIB?*= zfddB)95`^`z<~n?4jede;J|?c2M!!KaNxj!0|yQqIB?*=fddB)95`^`z<~n?4jede z;J|?c2M!!KaNxj!0|yQqIB?*=fddB)95`^`z<~n?4jede;J|?c2M!!KaNxj!0|yQq zIB?*=fddB)95`^`z<~n?4jede;J|?c2M!!KaNxj!0|yQqIB?*=fddB)95`^`z<~n? z4jede;J|?c2M!!KaNxj!0|yQqIB?*=fddB)95`^`z<~n?4jede;J|?c2M!!KaNxj! z0|yQqIB?*=fddB)95`^`z<~n?4jede;J|?c2M!!KaNxj!0|yQqIB?*=fddB)95`^` zz<~n?4jede;J|?c2M!!KaNxj!0|yQqIB?*=fddB)95`^`z<~n?4jede;J|?c2M!!K zaNxj!0|yQqIB?*=fddB)95`^`z<~n?4jede;J|?c2M!!KaNxj!0|yQqIB?*=fddB) z95`^`z<~n?4jede;J|?c2M!!KaNxj!0|yQqIB?*=fddB)95`^`z<~n?4jede;J|?c z2M!!KaNxj!0|yQqIB?*=fddB)95`^`z<~n?4jede;J|?c2M!!KaNxj!0|yQqIB?*= zfddB)95`^`z<~n?4jede;J|?c2M!!KaNxj!0|yQqIB?*=fddB)95`^`z<~n?4jede z;J|?c2M!!KaNxj!0|yQqIB?*=fddB)95`^`z<~n?4jede;J|?c2M!!KaNxj!0|yQq zIB?*=fddB)95`^`z<~n?4jede;J|?c2M!!KaNxj!0|yQqIB?*=fddB)95`^`z<~n? z4jede;J|?c2M!!KaNxj!0|yQqIB?*=fddB)95`^`z<~n?4jede;J|?c2M!!KaNxj! z0|yQqIB?*=fddB)95`^`z<~n?4jede;J|?c2M!!KaNxj!0|yQqIB?*=fddB)95`^` zz<~n?4jede;J|?c2M!!KaNxj!0|yQqIB?*=fddB)95`^`z<~n?4jede;J|?c2M!!K zaNxj!0|yQqIB?*=fddB)95`^`z<~n?4jede;J|?c2M!!KaNxj!0|yQqIB?*=fddB) z95`^`z<~n?4jede;J|?c2M!!KaNxj!0|yQqIB?*=fddB)95`^`z<~n?4jede;J|?c z2M!!KaNxj!0|yQqIB?*=fddB)95`^`z<~n?4jede;J|?c2M!!KaNxj!0|yQqIB?*= zfddB)95`^`z<~n?4jede;J|?c2M!!KaNxj!0|yQqIB?*=fddB)95`^`z<~n?4jede z;J|?c2M!!KaNxj!0|yQqIB?*=fddB)95`^`z<~n?4jede;J|?c2M!!KaNxj!0|yQq zIB?*=fddB)95`^`z<~n?4jede;J|?c2M!!KaNxj!0|yQqIB?*=fddB)95`^`z<~n? 
z4jede;J|?c2M!!KaNxj!0|yQqIB?*=fddB)95`^`z<~n?4jede;J|?c2M!!KaNxj! z0|yQqIB?*=fddB)95`^`z<~n?4jede;J|?c2M!!KaNxj!0|yQqIB?*=fddB)95`^` zz<~n?4jede;J|?c2M!!KaNxj!0|yQqIB?*=fddB)95`^`z<~n?4jede;J|?c2M!!K zaNxj!0|yQqIB?*=fddB)95`^`z<~n?4jede;J|?c2M!!KaNxj!0|yQqIB?*=fddB) z95`^`z<~n?4jede;J|?c2M!!KaNxj!0|yQqIB?*=fddB)95`^`z<~n?4jede;J|?c z2M!!KaNxj!0|yQqIB?*=fddB)95`^`z<~n?4jede;J|?c2M!!KaNxj!0|yQqIB?*= zfddB)95`^`z<~n?4jede;J|?c2M!!KaNxj!0|yQqIB?*=fddB)95`^`z<~n?4jede z;J|?c2M!!KaNxj!0|yQqIB?*=fddB)95`^`z<~n?4jede;J|?c2M!!KaNxj!0|yQq zIB?*=fddB)95`^`z<~n?4jede;J|?c2M!!KaNxj!0|yQqIB?*=fddB)95`^`z<~n? z4jede;J|?c2M!!KaNxj!0|yQqIB?*=fddB)95`^`z<~n?4jede;J|?c2M!!KaNxj! z0|yQqIB?*=fddB)95`^`z<~n?4jede;J|?c2M!!KaNxj!0|yQqIB?*=fddB)95`^` zz<~n?4jede;J|?c2M!!KaNxj!0|yQqIB?*=fddB)95`^`z<~n?4jede;J|?c2M!!K zaNxj!0|yQqIB?*=fddB)95`^`z<~n?4jede;J|?c2M!!KaNxj!0|yQqIB?*=fddB) z95`^`z<~n?4jede;J|?c2M!!KaNxj!0|yQqIB?*=fddB)95`^`z<~n?4jede;J|?c z2M!!KaNxj!0|yQqIB?*=fddB)95`^`z<~n?4jede;J|?c2M!!KaNxj!0|yQqIB?*= zfddB)95`^`z<~n?4jede;J|?c2M!!KaNxj!0|yQqIB?*=fddB)95`^`z<~n?4jede z;J|?c2M!!KaNxj!0|yQqIB?*=fddB)95`^`z<~n?4jede;J|?c2M!!KaNxj!0|yQq zIB?*=fddB)95`^`z<~n?4jede;J|?c2M!!KaNxj!0|yQqIB?*=fddB)95`^`z<~n? z4jede;J|?c2M!!KaNxj!0|yQqIB?*=fddB)95`^`z<~n?4jede;J|?c2M!!KaNxj! z0|yQqIB?*=fddB)95`^`z<~n?4jede;J|?c2M!!KaNxj!0|yQqIB?*=fddB)95`^` zz<~n?4jede;J|?c2M!!KaNxj!0|yQqIB?*=fddB)95`^`z<~n?4jede;J|?c2M!!K zaNxj!0|yQqIB?*=fddB)95`^`z<~n?4jede;J|?c2M!!KaNxj!0|yQqIB?*=fddB) z95`^`z<~n?4jede;J|?c2M!!KaNxj!0|yQqIB?*=fddB)95`^`z<~n?4jede;J|?c z2M!!KaNxj!0|yQqIB?*=fddB)95`^`z<~n?4jede;J|?c2M!!KaNxj!0|yQqIB?*= zfddB)95`^`z<~n?4jede;J|?c2M!!KaNxj!0|yQqIB?*=fddB)95`^`z<~n?4jede z;J|?c2M!!KaNxj!0|yQqIB?*=fddB)95`^`z<~n?4jede;J|?c2M!!KaNxj!0|yQq zIB?*=fddB)95`^`z<~n?4jede;J|?c2M!!KaNxj!0|yQqIB?*=fddB)95`^`z<~n? z4jede;J|?c2M!!KaNxj!0|yQqIB?*=fddB)95`^`z<~n?4jede;J|?c2M!!KaNxj! z0|yQqIB?*=fddB)95`^`z<~n?4jede;J|?c2M!!KaNxj!0|yQqIB?*=fddB)95`^` zz<~n?4jede;J|?c2M!!KaNxj!0|yQqIB?*=fddB)95`^`z<~n?4jede;J|?c2M!!K zaNxj!0|yQqIB?*=fddB)95`^`z<~n?4jede;J|?c2M!!KaNxj!0|yQqIB?*=fddB) z95`^`z<~n?4jede;J|?c2M!!KaNxj!0|yQqIB?*=fddB)95`^`z<~n?4jede;J|?c z2M!!KaNxj!0|yQqIB?*=fddB)95`^`z<~n?4jede;J|?c2M!!KaNxj!0|yQqIB?*= zfddB)95`^`z<~n?4jede;J|?c2M!!KaNxj!0|yQqIB?*=fddB)95`^`z<~n?4jede z;J|?c2M!!KaNxj!0|yQqIB?*=fddB)95`^`z<~n?4jede;J|?c2M!!KaNxj!0|yQq zIB?*=fddB)95`^`z<~n?4jede;J|?c2M!!KaNxj!0|yQqIB?*=fddB)95`^`z<~n? z4jede;J|?c2M!!KaNxj!0|yQqIB?*=fddB)95`^`z<~n?4jede;J|?c2M!!KaNxj! z0|yQqIB?*=fddB)95`^`z<~n?4jede;J|?c2M!!KaNxj!0|yQqIB?*=fddB)95`^` zz<~n?4jede;J|?c2M!!KaNxj!0|yQqIB?*=fddB)95`^`z<~n?4jede;J|?c2M!!K zaNxj!0|yQqIB?*=fddB)95`^`z<~n?4jede;J|?c2M!!KaNxj!0|yQqIB?*=fddB) z95`^`z<~n?4jede;J|?c2M!!KaNxj!0|yQqIB?*=fddB)95`^`z<~n?4jede;J|?c z2M!!KaNxj!0|yQqIB?*=fddB)95`^`z<~n?4jede;J|?c2M!!KaNxj!0|yQqIB?*= zfddB)95`^`z<~n?4jede;J|?c2M!!KaNxj!0|yQqIB?*=fddB)95`^`z<~n?4jede z;J|?c2M!!KaNxj!0|yQqIB?*=fddB)95`^`z<~n?4jede;J|?c2M!!KaNxj!0|yQq zIB?*=fddB)95`^`z<~n?4jede;J|?c2M!!KaNxj!0|yQqIB?*=fddB)95`^`z<~n? z4jede;J|?c2M!!KaNxj!0|yQqIB?*=fddB)95`^`z<~n?4jede;J|?c2M!!KaNxj! 
z0|yQqIB?*=fddB)95`^`z<~n?4jede;J|?c2M!!KaNxj!0|yQqIB?*=fddB)95`^` zz<~n?4jede;J|?c2M!!KaNxj!0|yQqIB?*=fddB)95`^`z<~n?4jede;J|?c2M!!K zaNxj!0|yQqIB?*=fddB)95`^`z<~n?4jede;J|?c2M!!KaNxj!0|yQqIB?*=fddB) z95`^`z<~n?4jede;J|?c2M!!KaNxj!0|yQqIB?*=fddB)95`^`z<~n?4jede;J|?c z2M!!KaNxj!0|yQqIB?*=fddB)95`^`z<~n?4jede;J|?c2M!!KaNxj!0|yQqIB?*= zfddB)95`^`z<~n?4jede;J|?c2M!!KaNxj!0|yQqIB?*=fddB)95`^`z<~n?4jede z;J|?c2M!!KaNxj!0|yQqIB?*=fddB)95`^`z<~n?4jede;J|?c2M!!KaNxj!0|yQq zIB?*=fddB)95`^`z<~n?4jede;J|?c2M!!KaNxj!0|yQqIB?*=fddB)95`^`z<~n? z4jede;J|?c2M!!KaNxj!0|yQqIB?*=fddB)95`^`z<~n?4jede;J|?c2M!!KaNxj! z0|yQqIB?*=fddB)95`^`z<~n?4jede;J|?c2M!!KaNxj!0|yQqIB?*=fddB)95`^` zz<~n?4jede;J|?c2M!!KaNxj!0|yQqIB?*=fddB)95`^`z<~n?4jede;J|?c2M!!K zaNxj!0|yQqIB?*=fddB)95`^`z<~n?4jede;J|?c2M!!KaNxj!0|yQqIB?*=fddB) z95`^`z<~n?4jede;J|?c2M!!KaNxj!0|yQqIB?*=fddB)95`^`z<~n?4jede;J|?c z2M!!KaNxj!0|yQqIB?*=fddB)95`^`z<~n?4jede;J|?c2M!!KaNxj!0|yQqIB?*= zfddB)95`^`z<~n?4jede;J|?c2M!!KaNxj!0|yQqIB?*=fddB)95`^`z<~n?4jede z;J|?c2M!!KaNxj!0|yQqIB?*=fddB)95`^`z<~n?4jede;J|?c2M!!KaNxj!0|yQq zIB?*=fddB)95`^`z<~n?4jede;J|?c2M!!KaNxj!0|yQqIB?*=0qxMS00000z#xC? zY3wM33>YwAz<>b*1`HT5V8DO@0|pEjFkrxd0RsjM7%*VKfB^#r3>YwAz<>b*1`HT5 zV8DO@0|pEjFkrxd0RsjM7%*VKfB^#r3>YwAz<>b*1`HT5V8DO@0|pEjFkrxd0RsjM z7%*VKfB^#r3>YwAz<>b*1`HT5V8DO@0|pEjFkrxd0RsjM7%*VKfB^#r3>YwAz<>b* z1`HT5V8DO@0|pEjFkrxd0RsjM7%*VKfB^#r3>YwAz<>b*1`HT5V8DO@0|pEjFkrxd z0RsjM7%*VKfB^#r3>YwAz<>b*1`HT5V8DO@0|pEjFkrxd0RsjM7%*VKfB^#r3>YwA zz<>b*1`HT5V8DO@0|pEjFkrxd0RsjM7%*VKfB^#r3>YwAz<>b*1`HT5V8DO@0|pEj zFkrxd0RsjM7%*VKfB^#r3>YwAz<>b*1`HT5V8DO@0|pEjFkrxd0RsjM7%*VKfB^#r z3>YwAz<>b*1`HT5V8DO@0|pEjFkrxd0RsjM7%*VKfB^#r3>YwAz<>b*1`HT5V8DO@ z0|pEjFkrxd0RsjM7%*VKfB^#r3>YwAz<>b*1`HT5V8DO@0|pEjFkrxd0RsjM7%*VK zfB^#r3>YwAz<>b*1`HT5V8DO@0|pEjFkrxd0RsjM7%*VKfB^#r3>YwAz<>b*1`HT5 zV8DO@0|pEjFkrxd0RsjM7%*VKfB^#r3>YwAz<>b*1`HT5V8DO@0|pEjFkrxd0RsjM z7%*VKfB^#r3>YwAz<>b*1`HT5V8DO@0|pEjFkrxd0RsjM7%*VKfB^#r3>YwAz<>b* z1`HT5V8DO@0|pEjFkrxd0RsjM7%*VKfB^#r3>YwAz<>b*1`HT5V8DO@0|pEjFkrxd z0RsjM7%*VKfB^#r3>YwAz<>b*1`HT5V8DO@0|pEjFkrxd0RsjM7%*VKfB^#r3>YwA zz<>b*1`HT5V8DO@0|pEjFkrxd0RsjM7%*VKfB^#r3>YwAz<>b*1`HT5V8DO@0|pEj zFkrxd0RsjM7%*VKfB^#r3>YwAz<>b*1`HT5V8DO@0|pEjFkrxd0RsjM7%*VKfB^#r z3>YwAz<>b*1`HT5V8DO@0|pEjFkrxd0RsjM7%*VKfB^#r3>YwAz<>b*1`HT5V8DO@ z0|pEjFkrxd0RsjM7%*VKfB^#r3>YwAz<>b*1`HT5V8DO@0|pEjFkrxd0RsjM7%*VK zfB^#r3>YwAz<>b*1`HT5V8DO@0|pEjFkrxd0RsjM7%*VKfB^#r3>YwAz<>b*1`HT5 zV8DO@0|pEjFkrxd0RsjM7%*VKfB^#r3>YwAz<>b*1`HT5V8DO@0|pEjFkrxd0RsjM z7%*VKfB^#r3>YwAz<>b*1`HT5V8DO@0|pEjFkrxd0RsjM7%*VKfB^#r3>YwAz<>b* z1`HT5V8DO@0|pEjFkrxd0RsjM7%*VKfB^#r3>YwAz<>b*1`HT5V8DO@0|pEjFkrxd z0RsjM7%*VKfB^#r3>YwAz<>b*1`HT5V8DO@0|pEjFkrxd0RsjM7%*VKfB^#r3>YwA Pz<>b*1`HT5VBk3rU?TCM literal 0 HcmV?d00001 diff --git a/journalbeat/tests/system/input/test.registry b/journalbeat/tests/system/input/test.registry new file mode 100644 index 000000000000..5c6680edb42b --- /dev/null +++ b/journalbeat/tests/system/input/test.registry @@ -0,0 +1,6 @@ +update_time: 2018-09-11T10:06:50.895829905Z +journal_entries: +- path: /home/n/go/src/github.com/elastic/beats/journalbeat/tests/system/input/test.journal + cursor: s=7d22fd7aa0c7482d88c303f47d5f32dc;i=2fcb;b=902dc834f07d4f41ade064f6b2ef8b4f;m=1bf0ff5c6d;t=55913a25fe765;x=c7e6480eec30822b + realtime_timestamp: 1505315746998117 + monotonic_timestamp: 120007384173 diff --git a/journalbeat/tests/system/journalbeat.py b/journalbeat/tests/system/journalbeat.py new file mode 100644 index 000000000000..11381395e29c --- /dev/null +++ b/journalbeat/tests/system/journalbeat.py @@ -0,0 
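The `cursor` in this fixture is journald's opaque position token; per the systemd documentation it encodes, among other things, the entry's sequence number (`i=`), boot ID (`b=`), and monotonic (`m=`) and realtime (`t=`) timestamps. Journalbeat persists it in the registry at checkpoint time so that a restarted reader can perform a cursor-based seek and resume right after the last published entry.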
diff --git a/journalbeat/tests/system/journalbeat.py b/journalbeat/tests/system/journalbeat.py
new file mode 100644
index 000000000000..11381395e29c
--- /dev/null
+++ b/journalbeat/tests/system/journalbeat.py
@@ -0,0 +1,13 @@
+import os
+import sys
+sys.path.append(os.path.join(os.path.dirname(__file__), '../../../libbeat/tests/system'))
+from beat.beat import TestCase
+
+
+class BaseTest(TestCase):
+
+    @classmethod
+    def setUpClass(self):
+        self.beat_name = "journalbeat"
+        self.beat_path = os.path.abspath(os.path.join(os.path.dirname(__file__), "../../"))
+        super(BaseTest, self).setUpClass()
diff --git a/journalbeat/tests/system/test_base.py b/journalbeat/tests/system/test_base.py
new file mode 100644
index 000000000000..7b7d16aefb4e
--- /dev/null
+++ b/journalbeat/tests/system/test_base.py
@@ -0,0 +1,114 @@
+from journalbeat import BaseTest
+
+import os
+import sys
+import unittest
+import time
+
+
+class Test(BaseTest):
+
+    @unittest.skipUnless(sys.platform.startswith("linux"), "Journald only on Linux")
+    def test_start_with_local_journal(self):
+        """
+        Journalbeat is able to start with the local journal.
+        """
+
+        self.render_config_template(
+            path=os.path.abspath(self.working_dir) + "/log/*"
+        )
+        journalbeat_proc = self.start_beat()
+
+        self.wait_until(lambda: self.log_contains(
+            "journalbeat is running"), max_timeout=10)
+
+        exit_code = journalbeat_proc.kill_and_wait()
+        assert exit_code == 0
+
+    @unittest.skipUnless(sys.platform.startswith("linux"), "Journald only on Linux")
+    def test_start_with_journal_directory(self):
+        """
+        Journalbeat is able to open a directory of journal files and start tailing them.
+        """
+
+        self.render_config_template(
+            journal_path=self.beat_path + "/tests/system/input/",
+            path=os.path.abspath(self.working_dir) + "/log/*"
+        )
+        journalbeat_proc = self.start_beat()
+
+        required_log_snippets = [
+            # journalbeat can be started
+            "journalbeat is running",
+            # journalbeat can tail the journal files in the directory
+            "Tailing the journal file",
+        ]
+        for snippet in required_log_snippets:
+            self.wait_until(lambda: self.log_contains(snippet),
+                            name="Line in '{}' Journalbeat log".format(snippet))
+
+        exit_code = journalbeat_proc.kill_and_wait()
+        assert exit_code == 0
+
+    @unittest.skipUnless(sys.platform.startswith("linux"), "Journald only on Linux")
+    def test_start_with_selected_journal_file(self):
+        """
+        Journalbeat is able to open a journal file and start to read it from the beginning.
+        """
+
+        self.render_config_template(
+            journal_path=self.beat_path + "/tests/system/input/test.journal",
+            seek_method="head",
+            path=os.path.abspath(self.working_dir) + "/log/*"
+        )
+        journalbeat_proc = self.start_beat()
+
+        required_log_snippets = [
+            # journalbeat can be started
+            "journalbeat is running",
+            # journalbeat can read the journal file from the beginning
+            "Reading from the beginning of the journal file",
+            # message can be read from test journal
+            "\"message\": \"thinkpad_acpi: unhandled HKEY event 0x60b0\"",
+        ]
+        for snippet in required_log_snippets:
+            self.wait_until(lambda: self.log_contains(snippet),
+                            name="Line in '{}' Journalbeat log".format(snippet))
+
+        exit_code = journalbeat_proc.kill_and_wait()
+        assert exit_code == 0
+
+    @unittest.skipUnless(sys.platform.startswith("linux"), "Journald only on Linux")
+    def test_read_events_with_existing_registry(self):
+        """
+        Journalbeat is able to continue reading from a journal with an existing registry file.
+ """ + + self.render_config_template( + journal_path=self.beat_path + "/tests/system/input/test.journal", + seek_method="cursor", + registry_file=self.beat_path + "/tests/system/input/test.registry", + path=os.path.abspath(self.working_dir) + "/log/*", + ) + journalbeat_proc = self.start_beat() + + required_log_snippets = [ + # journalbeat can be started + "journalbeat is running", + # journalbeat can seek to the position defined in the cursor + "Seeked to position defined in cursor", + # message can be read from test journal + "please report the conditions when this event happened to", + # only one event is read and published + "journalbeat successfully published 1 events", + ] + for snippet in required_log_snippets: + self.wait_until(lambda: self.log_contains(snippet), + name="Line in '{}' Journalbeat log".format(snippet)) + + exit_code = journalbeat_proc.kill_and_wait() + assert exit_code == 0 + + +if __name__ == '__main__': + unittest.main() diff --git a/vendor/github.com/coreos/go-systemd/LICENSE b/vendor/github.com/coreos/go-systemd/LICENSE new file mode 100644 index 000000000000..37ec93a14fdc --- /dev/null +++ b/vendor/github.com/coreos/go-systemd/LICENSE @@ -0,0 +1,191 @@ +Apache License +Version 2.0, January 2004 +http://www.apache.org/licenses/ + +TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION + +1. Definitions. + +"License" shall mean the terms and conditions for use, reproduction, and +distribution as defined by Sections 1 through 9 of this document. + +"Licensor" shall mean the copyright owner or entity authorized by the copyright +owner that is granting the License. + +"Legal Entity" shall mean the union of the acting entity and all other entities +that control, are controlled by, or are under common control with that entity. +For the purposes of this definition, "control" means (i) the power, direct or +indirect, to cause the direction or management of such entity, whether by +contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the +outstanding shares, or (iii) beneficial ownership of such entity. + +"You" (or "Your") shall mean an individual or Legal Entity exercising +permissions granted by this License. + +"Source" form shall mean the preferred form for making modifications, including +but not limited to software source code, documentation source, and configuration +files. + +"Object" form shall mean any form resulting from mechanical transformation or +translation of a Source form, including but not limited to compiled object code, +generated documentation, and conversions to other media types. + +"Work" shall mean the work of authorship, whether in Source or Object form, made +available under the License, as indicated by a copyright notice that is included +in or attached to the work (an example is provided in the Appendix below). + +"Derivative Works" shall mean any work, whether in Source or Object form, that +is based on (or derived from) the Work and for which the editorial revisions, +annotations, elaborations, or other modifications represent, as a whole, an +original work of authorship. For the purposes of this License, Derivative Works +shall not include works that remain separable from, or merely link (or bind by +name) to the interfaces of, the Work and Derivative Works thereof. 
+ +"Contribution" shall mean any work of authorship, including the original version +of the Work and any modifications or additions to that Work or Derivative Works +thereof, that is intentionally submitted to Licensor for inclusion in the Work +by the copyright owner or by an individual or Legal Entity authorized to submit +on behalf of the copyright owner. For the purposes of this definition, +"submitted" means any form of electronic, verbal, or written communication sent +to the Licensor or its representatives, including but not limited to +communication on electronic mailing lists, source code control systems, and +issue tracking systems that are managed by, or on behalf of, the Licensor for +the purpose of discussing and improving the Work, but excluding communication +that is conspicuously marked or otherwise designated in writing by the copyright +owner as "Not a Contribution." + +"Contributor" shall mean Licensor and any individual or Legal Entity on behalf +of whom a Contribution has been received by Licensor and subsequently +incorporated within the Work. + +2. Grant of Copyright License. + +Subject to the terms and conditions of this License, each Contributor hereby +grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, +irrevocable copyright license to reproduce, prepare Derivative Works of, +publicly display, publicly perform, sublicense, and distribute the Work and such +Derivative Works in Source or Object form. + +3. Grant of Patent License. + +Subject to the terms and conditions of this License, each Contributor hereby +grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, +irrevocable (except as stated in this section) patent license to make, have +made, use, offer to sell, sell, import, and otherwise transfer the Work, where +such license applies only to those patent claims licensable by such Contributor +that are necessarily infringed by their Contribution(s) alone or by combination +of their Contribution(s) with the Work to which such Contribution(s) was +submitted. If You institute patent litigation against any entity (including a +cross-claim or counterclaim in a lawsuit) alleging that the Work or a +Contribution incorporated within the Work constitutes direct or contributory +patent infringement, then any patent licenses granted to You under this License +for that Work shall terminate as of the date such litigation is filed. + +4. Redistribution. 
+ +You may reproduce and distribute copies of the Work or Derivative Works thereof +in any medium, with or without modifications, and in Source or Object form, +provided that You meet the following conditions: + +You must give any other recipients of the Work or Derivative Works a copy of +this License; and +You must cause any modified files to carry prominent notices stating that You +changed the files; and +You must retain, in the Source form of any Derivative Works that You distribute, +all copyright, patent, trademark, and attribution notices from the Source form +of the Work, excluding those notices that do not pertain to any part of the +Derivative Works; and +If the Work includes a "NOTICE" text file as part of its distribution, then any +Derivative Works that You distribute must include a readable copy of the +attribution notices contained within such NOTICE file, excluding those notices +that do not pertain to any part of the Derivative Works, in at least one of the +following places: within a NOTICE text file distributed as part of the +Derivative Works; within the Source form or documentation, if provided along +with the Derivative Works; or, within a display generated by the Derivative +Works, if and wherever such third-party notices normally appear. The contents of +the NOTICE file are for informational purposes only and do not modify the +License. You may add Your own attribution notices within Derivative Works that +You distribute, alongside or as an addendum to the NOTICE text from the Work, +provided that such additional attribution notices cannot be construed as +modifying the License. +You may add Your own copyright statement to Your modifications and may provide +additional or different license terms and conditions for use, reproduction, or +distribution of Your modifications, or for any such Derivative Works as a whole, +provided Your use, reproduction, and distribution of the Work otherwise complies +with the conditions stated in this License. + +5. Submission of Contributions. + +Unless You explicitly state otherwise, any Contribution intentionally submitted +for inclusion in the Work by You to the Licensor shall be under the terms and +conditions of this License, without any additional terms or conditions. +Notwithstanding the above, nothing herein shall supersede or modify the terms of +any separate license agreement you may have executed with Licensor regarding +such Contributions. + +6. Trademarks. + +This License does not grant permission to use the trade names, trademarks, +service marks, or product names of the Licensor, except as required for +reasonable and customary use in describing the origin of the Work and +reproducing the content of the NOTICE file. + +7. Disclaimer of Warranty. + +Unless required by applicable law or agreed to in writing, Licensor provides the +Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, +including, without limitation, any warranties or conditions of TITLE, +NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are +solely responsible for determining the appropriateness of using or +redistributing the Work and assume any risks associated with Your exercise of +permissions under this License. + +8. Limitation of Liability. 
+ +In no event and under no legal theory, whether in tort (including negligence), +contract, or otherwise, unless required by applicable law (such as deliberate +and grossly negligent acts) or agreed to in writing, shall any Contributor be +liable to You for damages, including any direct, indirect, special, incidental, +or consequential damages of any character arising as a result of this License or +out of the use or inability to use the Work (including but not limited to +damages for loss of goodwill, work stoppage, computer failure or malfunction, or +any and all other commercial damages or losses), even if such Contributor has +been advised of the possibility of such damages. + +9. Accepting Warranty or Additional Liability. + +While redistributing the Work or Derivative Works thereof, You may choose to +offer, and charge a fee for, acceptance of support, warranty, indemnity, or +other liability obligations and/or rights consistent with this License. However, +in accepting such obligations, You may act only on Your own behalf and on Your +sole responsibility, not on behalf of any other Contributor, and only if You +agree to indemnify, defend, and hold each Contributor harmless for any liability +incurred by, or claims asserted against, such Contributor by reason of your +accepting any such warranty or additional liability. + +END OF TERMS AND CONDITIONS + +APPENDIX: How to apply the Apache License to your work + +To apply the Apache License to your work, attach the following boilerplate +notice, with the fields enclosed by brackets "[]" replaced with your own +identifying information. (Don't include the brackets!) The text should be +enclosed in the appropriate comment syntax for the file format. We also +recommend that a file or class name and description of purpose be included on +the same "printed page" as the copyright notice for easier identification within +third-party archives. + + Copyright [yyyy] [name of copyright owner] + + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. + You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. diff --git a/vendor/github.com/coreos/go-systemd/NOTICE b/vendor/github.com/coreos/go-systemd/NOTICE new file mode 100644 index 000000000000..23a0ada2fbb5 --- /dev/null +++ b/vendor/github.com/coreos/go-systemd/NOTICE @@ -0,0 +1,5 @@ +CoreOS Project +Copyright 2018 CoreOS, Inc + +This product includes software developed at CoreOS, Inc. +(http://www.coreos.com/). diff --git a/vendor/github.com/coreos/go-systemd/sdjournal/functions.go b/vendor/github.com/coreos/go-systemd/sdjournal/functions.go new file mode 100644 index 000000000000..e132369c1274 --- /dev/null +++ b/vendor/github.com/coreos/go-systemd/sdjournal/functions.go @@ -0,0 +1,66 @@ +// Copyright 2015 RedHat, Inc. +// Copyright 2015 CoreOS, Inc. +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. 
+// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +package sdjournal + +import ( + "github.com/coreos/pkg/dlopen" + "sync" + "unsafe" +) + +var ( + // lazy initialized + libsystemdHandle *dlopen.LibHandle + + libsystemdMutex = &sync.Mutex{} + libsystemdFunctions = map[string]unsafe.Pointer{} + libsystemdNames = []string{ + // systemd < 209 + "libsystemd-journal.so.0", + "libsystemd-journal.so", + + // systemd >= 209 merged libsystemd-journal into libsystemd proper + "libsystemd.so.0", + "libsystemd.so", + } +) + +func getFunction(name string) (unsafe.Pointer, error) { + libsystemdMutex.Lock() + defer libsystemdMutex.Unlock() + + if libsystemdHandle == nil { + h, err := dlopen.GetHandle(libsystemdNames) + if err != nil { + return nil, err + } + + libsystemdHandle = h + } + + f, ok := libsystemdFunctions[name] + if !ok { + var err error + f, err = libsystemdHandle.GetSymbolPointer(name) + if err != nil { + return nil, err + } + + libsystemdFunctions[name] = f + } + + return f, nil +} diff --git a/vendor/github.com/coreos/go-systemd/sdjournal/journal.go b/vendor/github.com/coreos/go-systemd/sdjournal/journal.go new file mode 100644 index 000000000000..9f3d9234239b --- /dev/null +++ b/vendor/github.com/coreos/go-systemd/sdjournal/journal.go @@ -0,0 +1,1120 @@ +// Copyright 2015 RedHat, Inc. +// Copyright 2015 CoreOS, Inc. +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +// Package sdjournal provides a low-level Go interface to the +// systemd journal wrapped around the sd-journal C API. +// +// All public read methods map closely to the sd-journal API functions. See the +// sd-journal.h documentation[1] for information about each function. 
+//
+// To write to the journal, see the pure-Go "journal" package
+//
+// [1] http://www.freedesktop.org/software/systemd/man/sd-journal.html
+package sdjournal

+// #include <systemd/sd-journal.h>
+// #include <systemd/sd-id128.h>
+// #include <stdlib.h>
+// #include <syslog.h>
+//
+// int
+// my_sd_journal_open(void *f, sd_journal **ret, int flags)
+// {
+//   int (*sd_journal_open)(sd_journal **, int);
+//
+//   sd_journal_open = f;
+//   return sd_journal_open(ret, flags);
+// }
+//
+// int
+// my_sd_journal_open_directory(void *f, sd_journal **ret, const char *path, int flags)
+// {
+//   int (*sd_journal_open_directory)(sd_journal **, const char *, int);
+//
+//   sd_journal_open_directory = f;
+//   return sd_journal_open_directory(ret, path, flags);
+// }
+//
+// int
+// my_sd_journal_open_files(void *f, sd_journal **ret, const char **paths, int flags)
+// {
+//   int (*sd_journal_open_files)(sd_journal **, const char **, int);
+//
+//   sd_journal_open_files = f;
+//   return sd_journal_open_files(ret, paths, flags);
+// }
+//
+// void
+// my_sd_journal_close(void *f, sd_journal *j)
+// {
+//   int (*sd_journal_close)(sd_journal *);
+//
+//   sd_journal_close = f;
+//   sd_journal_close(j);
+// }
+//
+// int
+// my_sd_journal_get_usage(void *f, sd_journal *j, uint64_t *bytes)
+// {
+//   int (*sd_journal_get_usage)(sd_journal *, uint64_t *);
+//
+//   sd_journal_get_usage = f;
+//   return sd_journal_get_usage(j, bytes);
+// }
+//
+// int
+// my_sd_journal_add_match(void *f, sd_journal *j, const void *data, size_t size)
+// {
+//   int (*sd_journal_add_match)(sd_journal *, const void *, size_t);
+//
+//   sd_journal_add_match = f;
+//   return sd_journal_add_match(j, data, size);
+// }
+//
+// int
+// my_sd_journal_add_disjunction(void *f, sd_journal *j)
+// {
+//   int (*sd_journal_add_disjunction)(sd_journal *);
+//
+//   sd_journal_add_disjunction = f;
+//   return sd_journal_add_disjunction(j);
+// }
+//
+// int
+// my_sd_journal_add_conjunction(void *f, sd_journal *j)
+// {
+//   int (*sd_journal_add_conjunction)(sd_journal *);
+//
+//   sd_journal_add_conjunction = f;
+//   return sd_journal_add_conjunction(j);
+// }
+//
+// void
+// my_sd_journal_flush_matches(void *f, sd_journal *j)
+// {
+//   int (*sd_journal_flush_matches)(sd_journal *);
+//
+//   sd_journal_flush_matches = f;
+//   sd_journal_flush_matches(j);
+// }
+//
+// int
+// my_sd_journal_next(void *f, sd_journal *j)
+// {
+//   int (*sd_journal_next)(sd_journal *);
+//
+//   sd_journal_next = f;
+//   return sd_journal_next(j);
+// }
+//
+// int
+// my_sd_journal_next_skip(void *f, sd_journal *j, uint64_t skip)
+// {
+//   int (*sd_journal_next_skip)(sd_journal *, uint64_t);
+//
+//   sd_journal_next_skip = f;
+//   return sd_journal_next_skip(j, skip);
+// }
+//
+// int
+// my_sd_journal_previous(void *f, sd_journal *j)
+// {
+//   int (*sd_journal_previous)(sd_journal *);
+//
+//   sd_journal_previous = f;
+//   return sd_journal_previous(j);
+// }
+//
+// int
+// my_sd_journal_previous_skip(void *f, sd_journal *j, uint64_t skip)
+// {
+//   int (*sd_journal_previous_skip)(sd_journal *, uint64_t);
+//
+//   sd_journal_previous_skip = f;
+//   return sd_journal_previous_skip(j, skip);
+// }
+//
+// int
+// my_sd_journal_get_data(void *f, sd_journal *j, const char *field, const void **data, size_t *length)
+// {
+//   int (*sd_journal_get_data)(sd_journal *, const char *, const void **, size_t *);
+//
+//   sd_journal_get_data = f;
+//   return sd_journal_get_data(j, field, data, length);
+// }
+//
+// int
+// my_sd_journal_set_data_threshold(void *f, sd_journal *j, size_t sz)
+// {
+//   int (*sd_journal_set_data_threshold)(sd_journal *, size_t);
+//
+//
sd_journal_set_data_threshold = f; +// return sd_journal_set_data_threshold(j, sz); +// } +// +// int +// my_sd_journal_get_cursor(void *f, sd_journal *j, char **cursor) +// { +// int (*sd_journal_get_cursor)(sd_journal *, char **); +// +// sd_journal_get_cursor = f; +// return sd_journal_get_cursor(j, cursor); +// } +// +// int +// my_sd_journal_test_cursor(void *f, sd_journal *j, const char *cursor) +// { +// int (*sd_journal_test_cursor)(sd_journal *, const char *); +// +// sd_journal_test_cursor = f; +// return sd_journal_test_cursor(j, cursor); +// } +// +// int +// my_sd_journal_get_realtime_usec(void *f, sd_journal *j, uint64_t *usec) +// { +// int (*sd_journal_get_realtime_usec)(sd_journal *, uint64_t *); +// +// sd_journal_get_realtime_usec = f; +// return sd_journal_get_realtime_usec(j, usec); +// } +// +// int +// my_sd_journal_get_monotonic_usec(void *f, sd_journal *j, uint64_t *usec, sd_id128_t *boot_id) +// { +// int (*sd_journal_get_monotonic_usec)(sd_journal *, uint64_t *, sd_id128_t *); +// +// sd_journal_get_monotonic_usec = f; +// return sd_journal_get_monotonic_usec(j, usec, boot_id); +// } +// +// int +// my_sd_journal_seek_head(void *f, sd_journal *j) +// { +// int (*sd_journal_seek_head)(sd_journal *); +// +// sd_journal_seek_head = f; +// return sd_journal_seek_head(j); +// } +// +// int +// my_sd_journal_seek_tail(void *f, sd_journal *j) +// { +// int (*sd_journal_seek_tail)(sd_journal *); +// +// sd_journal_seek_tail = f; +// return sd_journal_seek_tail(j); +// } +// +// +// int +// my_sd_journal_seek_cursor(void *f, sd_journal *j, const char *cursor) +// { +// int (*sd_journal_seek_cursor)(sd_journal *, const char *); +// +// sd_journal_seek_cursor = f; +// return sd_journal_seek_cursor(j, cursor); +// } +// +// int +// my_sd_journal_seek_realtime_usec(void *f, sd_journal *j, uint64_t usec) +// { +// int (*sd_journal_seek_realtime_usec)(sd_journal *, uint64_t); +// +// sd_journal_seek_realtime_usec = f; +// return sd_journal_seek_realtime_usec(j, usec); +// } +// +// int +// my_sd_journal_wait(void *f, sd_journal *j, uint64_t timeout_usec) +// { +// int (*sd_journal_wait)(sd_journal *, uint64_t); +// +// sd_journal_wait = f; +// return sd_journal_wait(j, timeout_usec); +// } +// +// void +// my_sd_journal_restart_data(void *f, sd_journal *j) +// { +// void (*sd_journal_restart_data)(sd_journal *); +// +// sd_journal_restart_data = f; +// sd_journal_restart_data(j); +// } +// +// int +// my_sd_journal_enumerate_data(void *f, sd_journal *j, const void **data, size_t *length) +// { +// int (*sd_journal_enumerate_data)(sd_journal *, const void **, size_t *); +// +// sd_journal_enumerate_data = f; +// return sd_journal_enumerate_data(j, data, length); +// } +// +// int +// my_sd_journal_query_unique(void *f, sd_journal *j, const char *field) +// { +// int(*sd_journal_query_unique)(sd_journal *, const char *); +// +// sd_journal_query_unique = f; +// return sd_journal_query_unique(j, field); +// } +// +// int +// my_sd_journal_enumerate_unique(void *f, sd_journal *j, const void **data, size_t *length) +// { +// int(*sd_journal_enumerate_unique)(sd_journal *, const void **, size_t *); +// +// sd_journal_enumerate_unique = f; +// return sd_journal_enumerate_unique(j, data, length); +// } +// +// void +// my_sd_journal_restart_unique(void *f, sd_journal *j) +// { +// void(*sd_journal_restart_unique)(sd_journal *); +// +// sd_journal_restart_unique = f; +// sd_journal_restart_unique(j); +// } +// +// int +// my_sd_journal_get_catalog(void *f, sd_journal *j, char **ret) +// 
{ +// int(*sd_journal_get_catalog)(sd_journal *, char **); +// +// sd_journal_get_catalog = f; +// return sd_journal_get_catalog(j, ret); +// } +// +import "C" +import ( + "bytes" + "errors" + "fmt" + "strings" + "sync" + "syscall" + "time" + "unsafe" +) + +// Journal entry field strings which correspond to: +// http://www.freedesktop.org/software/systemd/man/systemd.journal-fields.html +const ( + // User Journal Fields + SD_JOURNAL_FIELD_MESSAGE = "MESSAGE" + SD_JOURNAL_FIELD_MESSAGE_ID = "MESSAGE_ID" + SD_JOURNAL_FIELD_PRIORITY = "PRIORITY" + SD_JOURNAL_FIELD_CODE_FILE = "CODE_FILE" + SD_JOURNAL_FIELD_CODE_LINE = "CODE_LINE" + SD_JOURNAL_FIELD_CODE_FUNC = "CODE_FUNC" + SD_JOURNAL_FIELD_ERRNO = "ERRNO" + SD_JOURNAL_FIELD_SYSLOG_FACILITY = "SYSLOG_FACILITY" + SD_JOURNAL_FIELD_SYSLOG_IDENTIFIER = "SYSLOG_IDENTIFIER" + SD_JOURNAL_FIELD_SYSLOG_PID = "SYSLOG_PID" + + // Trusted Journal Fields + SD_JOURNAL_FIELD_PID = "_PID" + SD_JOURNAL_FIELD_UID = "_UID" + SD_JOURNAL_FIELD_GID = "_GID" + SD_JOURNAL_FIELD_COMM = "_COMM" + SD_JOURNAL_FIELD_EXE = "_EXE" + SD_JOURNAL_FIELD_CMDLINE = "_CMDLINE" + SD_JOURNAL_FIELD_CAP_EFFECTIVE = "_CAP_EFFECTIVE" + SD_JOURNAL_FIELD_AUDIT_SESSION = "_AUDIT_SESSION" + SD_JOURNAL_FIELD_AUDIT_LOGINUID = "_AUDIT_LOGINUID" + SD_JOURNAL_FIELD_SYSTEMD_CGROUP = "_SYSTEMD_CGROUP" + SD_JOURNAL_FIELD_SYSTEMD_SESSION = "_SYSTEMD_SESSION" + SD_JOURNAL_FIELD_SYSTEMD_UNIT = "_SYSTEMD_UNIT" + SD_JOURNAL_FIELD_SYSTEMD_USER_UNIT = "_SYSTEMD_USER_UNIT" + SD_JOURNAL_FIELD_SYSTEMD_OWNER_UID = "_SYSTEMD_OWNER_UID" + SD_JOURNAL_FIELD_SYSTEMD_SLICE = "_SYSTEMD_SLICE" + SD_JOURNAL_FIELD_SELINUX_CONTEXT = "_SELINUX_CONTEXT" + SD_JOURNAL_FIELD_SOURCE_REALTIME_TIMESTAMP = "_SOURCE_REALTIME_TIMESTAMP" + SD_JOURNAL_FIELD_BOOT_ID = "_BOOT_ID" + SD_JOURNAL_FIELD_MACHINE_ID = "_MACHINE_ID" + SD_JOURNAL_FIELD_HOSTNAME = "_HOSTNAME" + SD_JOURNAL_FIELD_TRANSPORT = "_TRANSPORT" + + // Address Fields + SD_JOURNAL_FIELD_CURSOR = "__CURSOR" + SD_JOURNAL_FIELD_REALTIME_TIMESTAMP = "__REALTIME_TIMESTAMP" + SD_JOURNAL_FIELD_MONOTONIC_TIMESTAMP = "__MONOTONIC_TIMESTAMP" +) + +// Journal event constants +const ( + SD_JOURNAL_NOP = int(C.SD_JOURNAL_NOP) + SD_JOURNAL_APPEND = int(C.SD_JOURNAL_APPEND) + SD_JOURNAL_INVALIDATE = int(C.SD_JOURNAL_INVALIDATE) +) + +const ( + // IndefiniteWait is a sentinel value that can be passed to + // sdjournal.Wait() to signal an indefinite wait for new journal + // events. It is implemented as the maximum value for a time.Duration: + // https://github.com/golang/go/blob/e4dcf5c8c22d98ac9eac7b9b226596229624cb1d/src/time/time.go#L434 + IndefiniteWait time.Duration = 1<<63 - 1 +) + +var ( + // ErrNoTestCursor gets returned when using TestCursor function and cursor + // parameter is not the same as the current cursor position. + ErrNoTestCursor = errors.New("Cursor parameter is not the same as current position") +) + +// Journal is a Go wrapper of an sd_journal structure. +type Journal struct { + cjournal *C.sd_journal + mu sync.Mutex +} + +// JournalEntry represents all fields of a journal entry plus address fields. +type JournalEntry struct { + Fields map[string]string + Cursor string + RealtimeTimestamp uint64 + MonotonicTimestamp uint64 +} + +// Match is a convenience wrapper to describe filters supplied to AddMatch. +type Match struct { + Field string + Value string +} + +// String returns a string representation of a Match suitable for use with AddMatch. 
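+// For example, (&Match{Field: SD_JOURNAL_FIELD_SYSTEMD_UNIT, Value: "sshd.service"}).String()
+// yields "_SYSTEMD_UNIT=sshd.service", where "sshd.service" is only an illustrative
+// unit name; the resulting string can be passed directly to AddMatch.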
+func (m *Match) String() string { + return m.Field + "=" + m.Value +} + +// NewJournal returns a new Journal instance pointing to the local journal +func NewJournal() (j *Journal, err error) { + j = &Journal{} + + sd_journal_open, err := getFunction("sd_journal_open") + if err != nil { + return nil, err + } + + r := C.my_sd_journal_open(sd_journal_open, &j.cjournal, C.SD_JOURNAL_LOCAL_ONLY) + + if r < 0 { + return nil, fmt.Errorf("failed to open journal: %d", syscall.Errno(-r)) + } + + return j, nil +} + +// NewJournalFromDir returns a new Journal instance pointing to a journal residing +// in a given directory. +func NewJournalFromDir(path string) (j *Journal, err error) { + j = &Journal{} + + sd_journal_open_directory, err := getFunction("sd_journal_open_directory") + if err != nil { + return nil, err + } + + p := C.CString(path) + defer C.free(unsafe.Pointer(p)) + + r := C.my_sd_journal_open_directory(sd_journal_open_directory, &j.cjournal, p, 0) + if r < 0 { + return nil, fmt.Errorf("failed to open journal in directory %q: %d", path, syscall.Errno(-r)) + } + + return j, nil +} + +// NewJournalFromFiles returns a new Journal instance pointing to a journals residing +// in a given files. +func NewJournalFromFiles(paths ...string) (j *Journal, err error) { + j = &Journal{} + + sd_journal_open_files, err := getFunction("sd_journal_open_files") + if err != nil { + return nil, err + } + + // by making the slice 1 elem too long, we guarantee it'll be null-terminated + cPaths := make([]*C.char, len(paths)+1) + for idx, path := range paths { + p := C.CString(path) + cPaths[idx] = p + defer C.free(unsafe.Pointer(p)) + } + + r := C.my_sd_journal_open_files(sd_journal_open_files, &j.cjournal, &cPaths[0], 0) + if r < 0 { + return nil, fmt.Errorf("failed to open journals in paths %q: %d", paths, syscall.Errno(-r)) + } + + return j, nil +} + +// Close closes a journal opened with NewJournal. +func (j *Journal) Close() error { + sd_journal_close, err := getFunction("sd_journal_close") + if err != nil { + return err + } + + j.mu.Lock() + C.my_sd_journal_close(sd_journal_close, j.cjournal) + j.mu.Unlock() + + return nil +} + +// AddMatch adds a match by which to filter the entries of the journal. +func (j *Journal) AddMatch(match string) error { + sd_journal_add_match, err := getFunction("sd_journal_add_match") + if err != nil { + return err + } + + m := C.CString(match) + defer C.free(unsafe.Pointer(m)) + + j.mu.Lock() + r := C.my_sd_journal_add_match(sd_journal_add_match, j.cjournal, unsafe.Pointer(m), C.size_t(len(match))) + j.mu.Unlock() + + if r < 0 { + return fmt.Errorf("failed to add match: %d", syscall.Errno(-r)) + } + + return nil +} + +// AddDisjunction inserts a logical OR in the match list. +func (j *Journal) AddDisjunction() error { + sd_journal_add_disjunction, err := getFunction("sd_journal_add_disjunction") + if err != nil { + return err + } + + j.mu.Lock() + r := C.my_sd_journal_add_disjunction(sd_journal_add_disjunction, j.cjournal) + j.mu.Unlock() + + if r < 0 { + return fmt.Errorf("failed to add a disjunction in the match list: %d", syscall.Errno(-r)) + } + + return nil +} + +// AddConjunction inserts a logical AND in the match list. 
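+// Per sd_journal_add_conjunction(3), match groups separated by this call are
+// combined with a logical AND, whereas AddDisjunction combines them with OR.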
+func (j *Journal) AddConjunction() error { + sd_journal_add_conjunction, err := getFunction("sd_journal_add_conjunction") + if err != nil { + return err + } + + j.mu.Lock() + r := C.my_sd_journal_add_conjunction(sd_journal_add_conjunction, j.cjournal) + j.mu.Unlock() + + if r < 0 { + return fmt.Errorf("failed to add a conjunction in the match list: %d", syscall.Errno(-r)) + } + + return nil +} + +// FlushMatches flushes all matches, disjunctions and conjunctions. +func (j *Journal) FlushMatches() { + sd_journal_flush_matches, err := getFunction("sd_journal_flush_matches") + if err != nil { + return + } + + j.mu.Lock() + C.my_sd_journal_flush_matches(sd_journal_flush_matches, j.cjournal) + j.mu.Unlock() +} + +// Next advances the read pointer into the journal by one entry. +func (j *Journal) Next() (uint64, error) { + sd_journal_next, err := getFunction("sd_journal_next") + if err != nil { + return 0, err + } + + j.mu.Lock() + r := C.my_sd_journal_next(sd_journal_next, j.cjournal) + j.mu.Unlock() + + if r < 0 { + return 0, fmt.Errorf("failed to iterate journal: %d", syscall.Errno(-r)) + } + + return uint64(r), nil +} + +// NextSkip advances the read pointer by multiple entries at once, +// as specified by the skip parameter. +func (j *Journal) NextSkip(skip uint64) (uint64, error) { + sd_journal_next_skip, err := getFunction("sd_journal_next_skip") + if err != nil { + return 0, err + } + + j.mu.Lock() + r := C.my_sd_journal_next_skip(sd_journal_next_skip, j.cjournal, C.uint64_t(skip)) + j.mu.Unlock() + + if r < 0 { + return 0, fmt.Errorf("failed to iterate journal: %d", syscall.Errno(-r)) + } + + return uint64(r), nil +} + +// Previous sets the read pointer into the journal back by one entry. +func (j *Journal) Previous() (uint64, error) { + sd_journal_previous, err := getFunction("sd_journal_previous") + if err != nil { + return 0, err + } + + j.mu.Lock() + r := C.my_sd_journal_previous(sd_journal_previous, j.cjournal) + j.mu.Unlock() + + if r < 0 { + return 0, fmt.Errorf("failed to iterate journal: %d", syscall.Errno(-r)) + } + + return uint64(r), nil +} + +// PreviousSkip sets back the read pointer by multiple entries at once, +// as specified by the skip parameter. +func (j *Journal) PreviousSkip(skip uint64) (uint64, error) { + sd_journal_previous_skip, err := getFunction("sd_journal_previous_skip") + if err != nil { + return 0, err + } + + j.mu.Lock() + r := C.my_sd_journal_previous_skip(sd_journal_previous_skip, j.cjournal, C.uint64_t(skip)) + j.mu.Unlock() + + if r < 0 { + return 0, fmt.Errorf("failed to iterate journal: %d", syscall.Errno(-r)) + } + + return uint64(r), nil +} + +func (j *Journal) getData(field string) (unsafe.Pointer, C.int, error) { + sd_journal_get_data, err := getFunction("sd_journal_get_data") + if err != nil { + return nil, 0, err + } + + f := C.CString(field) + defer C.free(unsafe.Pointer(f)) + + var d unsafe.Pointer + var l C.size_t + + j.mu.Lock() + r := C.my_sd_journal_get_data(sd_journal_get_data, j.cjournal, f, &d, &l) + j.mu.Unlock() + + if r < 0 { + return nil, 0, fmt.Errorf("failed to read message: %d", syscall.Errno(-r)) + } + + return d, C.int(l), nil +} + +// GetData gets the data object associated with a specific field from the +// the journal entry referenced by the last completed Next/Previous function +// call. To call GetData, you must have first called one of these functions. 
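+// For example, after a successful Next(), GetData("MESSAGE") returns the whole
+// "MESSAGE=<text>" pair, while GetDataValue("MESSAGE") below returns only "<text>".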
+func (j *Journal) GetData(field string) (string, error) { + d, l, err := j.getData(field) + if err != nil { + return "", err + } + + return C.GoStringN((*C.char)(d), l), nil +} + +// GetDataValue gets the data object associated with a specific field from the +// journal entry referenced by the last completed Next/Previous function call, +// returning only the value of the object. To call GetDataValue, you must first +// have called one of the Next/Previous functions. +func (j *Journal) GetDataValue(field string) (string, error) { + val, err := j.GetData(field) + if err != nil { + return "", err + } + + return strings.SplitN(val, "=", 2)[1], nil +} + +// GetDataBytes gets the data object associated with a specific field from the +// journal entry referenced by the last completed Next/Previous function call. +// To call GetDataBytes, you must first have called one of these functions. +func (j *Journal) GetDataBytes(field string) ([]byte, error) { + d, l, err := j.getData(field) + if err != nil { + return nil, err + } + + return C.GoBytes(d, l), nil +} + +// GetDataValueBytes gets the data object associated with a specific field from the +// journal entry referenced by the last completed Next/Previous function call, +// returning only the value of the object. To call GetDataValueBytes, you must first +// have called one of the Next/Previous functions. +func (j *Journal) GetDataValueBytes(field string) ([]byte, error) { + val, err := j.GetDataBytes(field) + if err != nil { + return nil, err + } + + return bytes.SplitN(val, []byte("="), 2)[1], nil +} + +// GetEntry returns a full representation of the journal entry referenced by the +// last completed Next/Previous function call, with all key-value pairs of data +// as well as address fields (cursor, realtime timestamp and monotonic timestamp). +// To call GetEntry, you must first have called one of the Next/Previous functions. 
+func (j *Journal) GetEntry() (*JournalEntry, error) { + sd_journal_get_realtime_usec, err := getFunction("sd_journal_get_realtime_usec") + if err != nil { + return nil, err + } + + sd_journal_get_monotonic_usec, err := getFunction("sd_journal_get_monotonic_usec") + if err != nil { + return nil, err + } + + sd_journal_get_cursor, err := getFunction("sd_journal_get_cursor") + if err != nil { + return nil, err + } + + sd_journal_restart_data, err := getFunction("sd_journal_restart_data") + if err != nil { + return nil, err + } + + sd_journal_enumerate_data, err := getFunction("sd_journal_enumerate_data") + if err != nil { + return nil, err + } + + j.mu.Lock() + defer j.mu.Unlock() + + var r C.int + entry := &JournalEntry{Fields: make(map[string]string)} + + var realtimeUsec C.uint64_t + r = C.my_sd_journal_get_realtime_usec(sd_journal_get_realtime_usec, j.cjournal, &realtimeUsec) + if r < 0 { + return nil, fmt.Errorf("failed to get realtime timestamp: %d", syscall.Errno(-r)) + } + + entry.RealtimeTimestamp = uint64(realtimeUsec) + + var monotonicUsec C.uint64_t + var boot_id C.sd_id128_t + + r = C.my_sd_journal_get_monotonic_usec(sd_journal_get_monotonic_usec, j.cjournal, &monotonicUsec, &boot_id) + if r < 0 { + return nil, fmt.Errorf("failed to get monotonic timestamp: %d", syscall.Errno(-r)) + } + + entry.MonotonicTimestamp = uint64(monotonicUsec) + + var c *C.char + // since the pointer is mutated by sd_journal_get_cursor, need to wait + // until after the call to free the memory + r = C.my_sd_journal_get_cursor(sd_journal_get_cursor, j.cjournal, &c) + defer C.free(unsafe.Pointer(c)) + if r < 0 { + return nil, fmt.Errorf("failed to get cursor: %d", syscall.Errno(-r)) + } + + entry.Cursor = C.GoString(c) + + // Implements the JOURNAL_FOREACH_DATA_RETVAL macro from journal-internal.h + var d unsafe.Pointer + var l C.size_t + C.my_sd_journal_restart_data(sd_journal_restart_data, j.cjournal) + for { + r = C.my_sd_journal_enumerate_data(sd_journal_enumerate_data, j.cjournal, &d, &l) + if r == 0 { + break + } + + if r < 0 { + return nil, fmt.Errorf("failed to read message field: %d", syscall.Errno(-r)) + } + + msg := C.GoStringN((*C.char)(d), C.int(l)) + kv := strings.SplitN(msg, "=", 2) + if len(kv) < 2 { + return nil, fmt.Errorf("failed to parse field") + } + + entry.Fields[kv[0]] = kv[1] + } + + return entry, nil +} + +// SetDataThreshold sets the data field size threshold for data returned by +// GetData. To retrieve the complete data fields this threshold should be +// turned off by setting it to 0, so that the library always returns the +// complete data objects. +func (j *Journal) SetDataThreshold(threshold uint64) error { + sd_journal_set_data_threshold, err := getFunction("sd_journal_set_data_threshold") + if err != nil { + return err + } + + j.mu.Lock() + r := C.my_sd_journal_set_data_threshold(sd_journal_set_data_threshold, j.cjournal, C.size_t(threshold)) + j.mu.Unlock() + + if r < 0 { + return fmt.Errorf("failed to set data threshold: %d", syscall.Errno(-r)) + } + + return nil +} + +// GetRealtimeUsec gets the realtime (wallclock) timestamp of the journal +// entry referenced by the last completed Next/Previous function call. To +// call GetRealtimeUsec, you must first have called one of the Next/Previous +// functions. 
+func (j *Journal) GetRealtimeUsec() (uint64, error) {
+	var usec C.uint64_t
+
+	sd_journal_get_realtime_usec, err := getFunction("sd_journal_get_realtime_usec")
+	if err != nil {
+		return 0, err
+	}
+
+	j.mu.Lock()
+	r := C.my_sd_journal_get_realtime_usec(sd_journal_get_realtime_usec, j.cjournal, &usec)
+	j.mu.Unlock()
+
+	if r < 0 {
+		return 0, fmt.Errorf("failed to get realtime timestamp: %d", syscall.Errno(-r))
+	}
+
+	return uint64(usec), nil
+}
+
+// GetMonotonicUsec gets the monotonic timestamp of the journal entry
+// referenced by the last completed Next/Previous function call. To call
+// GetMonotonicUsec, you must first have called one of the Next/Previous
+// functions.
+func (j *Journal) GetMonotonicUsec() (uint64, error) {
+	var usec C.uint64_t
+	var boot_id C.sd_id128_t
+
+	sd_journal_get_monotonic_usec, err := getFunction("sd_journal_get_monotonic_usec")
+	if err != nil {
+		return 0, err
+	}
+
+	j.mu.Lock()
+	r := C.my_sd_journal_get_monotonic_usec(sd_journal_get_monotonic_usec, j.cjournal, &usec, &boot_id)
+	j.mu.Unlock()
+
+	if r < 0 {
+		return 0, fmt.Errorf("failed to get monotonic timestamp: %d", syscall.Errno(-r))
+	}
+
+	return uint64(usec), nil
+}
+
+// GetCursor gets the cursor of the last journal entry referenced by the
+// last completed Next/Previous function call. To call GetCursor, you must
+// first have called one of the Next/Previous functions.
+func (j *Journal) GetCursor() (string, error) {
+	sd_journal_get_cursor, err := getFunction("sd_journal_get_cursor")
+	if err != nil {
+		return "", err
+	}
+
+	var d *C.char
+	// since the pointer is mutated by sd_journal_get_cursor, need to wait
+	// until after the call to free the memory
+
+	j.mu.Lock()
+	r := C.my_sd_journal_get_cursor(sd_journal_get_cursor, j.cjournal, &d)
+	j.mu.Unlock()
+	defer C.free(unsafe.Pointer(d))
+
+	if r < 0 {
+		return "", fmt.Errorf("failed to get cursor: %d", syscall.Errno(-r))
+	}
+
+	cursor := C.GoString(d)
+
+	return cursor, nil
+}
+
+// TestCursor checks whether the current position in the journal matches the
+// specified cursor.
+func (j *Journal) TestCursor(cursor string) error {
+	sd_journal_test_cursor, err := getFunction("sd_journal_test_cursor")
+	if err != nil {
+		return err
+	}
+
+	c := C.CString(cursor)
+	defer C.free(unsafe.Pointer(c))
+
+	j.mu.Lock()
+	r := C.my_sd_journal_test_cursor(sd_journal_test_cursor, j.cjournal, c)
+	j.mu.Unlock()
+
+	if r < 0 {
+		return fmt.Errorf("failed to test cursor %q: %d", cursor, syscall.Errno(-r))
+	} else if r == 0 {
+		return ErrNoTestCursor
+	}
+
+	return nil
+}
+
+// SeekHead seeks to the beginning of the journal, i.e. the oldest available
+// entry. This call must be followed by a call to Next before any call to
+// Get* will return data about the first element.
+func (j *Journal) SeekHead() error {
+	sd_journal_seek_head, err := getFunction("sd_journal_seek_head")
+	if err != nil {
+		return err
+	}
+
+	j.mu.Lock()
+	r := C.my_sd_journal_seek_head(sd_journal_seek_head, j.cjournal)
+	j.mu.Unlock()
+
+	if r < 0 {
+		return fmt.Errorf("failed to seek to head of journal: %d", syscall.Errno(-r))
+	}
+
+	return nil
+}
+
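+// exampleResumeFromCursor is an illustrative sketch, not part of the vendored
+// API: it assumes a cursor string previously saved by the caller (for example
+// in a checkpoint file) and shows how SeekCursor, Next and TestCursor combine
+// to resume reading from a persisted position.
+func exampleResumeFromCursor(j *Journal, saved string) error {
+	if err := j.SeekCursor(saved); err != nil {
+		return err
+	}
+	// SeekCursor only positions the read pointer; Next must be called before
+	// the entry can be inspected.
+	if _, err := j.Next(); err != nil {
+		return err
+	}
+	// TestCursor returns ErrNoTestCursor if the entry at the saved position
+	// has been rotated away in the meantime.
+	if err := j.TestCursor(saved); err != nil {
+		return err
+	}
+
+	cursor, err := j.GetCursor()
+	if err != nil {
+		return err
+	}
+	fmt.Println("resumed at", cursor)
+	return nil
+}
+
+// SeekTail may be used to seek to the end of the journal, i.e. the most recent
+// available entry. This call must be followed by a call to Next before any
+// call to Get* will return data about the last element.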
+func (j *Journal) SeekTail() error { + sd_journal_seek_tail, err := getFunction("sd_journal_seek_tail") + if err != nil { + return err + } + + j.mu.Lock() + r := C.my_sd_journal_seek_tail(sd_journal_seek_tail, j.cjournal) + j.mu.Unlock() + + if r < 0 { + return fmt.Errorf("failed to seek to tail of journal: %d", syscall.Errno(-r)) + } + + return nil +} + +// SeekRealtimeUsec seeks to the entry with the specified realtime (wallclock) +// timestamp, i.e. CLOCK_REALTIME. This call must be followed by a call to +// Next/Previous before any call to Get* will return data about the sought entry. +func (j *Journal) SeekRealtimeUsec(usec uint64) error { + sd_journal_seek_realtime_usec, err := getFunction("sd_journal_seek_realtime_usec") + if err != nil { + return err + } + + j.mu.Lock() + r := C.my_sd_journal_seek_realtime_usec(sd_journal_seek_realtime_usec, j.cjournal, C.uint64_t(usec)) + j.mu.Unlock() + + if r < 0 { + return fmt.Errorf("failed to seek to %d: %d", usec, syscall.Errno(-r)) + } + + return nil +} + +// SeekCursor seeks to a concrete journal cursor. This call must be +// followed by a call to Next/Previous before any call to Get* will return +// data about the sought entry. +func (j *Journal) SeekCursor(cursor string) error { + sd_journal_seek_cursor, err := getFunction("sd_journal_seek_cursor") + if err != nil { + return err + } + + c := C.CString(cursor) + defer C.free(unsafe.Pointer(c)) + + j.mu.Lock() + r := C.my_sd_journal_seek_cursor(sd_journal_seek_cursor, j.cjournal, c) + j.mu.Unlock() + + if r < 0 { + return fmt.Errorf("failed to seek to cursor %q: %d", cursor, syscall.Errno(-r)) + } + + return nil +} + +// Wait will synchronously wait until the journal gets changed. The maximum time +// this call sleeps may be controlled with the timeout parameter. If +// sdjournal.IndefiniteWait is passed as the timeout parameter, Wait will +// wait indefinitely for a journal change. +func (j *Journal) Wait(timeout time.Duration) int { + var to uint64 + + sd_journal_wait, err := getFunction("sd_journal_wait") + if err != nil { + return -1 + } + + if timeout == IndefiniteWait { + // sd_journal_wait(3) calls for a (uint64_t) -1 to be passed to signify + // indefinite wait, but using a -1 overflows our C.uint64_t, so we use an + // equivalent hex value. + to = 0xffffffffffffffff + } else { + to = uint64(timeout / time.Microsecond) + } + j.mu.Lock() + r := C.my_sd_journal_wait(sd_journal_wait, j.cjournal, C.uint64_t(to)) + j.mu.Unlock() + + return int(r) +} + +// GetUsage returns the journal disk space usage, in bytes. +func (j *Journal) GetUsage() (uint64, error) { + var out C.uint64_t + + sd_journal_get_usage, err := getFunction("sd_journal_get_usage") + if err != nil { + return 0, err + } + + j.mu.Lock() + r := C.my_sd_journal_get_usage(sd_journal_get_usage, j.cjournal, &out) + j.mu.Unlock() + + if r < 0 { + return 0, fmt.Errorf("failed to get journal disk space usage: %d", syscall.Errno(-r)) + } + + return uint64(out), nil +} + +// GetUniqueValues returns all unique values for a given field. 
+func (j *Journal) GetUniqueValues(field string) ([]string, error) { + var result []string + + sd_journal_query_unique, err := getFunction("sd_journal_query_unique") + if err != nil { + return nil, err + } + + sd_journal_enumerate_unique, err := getFunction("sd_journal_enumerate_unique") + if err != nil { + return nil, err + } + + sd_journal_restart_unique, err := getFunction("sd_journal_restart_unique") + if err != nil { + return nil, err + } + + j.mu.Lock() + defer j.mu.Unlock() + + f := C.CString(field) + defer C.free(unsafe.Pointer(f)) + + r := C.my_sd_journal_query_unique(sd_journal_query_unique, j.cjournal, f) + + if r < 0 { + return nil, fmt.Errorf("failed to query journal: %d", syscall.Errno(-r)) + } + + // Implements the SD_JOURNAL_FOREACH_UNIQUE macro from sd-journal.h + var d unsafe.Pointer + var l C.size_t + C.my_sd_journal_restart_unique(sd_journal_restart_unique, j.cjournal) + for { + r = C.my_sd_journal_enumerate_unique(sd_journal_enumerate_unique, j.cjournal, &d, &l) + if r == 0 { + break + } + + if r < 0 { + return nil, fmt.Errorf("failed to read message field: %d", syscall.Errno(-r)) + } + + msg := C.GoStringN((*C.char)(d), C.int(l)) + kv := strings.SplitN(msg, "=", 2) + if len(kv) < 2 { + return nil, fmt.Errorf("failed to parse field") + } + + result = append(result, kv[1]) + } + + return result, nil +} + +// GetCatalog retrieves a message catalog entry for the journal entry referenced +// by the last completed Next/Previous function call. To call GetCatalog, you +// must first have called one of these functions. +func (j *Journal) GetCatalog() (string, error) { + sd_journal_get_catalog, err := getFunction("sd_journal_get_catalog") + if err != nil { + return "", err + } + + var c *C.char + + j.mu.Lock() + r := C.my_sd_journal_get_catalog(sd_journal_get_catalog, j.cjournal, &c) + j.mu.Unlock() + defer C.free(unsafe.Pointer(c)) + + if r < 0 { + return "", fmt.Errorf("failed to retrieve catalog entry for current journal entry: %d", syscall.Errno(-r)) + } + + catalog := C.GoString(c) + + return catalog, nil +} diff --git a/vendor/github.com/coreos/go-systemd/sdjournal/read.go b/vendor/github.com/coreos/go-systemd/sdjournal/read.go new file mode 100644 index 000000000000..51a060fb530f --- /dev/null +++ b/vendor/github.com/coreos/go-systemd/sdjournal/read.go @@ -0,0 +1,272 @@ +// Copyright 2015 RedHat, Inc. +// Copyright 2015 CoreOS, Inc. +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +package sdjournal + +import ( + "errors" + "fmt" + "io" + "log" + "strings" + "sync" + "time" +) + +var ( + // ErrExpired gets returned when the Follow function runs into the + // specified timeout. + ErrExpired = errors.New("Timeout expired") +) + +// JournalReaderConfig represents options to drive the behavior of a JournalReader. +type JournalReaderConfig struct { + // The Since, NumFromTail and Cursor options are mutually exclusive and + // determine where the reading begins within the journal. 
The order in which
+	// options are written is exactly the order of precedence.
+	Since       time.Duration // start relative to a Duration from now
+	NumFromTail uint64        // start relative to the tail
+	Cursor      string        // start relative to the cursor
+
+	// Show only journal entries whose fields match the supplied values. If
+	// the array is empty, entries will not be filtered.
+	Matches []Match
+
+	// If not empty, the journal instance will point to a journal residing
+	// in this directory. The supplied path may be relative or absolute.
+	Path string
+
+	// If not nil, Formatter will be used to translate the resulting entries
+	// into strings. If not set, the default format (timestamp and message field)
+	// will be used. If Formatter returns an error, Read will stop and return the error.
+	Formatter func(entry *JournalEntry) (string, error)
+}
+
+// JournalReader is an io.ReadCloser which provides a simple interface for iterating through the
+// systemd journal. A JournalReader is not safe for concurrent use by multiple goroutines.
+type JournalReader struct {
+	journal   *Journal
+	msgReader *strings.Reader
+	formatter func(entry *JournalEntry) (string, error)
+}
+
+// NewJournalReader creates a new JournalReader with configuration options that are similar to the
+// systemd journalctl tool's iteration and filtering features.
+func NewJournalReader(config JournalReaderConfig) (*JournalReader, error) {
+	// use simpleMessageFormatter as default formatter.
+	if config.Formatter == nil {
+		config.Formatter = simpleMessageFormatter
+	}
+
+	r := &JournalReader{
+		formatter: config.Formatter,
+	}
+
+	// Open the journal
+	var err error
+	if config.Path != "" {
+		r.journal, err = NewJournalFromDir(config.Path)
+	} else {
+		r.journal, err = NewJournal()
+	}
+	if err != nil {
+		return nil, err
+	}
+
+	// Add any supplied matches
+	for _, m := range config.Matches {
+		if err = r.journal.AddMatch(m.String()); err != nil {
+			return nil, err
+		}
+	}
+
+	// Set the start position based on options
+	if config.Since != 0 {
+		// Start based on a relative time
+		start := time.Now().Add(config.Since)
+		if err := r.journal.SeekRealtimeUsec(uint64(start.UnixNano() / 1000)); err != nil {
+			return nil, err
+		}
+	} else if config.NumFromTail != 0 {
+		// Start based on a number of lines before the tail
+		if err := r.journal.SeekTail(); err != nil {
+			return nil, err
+		}
+
+		// Move the read pointer into position near the tail. Go one further than
+		// the option so that the initial cursor advancement positions us at the
+		// correct starting point.
+		skip, err := r.journal.PreviousSkip(config.NumFromTail + 1)
+		if err != nil {
+			return nil, err
+		}
+		// If we skipped fewer lines than expected, we have reached journal start.
+		// Thus, we seek to head so that next invocation can read the first line.
+		if skip != config.NumFromTail+1 {
+			if err := r.journal.SeekHead(); err != nil {
+				return nil, err
+			}
+		}
+	} else if config.Cursor != "" {
+		// Start based on a custom cursor
+		if err := r.journal.SeekCursor(config.Cursor); err != nil {
+			return nil, err
+		}
+	}
+
+	return r, nil
+}
+
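+// exampleTailJournal is an illustrative sketch, not part of the vendored API:
+// it assumes a readable local journal and copies the last ten entries to w
+// through the io.Reader interface, using NumFromTail to position the start.
+func exampleTailJournal(w io.Writer) error {
+	r, err := NewJournalReader(JournalReaderConfig{NumFromTail: 10})
+	if err != nil {
+		return err
+	}
+	defer r.Close()
+
+	buf := make([]byte, 64*1<<10)
+	for {
+		n, err := r.Read(buf)
+		if err == io.EOF {
+			return nil // no more entries to read
+		}
+		if err != nil {
+			return err
+		}
+		if _, err := w.Write(buf[:n]); err != nil {
+			return err
+		}
+	}
+}
+
+// Read reads entries from the journal. Read follows the Reader interface so
+// it must be able to read a specific amount of bytes. Journald on the other
+// hand only allows us to read full entries of arbitrary size (without byte
+// granularity). JournalReader is therefore internally buffering entries that
+// don't fit in the read buffer. Callers should keep calling Read until it
+// returns 0 bytes and/or an error.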
+func (r *JournalReader) Read(b []byte) (int, error) {
+	if r.msgReader == nil {
+		// Advance the journal cursor. It has to be called at least once
+		// before reading.
+		c, err := r.journal.Next()
+
+		// An unexpected error
+		if err != nil {
+			return 0, err
+		}
+
+		// EOF detection
+		if c == 0 {
+			return 0, io.EOF
+		}
+
+		entry, err := r.journal.GetEntry()
+		if err != nil {
+			return 0, err
+		}
+
+		// Build a message
+		msg, err := r.formatter(entry)
+		if err != nil {
+			return 0, err
+		}
+		r.msgReader = strings.NewReader(msg)
+	}
+
+	// Copy and return the message
+	sz, err := r.msgReader.Read(b)
+	if err == io.EOF {
+		// The current entry has been fully read. Don't propagate this
+		// EOF, so the next entry can be read at the next Read()
+		// iteration.
+		r.msgReader = nil
+		return sz, nil
+	}
+	if err != nil {
+		return sz, err
+	}
+	if r.msgReader.Len() == 0 {
+		r.msgReader = nil
+	}
+
+	return sz, nil
+}
+
+// Close closes the JournalReader's handle to the journal.
+func (r *JournalReader) Close() error {
+	return r.journal.Close()
+}
+
+// Rewind attempts to rewind the JournalReader to the first entry.
+func (r *JournalReader) Rewind() error {
+	r.msgReader = nil
+	return r.journal.SeekHead()
+}
+
+// Follow synchronously follows the JournalReader, writing each new journal entry to writer. The
+// follow will continue until a single time.Time is received on the until channel.
+func (r *JournalReader) Follow(until <-chan time.Time, writer io.Writer) error {
+
+	// Process journal entries and events. Entries are flushed until the tail or
+	// timeout is reached, and then we wait for new events or the timeout.
+	var msg = make([]byte, 64*1<<(10))
+	var waitCh = make(chan int, 1)
+	var waitGroup sync.WaitGroup
+	defer waitGroup.Wait()
+
+process:
+	for {
+		c, err := r.Read(msg)
+		if err != nil && err != io.EOF {
+			return err
+		}
+
+		select {
+		case <-until:
+			return ErrExpired
+		default:
+		}
+		if c > 0 {
+			if _, err = writer.Write(msg[:c]); err != nil {
+				return err
+			}
+			continue process
+		}
+
+		// We're at the tail, so wait for new events or time out.
+		// Holds journal events to process. Tightly bounded for now unless there's a
+		// reason to unblock the journal watch routine more quickly.
+		for {
+			waitGroup.Add(1)
+			go func() {
+				status := r.journal.Wait(100 * time.Millisecond)
+				waitCh <- status
+				waitGroup.Done()
+			}()
+
+			select {
+			case <-until:
+				return ErrExpired
+			case e := <-waitCh:
+				switch e {
+				case SD_JOURNAL_NOP:
+					// the journal did not change since the last invocation
+				case SD_JOURNAL_APPEND, SD_JOURNAL_INVALIDATE:
+					continue process
+				default:
+					if e < 0 {
+						return fmt.Errorf("received error event: %d", e)
+					}
+
+					log.Printf("received unknown event: %d\n", e)
+				}
+			}
+		}
+	}
+}
+
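+// exampleFollowForMinute is an illustrative sketch, not part of the vendored
+// API: it assumes a readable local journal and streams new entries to w for
+// one minute, treating the ErrExpired returned on timeout as a clean stop.
+func exampleFollowForMinute(w io.Writer) error {
+	r, err := NewJournalReader(JournalReaderConfig{
+		Since: -1 * time.Second, // start just before "now"
+	})
+	if err != nil {
+		return err
+	}
+	defer r.Close()
+
+	if err := r.Follow(time.After(time.Minute), w); err != ErrExpired {
+		return err
+	}
+	return nil
+}
+
+// simpleMessageFormatter is the default formatter.
+// It returns a string representing the current journal entry in a simple format which
+// includes the entry timestamp and MESSAGE field.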
+func simpleMessageFormatter(entry *JournalEntry) (string, error) { + msg, ok := entry.Fields["MESSAGE"] + if !ok { + return "", fmt.Errorf("no MESSAGE field present in journal entry") + } + + usec := entry.RealtimeTimestamp + timestamp := time.Unix(0, int64(usec)*int64(time.Microsecond)) + + return fmt.Sprintf("%s %s\n", timestamp, msg), nil +} diff --git a/vendor/github.com/coreos/pkg/CONTRIBUTING.md b/vendor/github.com/coreos/pkg/CONTRIBUTING.md new file mode 100644 index 000000000000..6662073a848e --- /dev/null +++ b/vendor/github.com/coreos/pkg/CONTRIBUTING.md @@ -0,0 +1,71 @@ +# How to Contribute + +CoreOS projects are [Apache 2.0 licensed](LICENSE) and accept contributions via +GitHub pull requests. This document outlines some of the conventions on +development workflow, commit message formatting, contact points and other +resources to make it easier to get your contribution accepted. + +# Certificate of Origin + +By contributing to this project you agree to the Developer Certificate of +Origin (DCO). This document was created by the Linux Kernel community and is a +simple statement that you, as a contributor, have the legal right to make the +contribution. See the [DCO](DCO) file for details. + +# Email and Chat + +The project currently uses the general CoreOS email list and IRC channel: +- Email: [coreos-dev](https://groups.google.com/forum/#!forum/coreos-dev) +- IRC: #[coreos](irc://irc.freenode.org:6667/#coreos) IRC channel on freenode.org + +Please avoid emailing maintainers found in the MAINTAINERS file directly. They +are very busy and read the mailing lists. + +## Getting Started + +- Fork the repository on GitHub +- Read the [README](README.md) for build and test instructions +- Play with the project, submit bugs, submit patches! + +## Contribution Flow + +This is a rough outline of what a contributor's workflow looks like: + +- Create a topic branch from where you want to base your work (usually master). +- Make commits of logical units. +- Make sure your commit messages are in the proper format (see below). +- Push your changes to a topic branch in your fork of the repository. +- Make sure the tests pass, and add any new tests as appropriate. +- Submit a pull request to the original repository. + +Thanks for your contributions! + +### Format of the Commit Message + +We follow a rough convention for commit messages that is designed to answer two +questions: what changed and why. The subject line should feature the what and +the body of the commit should describe the why. + +``` +scripts: add the test-cluster command + +this uses tmux to setup a test cluster that you can easily kill and +start for debugging. + +Fixes #38 +``` + +The format can be described more formally as follows: + +``` +: + + + +