
Automatic merge from master to 6.x branch #5986

Merged: 39 commits into elastic:6.x, Jan 4, 2018

Conversation

@tsg (Contributor) commented Jan 4, 2018

There were no conflicts besides the versions.

7AC and others added 30 commits December 18, 2017 12:45
elasticsearch: show event being indexed in case of failure
…#5888)

* Add exclude_files option to auditbeat file (elastic#5342)

Added a new config field, `exclude_files` consisting of regular expressions
used to exclude files from monitoring by the file integrity module.
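
A sketch of what such a configuration might look like (the paths and patterns here are illustrative, not from the PR):

```yaml
# Hypothetical example: exclude editor artifacts from file integrity monitoring.
# Patterns are regular expressions matched against the full file path.
- module: file_integrity
  paths:
    - /bin
    - /etc
  exclude_files:
    - '~$'         # editor backup files
    - '\.sw[nop]$' # vim swap files
```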

* Sample configuration and docs

* Expose Metricbeat's testing CapturingReporterV2

* Added a test-case for excluded files

* Rely on ucfg unpacker to populate regexes

* Perform file exclusion earlier to reduce impact

* Document full path regexp

* RunPushMetricSetV2 waits for a number of events

* Fix godoc
Builds that set TEST_ENVIRONMENT=0 should be able to override this
value in order to avoid Docker usage.
This refactors logp and adds support for structured logging. logp uses github.com/uber-go/zap in its implementation. There are no changes to the user-facing logging configuration. The logger output will have some format differences, but in general will have a more consistent format across outputs.

Here is some sample output taken from the `TestLogger` test case.

```
=== RUN   TestLogger
2017-12-17T19:48:16.374-0500    INFO    logp/core_test.go:13    unnamed global logger
2017-12-17T19:48:16.374-0500    INFO    [example]       logp/core_test.go:16    some message
2017-12-17T19:48:16.374-0500    INFO    [example]       logp/core_test.go:17    some message with parameter x=1, y=2
2017-12-17T19:48:16.374-0500    INFO    [example]       logp/core_test.go:18    some message    {"x": 1, "y": 2}
2017-12-17T19:48:16.374-0500    INFO    [example]       logp/core_test.go:19    some message    {"x": 1}
2017-12-17T19:48:16.374-0500    INFO    [example]       logp/core_test.go:20    some message with namespaced args       {"metrics": {"x": 1, "y": 1}}
2017-12-17T19:48:16.374-0500    INFO    [example]       logp/core_test.go:21            {"empty_message": true}
2017-12-17T19:48:16.374-0500    WARN    [example]       logp/core_test.go:24    logger with context     {"x": 1, "y": 2}
2017-12-17T19:48:16.374-0500    INFO    [example]       logp/core_test.go:30    some message with struct value  {"metrics": {"x":1,"y":2}}
{"level":"info","timestamp":"2017-12-17T19:48:16.374-0500","caller":"logp/core_test.go:13","message":"unnamed global logger"}
{"level":"info","timestamp":"2017-12-17T19:48:16.374-0500","logger":"example","caller":"logp/core_test.go:16","message":"some message"}
{"level":"info","timestamp":"2017-12-17T19:48:16.374-0500","logger":"example","caller":"logp/core_test.go:17","message":"some message with parameter x=1, y=2"}
{"level":"info","timestamp":"2017-12-17T19:48:16.374-0500","logger":"example","caller":"logp/core_test.go:18","message":"some message","x":1,"y":2}
{"level":"info","timestamp":"2017-12-17T19:48:16.374-0500","logger":"example","caller":"logp/core_test.go:19","message":"some message","x":1}
{"level":"info","timestamp":"2017-12-17T19:48:16.374-0500","logger":"example","caller":"logp/core_test.go:20","message":"some message with namespaced args","metrics":{"x":1,"y":1}}
{"level":"info","timestamp":"2017-12-17T19:48:16.374-0500","logger":"example","caller":"logp/core_test.go:21","message":"","empty_message":true}
{"level":"warn","timestamp":"2017-12-17T19:48:16.374-0500","logger":"example","caller":"logp/core_test.go:24","message":"logger with context","x":1,"y":2}
{"level":"info","timestamp":"2017-12-17T19:48:16.374-0500","logger":"example","caller":"logp/core_test.go:30","message":"some message with struct value","metrics":{"x":1,"y":2}}
```

Any test code that calls `logp.LogInit()` in libbeat needs to be updated to use `logp.TestingSetup()`.
…correctly (elastic#5902)

Use `zk_open_file_descriptor_count` instead of `open_file_descriptor_count` to check whether ZooKeeper is running on Unix platforms.

On Unix platforms the stat is reported as `zk_open_file_descriptor_count`, per the [ZooKeeper Administrator's Guide](http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_zkCommands).
Avoid failure based on missing files or setup added by running `make
update`.
Closes elastic/apm-server#355.
This refactor makes it possible to reuse code in auditbeat
without depending on filebeat.
Files are opened using sharing flags for read, write, and delete when
hashing and resolving their owner.
- Use structured logging for the metrics that are periodically logged.

- Add beat.info.uptime.ms to the list of gauges so that the total value is always reported rather than a difference.

- Made a change to ensure that only non-zero counter values are reported at shutdown (this was a bug introduced in my last refactoring). Note that zero-value gauges are still reported, which makes the "Total non-zero metrics" message somewhat misleading.

Log samples:

```
2017-12-18T13:07:50.311-0500	INFO	[monitoring]	log/log.go:80	Starting metrics logging every 5s
2017-12-18T13:07:55.312-0500	INFO	[monitoring]	log/log.go:107	Non-zero metrics in the last 5s	{"monitoring": {"metrics": {"beat.info.uptime.ms": 5018, "beat.memstats.gc_next": 5089280, "beat.memstats.memory_alloc": 2587160, "beat.memstats.memory_total": 190951808, "libbeat.config.module.running": 4, "libbeat.config.module.starts": 4, "libbeat.config.reloads": 1, "libbeat.output.type": "elasticsearch", "libbeat.pipeline.clients": 8, "libbeat.pipeline.events.active": 41, "libbeat.pipeline.events.filtered": 1, "libbeat.pipeline.events.published": 41, "libbeat.pipeline.events.retry": 82, "libbeat.pipeline.events.total": 42, "metricbeat.docker.info.events": 1, "metricbeat.docker.info.success": 1, "metricbeat.system.cpu.events": 1, "metricbeat.system.cpu.success": 1, "metricbeat.system.filesystem.events": 5, "metricbeat.system.filesystem.success": 5, "metricbeat.system.fsstat.events": 1, "metricbeat.system.fsstat.success": 1, "metricbeat.system.load.events": 1, "metricbeat.system.load.success": 1, "metricbeat.system.memory.events": 1, "metricbeat.system.memory.success": 1, "metricbeat.system.network.events": 20, "metricbeat.system.network.success": 20, "metricbeat.system.process.events": 10, "metricbeat.system.process.success": 10, "metricbeat.system.process_summary.events": 1, "metricbeat.system.process_summary.success": 1, "metricbeat.system.uptime.events": 1, "metricbeat.system.uptime.success": 1}}}
2017-12-18T13:07:58.156-0500	INFO	[monitoring]	log/log.go:115	Total non-zero metrics	{"monitoring": {"metrics": {"beat.info.uptime.ms": 7862, "beat.memstats.gc_next": 5089280, "beat.memstats.memory_alloc": 2621032, "beat.memstats.memory_total": 190985680, "libbeat.config.module.running": 4, "libbeat.config.module.starts": 4, "libbeat.config.reloads": 1, "libbeat.output.type": "elasticsearch", "libbeat.pipeline.clients": 8, "libbeat.pipeline.events.active": 41, "libbeat.pipeline.events.filtered": 1, "libbeat.pipeline.events.published": 41, "libbeat.pipeline.events.retry": 82, "libbeat.pipeline.events.total": 42, "metricbeat.docker.info.events": 1, "metricbeat.docker.info.success": 1, "metricbeat.system.cpu.events": 1, "metricbeat.system.cpu.success": 1, "metricbeat.system.filesystem.events": 5, "metricbeat.system.filesystem.success": 5, "metricbeat.system.fsstat.events": 1, "metricbeat.system.fsstat.success": 1, "metricbeat.system.load.events": 1, "metricbeat.system.load.success": 1, "metricbeat.system.memory.events": 1, "metricbeat.system.memory.success": 1, "metricbeat.system.network.events": 20, "metricbeat.system.network.success": 20, "metricbeat.system.process.events": 10, "metricbeat.system.process.success": 10, "metricbeat.system.process_summary.events": 1, "metricbeat.system.process_summary.success": 1, "metricbeat.system.uptime.events": 1, "metricbeat.system.uptime.success": 1}}}
2017-12-18T13:07:58.156-0500	INFO	[monitoring]	log/log.go:116	Uptime: 7.867012418s
2017-12-18T13:07:58.156-0500	INFO	[monitoring]	log/log.go:93	Stopping metrics logging.
```
Instead of using flattened key names, write the metrics in "nested" format. There is less redundancy and it is more machine-friendly.

Sample logs:

```
2017-12-19T11:57:21.086-0500	INFO	[monitoring]	log/log.go:79	Starting metrics logging every 3s
2017-12-19T11:57:24.087-0500	INFO	[monitoring]	log/log.go:106	Non-zero metrics in the last 3s	{"monitoring": {"metrics": {"beat":{"info":{"uptime":{"ms":3004}},"memstats":{"gc_next":4194304,"memory_alloc":2298976,"memory_total":12648056}},"libbeat":{"config":{"module":{"running":0}},"output":{"type":"elasticsearch"},"pipeline":{"clients":1,"events":{"active":1,"published":1,"total":1}}},"metricbeat":{"file_integrity":{"file":{"events":1,"success":1}}}}}}
2017-12-19T11:57:25.305-0500	INFO	[monitoring]	log/log.go:114	Total non-zero metrics	{"monitoring": {"metrics": {"beat":{"info":{"uptime":{"ms":4222}},"memstats":{"gc_next":4194304,"memory_alloc":3336376,"memory_total":20548168}},"libbeat":{"config":{"module":{"running":0}},"output":{"type":"elasticsearch"},"pipeline":{"clients":0,"events":{"active":1,"published":1,"total":1}}}}}}
2017-12-19T11:57:25.305-0500	INFO	[monitoring]	log/log.go:115	Uptime: 4.226912514s
2017-12-19T11:57:25.305-0500	INFO	[monitoring]	log/log.go:92	Stopping metrics logging.
```
This patch fixes a race condition in the metricset test helper
* Auditbeat: Add setuid/setgid fields to file_integrity (elastic#5527)

Added two new fields, `setuid` and `setgid`, which are only present
if the given bit is set in the file. The only possible value is `true`.
These fields are only used on POSIX platforms.

* Compact representation of flags in schema

* Corrections and typos
This allows more efficient logging of MapStr objects, something I expect will be common.

zapcore.ObjectMarshaler: https://godoc.org/go.uber.org/zap/zapcore#ObjectMarshaler

```
Logging via logger.Infow("msg", logp.Reflected("mapstr", m)) which uses json.Marshal.
BenchmarkMapStrLogging-8   	  200000	      7433 ns/op	    1988 B/op	      43 allocs/op

With MapStr.MarshalLogObject implemented (no sorting).
Logging via logger.Infow("msg", "mapstr", m)
BenchmarkMapStrLogging-8   	  300000	      3986 ns/op	     304 B/op	       9 allocs/op

With MapStr.MarshalLogObject implemented (with sorting).
Logging via logger.Infow("msg", "mapstr", m)
BenchmarkMapStrLogging-8   	  300000	      4492 ns/op	     529 B/op	      17 allocs/op
```
By setting `logging.to_eventlog: true`, all log output will be written
to the Application log. The source name will be the name of the Beat.
* Add golang.org/x/crypto/blake2b to vendor

* Add blake2b hashing algorithm

Allow BLAKE2b as a file hashing algorithm (https://blake2.net/). In my benchmarks
it is quite fast.

| Hash Algorithm                  | Time per 100 MiB File (ns) | MiB / sec   |
|---------------------------------|----------------------------|-------------|
| BenchmarkHashFile/blake2b_256-8 | 127116193                  | 786.6818353 |
| BenchmarkHashFile/blake2b_512-8 | 127239979                  | 785.9165082 |
| BenchmarkHashFile/blake2b_384-8 | 129671956                  | 771.1767686 |
| BenchmarkHashFile/sha1-8        | 131347484                  | 761.3392884 |
| BenchmarkHashFile/md5-8         | 170146968                  | 587.7271936 |
| BenchmarkHashFile/sha512-8      | 200803749                  | 497.9986703 |
| BenchmarkHashFile/sha384-8      | 201153073                  | 497.1338419 |
| BenchmarkHashFile/sha512_224-8  | 201987854                  | 495.0792734 |
| BenchmarkHashFile/sha512_256-8  | 202126889                  | 494.7387282 |
| BenchmarkHashFile/sha256-8      | 297884549                  | 335.7005267 |
| BenchmarkHashFile/sha224-8      | 299384125                  | 334.0190466 |
| BenchmarkHashFile/sha3_224-8    | 335496603                  | 298.0656111 |
| BenchmarkHashFile/sha3_256-8    | 352318502                  | 283.834086  |
| BenchmarkHashFile/sha3_384-8    | 461460154                  | 216.7034339 |
| BenchmarkHashFile/sha3_512-8    | 651817080                  | 153.4172747 |
* Feature: Local Keystore to obfuscate sensitive information on disk

This PR allows users to define sensitive information in an obfuscated data store on disk instead of having it defined in plaintext in the yaml configuration.

This adds a few user-facing commands:

```
beat keystore create
beat keystore add output.elasticsearch.password
beat keystore remove output.elasticsearch.password
beat keystore list
```

The current implementation doesn't allow users to configure the secret with a custom password; this will come in future improvements of this feature.

* Changelog
A nice Apple Property List (plist) encoder/decoder library with
support for binary, XML, OpenStep and GNUStep formats.

BSD 2-clause license.

See https://github.com/DHowett/go-plist
This patch adds support to the file integrity module for reading
the kMDItemWhereFroms extended attribute. This is used by macOS
to store origin information for files obtained from an external
source, like the Internet, or transferred from another computer.

This information will be encoded in a new field, `origin`, consisting
of an array of strings.

For files downloaded from a web browser, the first string is the URL
of the source document. The second URL (optional), is the web address
where the download link was followed:
"origin": [
    "https://cdn.kernel.org/pub/linux/kernel/v4.x/ChangeLog-4.13.16",
    "https://www.kernel.org/"
]

For files or directories transferred via Airdrop, the origin is
the name of the computer that sent the file:
"origin": [
    "Adrian's MacBook Pro"
]

For files attached to e-mails (saved using Mail.app), the origin
consists of sender address, subject and e-mail identifier:
"origin": [
    "Adrian Serrano \[email protected]\u003e",
    "Sagrada Familia tickets",
    "message:%[email protected]%3E"
]

In case the kMDItemWhereFroms attribute is not present, the origin field
is absent.
Somewhere in the Auditbeat module refactoring this multi-field got dropped.

This was originally added in elastic#5625.
This adds `type Digest []byte` for representing the result of a hash function.

This new type implements the Stringer and TextMarshaler interfaces so that the digest value is represented as a hexadecimal string.
Instead of reporting the raw version as encoded in the protocol (3.x),
identify SSL 3.0 (version 3.0) and TLS 1.x (versions 3.1 and up)
* Haproxy module: Initial refactor and tests for http stats
* Haproxy http stats with basic authentication
* Haproxy show info is not supported in http stats endpoint
* Added documentation for HAproxy http stats frontend
* Use errors library for errors in haproxy module
…#5920 (elastic#5963)

New docker prospector properly sends log entries in message
field (see elastic#5920). Remove unused POD_NAMESPACE env var from
filebeat manifest.
ruflin and others added 9 commits January 3, 2018 16:48
This popped up in different PRs. Here is one fixing it separately.
* Add `NODE_OPTIONS="--max-old-space-size=4096"` to circumvent elastic/kibana#15683
* Make it more obvious in the Docker file which parts are copied over and which are from the official repo
* Remove xpack.monitoring config option as this seems to cause a rebuild.
* Increase LS timeout for health check
* Remove duplicated kafka health check
* Increase health check timeouts to 10min
* Set defaults for `ARGS` in kibana container. This makes it easy to also build the container directly.
* Adjusts docker image paths to all use `ES_BEATS` variable.
* Remove `detector_rules` for ML as not supported / required anymore in master.
* Remove unused `SHIELD=false` variable in compose file
* Fix geoip lookups that changed and improve error logging

It would be great if we could disable most of the Kibana x-pack plugins as they are not needed for our testing. But the problem is 1) we cannot install only 1 plugin AFAIK, and 2) disabling one of the plugins causes a rebuild, which makes the build even longer.

Closes elastic#5953
Currently the dashboard directories are called 5.x and default. The name default is a bit confusing as it is not clear what it means; in the current context it means the dashboards for all 6.x and 7.x releases. Unfortunately there are already breaking changes in 7.x, which means default does not work for both: elastic#5278

To allow us more flexibility, the directory 5.x is renamed to 5 and the directory default is renamed to 6. This allows us to add a directory for 7 dashboards in the future. The reason I switched away from the `.x` naming is that this scheme also allows us to introduce, for example, 6.2 dashboards, meaning dashboards which only work with 6.2 and upwards, in parallel to the 6 dashboards which work with all 6 releases.

Secretly removing newlines

Update auditbeat naming -> remove complete kafka directory as index pattern is now generated from fields.yml

rename all beats directories

This should not have any functional changes
This simplifies part of the condition tests to use table tests. This way the expected result and the condition itself are close together in the code, which makes them easier to read.

Cleanup was triggered when looking at elastic#5954 and realising it was hard to follow in a diff.
…ved for kubernetes pods in state_pod (elastic#5980)

Make sure the active phase status is retrieved for kubernetes pods
@tsg tsg added the review label Jan 4, 2018
@ruflin (Contributor) commented Jan 4, 2018

I think the module tests are failing because ES 6.2 has a different version of the geoip lookup library than 7.0. For some of the IP addresses these versions return different geo locations. I would suggest we merge in the changes and do a follow-up PR to adjust the locations?

@ruflin ruflin merged commit 654fd78 into elastic:6.x Jan 4, 2018
@ruflin (Contributor) commented Jan 4, 2018

Merged, will do a follow-up PR with the fix.

@tsg tsg deleted the automatic_merge_from_master_to_6.x_branch branch January 14, 2018 16:09