Update code to master #2
Merged
* Move elb metricset to GA
* Update changelog
With retention leases, users do not need to set `index.soft_deletes.retention.operations` to replicate a Beats/APM index using cross-cluster replication. This removes a related note in the APM and Beats documentation.
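For reference, this is the setting that no longer has to be applied before replicating a Beats index. A minimal sketch, assuming it would have been applied through the Beat's template settings; the operation count is a placeholder, not a recommendation:

```yaml
# No longer required once retention leases are in place (illustrative only).
setup.template.settings:
  index.soft_deletes.retention.operations: 1024
```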
* Create metricset
* Work in progress
* Work on metricset
* Add serviceType config option
* Fix tests
* Add func to retrieve on all dimensions
* Work on different intervals
* Revert custom event format
* Work on reducing API calls
* Fix fields and dashboards, work on reducing the number of API calls
* Add JSON example
* Refactor
* Refactor
* Fix tests
* Fix tests
* Feedback
* commit of system/network_summary
- maps all fields in CloudTrail events
- requestParameters, responseElements, additionalEventData & serviceEventDetails are string representations
- adds event.original
- adds event.type
- adds event.kind
- adds event.outcome
- runs geoip processor
- runs agent processor
- populates the related.user array when possible
- uses the s3 input
- CloudTrail must write to an S3 bucket and send all Create events to an SQS queue we listen to

Fixes #14657
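A hypothetical Filebeat configuration for the new fileset might look like the following; the queue URL is a placeholder:

```yaml
filebeat.modules:
  - module: aws
    cloudtrail:
      enabled: true
      # SQS queue that receives notifications for the S3 bucket
      # CloudTrail writes to (placeholder URL).
      var.queue_url: "https://sqs.us-east-1.amazonaws.com/123456789012/cloudtrail-events"
```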
…15441)

* Refactor of build tooling for docker plugin
* Add new page statistics to system/memory
* Update field descriptions
* Make update
* Update data.json
* Update system tests
* Update changelog
* Fix conflicts
* Try to fix system python tests
…es (#14875)

* Refactor metagen to allow multiple resources to be enriched
Packetbeat now outputs TLS fields from ECS 1.3+:

- The additional information not covered by ECS is nested under tls.detailed.
- Fields already in ECS are removed from detailed to avoid bloat.
- A new configuration flag tls.include_detailed_fields toggles the inclusion of the extra fields. It is enabled by default.

Caveats:

- Originally this would output the top-level certificate in tls.server_certificate and the rest under tls.server_certificate_chain. ECS mandates that tls.server.certificate and tls.server.certificate_chain are mutually exclusive. To avoid confusion, a chain is always generated, even if it consists of a single certificate.
- The same applies to tls.client certificates.
- The behavior of the configuration options tls.send_certificates and tls.include_raw_certificates has changed slightly.

Non-populated TLS ECS fields:

- tls.curve: not implemented; requires parsing the server key exchange.
- tls.server.ja3s: JA3S is not implemented yet.
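As a sketch, disabling the extra fields could look like this in packetbeat.yml; the placement of the flag under the tls protocol section is assumed here:

```yaml
packetbeat.protocols:
  - type: tls
    ports: [443]
    # Enabled by default; set to false to drop the non-ECS details
    # nested under tls.detailed.
    include_detailed_fields: false
```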
* Add Azure Storage Dashboards
* Downgrade migration version
The ContainerStart wrapper used in tests was not waiting for the image pull to finish, causing flakiness, e.g. in TestDockerStart. The pull can be waited on by calling Close() on the response body, which also has to be done at some point to avoid leaks. This change also adds more context to error messages.
This PR introduces support for Google Cloud Platform in Functionbeat. This branch is located in the `elastic/beats` repository, so anyone on our team has access to it.

### Manager

#### Authentication

To use the API to deploy, remove and update functions, users need to set the environment variable `GOOGLE_APPLICATION_CREDENTIALS`. This variable should point to a JSON file which contains all the relevant information for Google to authenticate. (About authentication for GCP libs: https://cloud.google.com/docs/authentication/getting-started)

#### Required roles

* Cloud Functions Developer
* Cloud Functions Service Agent
* Service Account User
* Storage Admin
* Storage Object Admin

Note: the Cloud Functions Developer role is in beta. We should not make GCP support GA until it becomes stable.

#### Configuration

```yaml
# Configure functions to run on Google Cloud Platform, currently, we assume that the credentials
# are present in the environment to correctly create the function when using the CLI.
#
# Configure which region your project is located in.
functionbeat.provider.gcp.location_id: "europe-west1"
# Configure which Google Cloud project to deploy your functions.
functionbeat.provider.gcp.project_id: "my-project-123456"
# Configure the Google Cloud Storage we should upload the function artifact.
functionbeat.provider.gcp.storage_name: "functionbeat-deploy"

functionbeat.provider.gcp.functions:
```

#### Export

Function templates can be exported into YAML. With this YAML configuration, users can deploy the function using the [Google Cloud Deployment Manager](https://cloud.google.com/deployment-manager/).

### New functions

#### Google Pub/Sub

A function under the folder `pkg/pubsub` is available to get events from Google Pub/Sub.

##### Configuration

```yaml
# Define the list of available functions; each function is required to have a unique name.
# Create a function that accepts events coming from Google Pub/Sub.
- name: pubsub
  enabled: false
  type: pubsub

  # Description of the method to help identify them when you run multiple functions.
  description: "Google Cloud Function for Pub/Sub"

  # The maximum memory allocated for this function, the configured size must be a factor of 64.
  # Default is 256MiB.
  #memory_size: 256MiB

  # Execution timeout in seconds. If the function does not finish in time,
  # it is considered failed and terminated. Default is 60s.
  #timeout: 60s

  # Email of the service account of the function. Defaults to {projectid}@appspot.gserviceaccount.com
  #service_account_email: {projectid}@appspot.gserviceaccount.com

  # Labels of the function.
  #labels:
  #  mylabel: label

  # VPC Connector this function can connect to.
  # Format: projects/*/locations/*/connectors/* or fully-qualified URI
  #vpc_connector: ""

  # Number of maximum instances running at the same time. Default is unlimited.
  #maximum_instances: 0

  trigger:
    event_type: "providers/cloud.pubsub/eventTypes/topic.publish"
    resource: "projects/_/pubsub/myPubSub"
    #service: "pubsub.googleapis.com"

  # Optional fields that you can specify to add additional information to the
  # output. Fields can be scalar values, arrays, dictionaries, or any nested
  # combination of these.
  #fields:
  #  env: staging

  # Define custom processors for this function.
  #processors:
  #  - dissect:
  #      tokenizer: "%{key1} %{key2}"
```

#### Google Cloud Storage

A function under the folder `pkg/storage` is available to get events from Google Cloud Storage.

##### Configuration

```yaml
# Create a function that accepts events coming from Google Cloud Storage.
- name: storage
  enabled: false
  type: storage

  # Description of the method to help identify them when you run multiple functions.
  description: "Google Cloud Function for Cloud Storage"

  # The maximum memory allocated for this function, the configured size must be a factor of 64.
  # Default is 256MiB.
  #memory_size: 256MiB

  # Execution timeout in seconds. If the function does not finish in time,
  # it is considered failed and terminated. Default is 60s.
  #timeout: 60s

  # Email of the service account of the function. Defaults to {projectid}@appspot.gserviceaccount.com
  #service_account_email: {projectid}@appspot.gserviceaccount.com

  # Labels of the function.
  #labels:
  #  mylabel: label

  # VPC Connector this function can connect to.
  # Format: projects/*/locations/*/connectors/* or fully-qualified URI
  #vpc_connector: ""

  # Number of maximum instances running at the same time. Default is unlimited.
  #maximum_instances: 0

  # Optional fields that you can specify to add additional information to the
  # output. Fields can be scalar values, arrays, dictionaries, or any nested
  # combination of these.
  #fields:
  #  env: staging

  # Define custom processors for this function.
  #processors:
  #  - dissect:
  #      tokenizer: "%{key1} %{key2}"
```

### Vendor

* `cloud.google.com/go/functions/metadata`
* `cloud.google.com/go/storage`
- Allow for zero scope fields in options template

  The NetFlow v9 spec allows options templates that contain no scope fields. The netflow input was treating this case as an error and discarding the template, but that is only applicable to IPFIX.

- Use additional fields to populate bytes/pkt counters

  Some devices out there (Cisco NSEL) use fields 231/232 as bytes counters, when those are supposed to be layer 4 payload counters. This updates the ECS fields populator to use those fields when the expected ones are not found.

- Support a classId of 32 bits

  While the spec mandates a classId of 8 bits, some Cisco ASA devices actually use a 32-bit version of this field. This patches the field to allow up to 32-bit integers and updates the index pattern to use `long` for the `netflow.class_id` field.

- Add more fields from v9 Cisco devices

Fixes #14212
Co-authored-by: DeDe Morton <[email protected]>
Co-authored-by: Sophia Xu <[email protected]>
* Handle error message in handleS3Objects function
* Remove s3Context.Fail and use setError and done instead
* Add changelog
This dashboard wasn't updated after a couple of fields were renamed. Fixes: #15420
* Add test for publisher queue encode and decode.
* Run mage fmt.
* Fixes from code review.
…ger (#15557)

The conversion failed for strings with a leading zero and a decimal digit of 8 or 9, because the underlying runtime function would try to parse them as octal numbers. This is fixed by only allowing decimal and hex, which in turn makes the processor more aligned with its Elasticsearch counterpart.

Fixes #15513
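Assuming this refers to the libbeat `convert` processor, a minimal sketch of the case that used to fail; the field name is illustrative:

```yaml
processors:
  - convert:
      fields:
        # A string like "0809" previously failed: the leading zero triggered
        # octal parsing, and 8/9 are not valid octal digits. It now parses
        # as the decimal integer 809.
        - {from: "http.response.status_code", type: "integer"}
      ignore_missing: true
```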
This PR adds a new mage target to Functionbeat named `buildPkgForFunction`. It generates the folder `pkg` with the functions, to make testing the manager easier during development.
Use of `type: array` in some fields (which was inconsistent) caused those fields to be excluded from the template. This prevented pointing aliases at those fields, which we need in 7.6+. Those fields are now set to `keyword` explicitly so that they are included in the template. Fixes #15588
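A minimal fields.yml sketch of the change; the field name is hypothetical:

```yaml
# Declaring the type explicitly keeps the field in the index template,
# so aliases can point at it (field name is hypothetical).
- name: some.renamed.field
  type: keyword   # was `type: array`, which excluded it from the template
```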
…wing a PR (#15388)

* Add a PR template that provides valuable information when reviewing a PR
* Add CLA check
* Fix typo
* Address comments during review
* SF: Fix typo
* Add deprecation as PR type
* Make it clear how to strike through in markdown
* Add default configuration files to the checklist
* Modify cockroachdb source
* Define testdata
* Do not publish ports
* Update docs
* mage fmt update
* Describe containerized environment
* Update CHANGELOG.next.asciidoc
  Co-Authored-By: Chris Mark <[email protected]>
* Update data.json
* Rename image
* Update source after review
* Filter ibmmq_ metrics
* mage check
* Fix: mage check
* Don't expose port
* Rename status to qmgr
* Add subscriptions overview dashboard for IBM MQ module
* Add calls, messages overview dashboard for IBM MQ module
* Add screenshots
* Fix: mage check
* Fix: CHANGELOG
* Add explanation
* Fix: mage check

Co-authored-by: Chris Mark <[email protected]>
* Cleanup changelogs for master
* Remove extra header in CHANGELOG.asciidoc
* Add lambda metricset
Fix typo: "to either to" --> "either to"

Co-authored-by: NathanSegers <[email protected]>
…6438)

* Change sqs metricset to use Average statistic method
* Re-export dashboard for sqs
* Update changelog
* Add terms_field back into sqs visualizations
#16402)

* Change aws_elb autodiscover provider field name to aws.elb.*
* Add changelog
* Add region to googlecloud module config
* Add changelog
* Add warning when zone and region are both provided in config
* Add more unit tests for getFilterForMetric
* Check if instance is nil before checking labels/machinetype
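A sketch of the new option in a Metricbeat module config; the project and region values are placeholders:

```yaml
metricbeat.modules:
  - module: googlecloud
    metricsets: ["compute"]
    project_id: "my-project-123456"   # placeholder
    # New: collect from a whole region. A warning is logged when both
    # `zone` and `region` are set.
    region: "us-central1"
```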
* Improve ECS field mappings in aws module

  - elb fileset
    + cloud.provider
    + event.category
    + event.kind
    + event.outcome
    + http.response.status_code, convert to long
    + http.request.method, lowercase
    + tracing.trace.id
  - s3access fileset
    + client.address
    + client.ip
    + geo
    + client.user.id
    + cloud.provider
    + event.action
    + event.code
    + event.duration
    + event.id
    + event.kind
    + event.outcome
    + http.request.referrer
    + http.response.status_code
    + related.ip
    + related.user
    + user_agent
  - vpcflow fileset
    + cloud.provider
    + cloud.account.id
    + cloud.instance.id
    + event.kind

Closes #16154
* Match reference.yml to code default (#16476)
* Make update

Co-authored-by: Dan Roscigno <[email protected]>
* Fix disk used visualization in system host overview dashboard

  This change updates the `Disk Used` visualization to honor the default collection period of the `fsstat` metricset (1m). The visualization sets the query to `>=1m` to make sure we get a big enough bucket size when querying fsstat metrics.

* Put `CHANGEME_HOSTNAME` back in the dashboard
This PR adds support for the redis URL scheme when configuring the hosts. Each URL can have its own password, overriding the output's password setting. The URL scheme can also enable or disable TLS support: with `redis` TLS is always disabled, while with `rediss` TLS is enabled.
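For example, hosts can now mix plain and TLS endpoints, with per-URL passwords; the host names and password are placeholders:

```yaml
output.redis:
  hosts:
    # rediss:// enables TLS; the URL password overrides output.redis.password.
    - "rediss://:s3cr3t@redis1.example.com:6380"
    # redis:// always disables TLS.
    - "redis://redis2.example.com:6379"
```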
* Adding breaking change to 7.6.0 doc
* Fixing formatting
In the previous module assertion we were checking the keys manually, and if they didn't match we would just output the raw dictionary in the tests. This is not really useful, because you have to either inspect the dictionaries manually or use a local diff to know what exactly changed. This PR uses DeepDiff to diff the two dictionaries and output the difference between them, which makes debugging a little easier.
…processors. (#16450)

* Update go-ucfg to 0.8.3 to use the new field merging adjustments. Update fileset input merging to replace paths and append processors.
* Add changelog entry.
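As an illustration of the merge semantics, under the assumption that a fileset's `input` section is being overridden (module and values are hypothetical): user-supplied paths replace the fileset defaults, while user-supplied processors are appended to the ones the fileset ships with:

```yaml
filebeat.modules:
  - module: nginx
    access:
      enabled: true
      # Replaces the fileset's default paths rather than merging with them.
      var.paths: ["/var/log/nginx/access.log*"]
      input:
        # Appended to the fileset's own processors instead of replacing them.
        processors:
          - add_fields:
              target: ""
              fields:
                env: staging
```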
Include x-pack libbeat in general targets like make check or make clean. Not having it in make check meant less visibility of format errors and other checks.
New screenshots from Kibana 7.5.0 for Suricata alerts and events.
This allows passing in context for http requests made to Kibana. Requirement for elastic/apm-server#3185
Some processors hold resources that need to be explicitly released when the processor is no longer needed. At the moment there is no way to do this from the script processor, since processors have a stateless interface, so avoid using these processors in scripts; if they are needed, it is usually better to place them in the global configuration. The processors removed are the ones that add Docker and Kubernetes metadata.
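In other words, keep them at the top level of the configuration, where they live for the lifetime of the Beat, rather than instantiating them from a script; a minimal sketch:

```yaml
# Global processor configuration: the watchers behind these processors are
# started once and released only when the Beat shuts down.
processors:
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~
```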
* Update redis host settings docs

  The redis host setting now accepts passwords and URL schemes. This change updates the reference config files and docs.
- event.category
- event.kind
- event.outcome
- event.type
- related.ip
- switch haproxy pipeline to yaml

Closes #16162
…16468)

* Add vars
* changelog
* temp
* line
* update
* take test out
* Add cloudfoundry common client into x-pack/common.
* Run mage fmt.
* Add support for tlscommon.Config, removing the uaago dependency as it didn't expose overriding the TLSConfig. Add comment for location of documentation for event types.
* Use common.Cache, with the addition of not updating the expiration on get, instead of the kubernetes ttl cache. Fix other suggestions.
* Fix cache_test and add new test for not-updated access time.
* Handle error sooner in getAuthTokenWithExpiresIn.
## What does this PR do?

This PR changes the type of the `timeout` option of GCP functions from `time.Duration` to `string`.

## Why is it important?

The option was parsed as `time.Duration` and then converted to `string` when creating the payload to upload a function. However, the format of the converted value was not accepted by GCP. This prevented users from setting the timeout from the manager.
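After this change the option is passed through as a plain string in the format GCP expects; a sketch, with the function definition abbreviated:

```yaml
functionbeat.provider.gcp.functions:
  - name: pubsub
    type: pubsub
    # Now forwarded to GCP as-is (e.g. "60s"), instead of being parsed as a
    # time.Duration and re-serialized in a format GCP rejected.
    timeout: "60s"
```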
* Add translate_sid processor to Winlogbeat

  The `translate_sid` processor translates a Windows security identifier (SID) into an account name. It retrieves the name of the account associated with the SID, the first domain on which the SID is found, and the type of account.

Closes #7451
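A minimal sketch of the processor in winlogbeat.yml; the source field and targets shown are illustrative:

```yaml
processors:
  - translate_sid:
      field: winlog.event_data.MemberSid   # field holding the SID (illustrative)
      account_name_target: user.name       # receives the account name
      domain_target: user.domain           # receives the first domain the SID is found on
      ignore_failure: true
```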