[tenable_io] Initial Release for the Tenable.io #4816
Conversation
Pinging @elastic/security-external-integrations (Team:Security-External Integrations)
🌐 Coverage report
@vinit-elastic can we ensure the integration is shipping as Beta, not Tech Preview (as per the screenshots)?
Hey @jamiehynds, we took that screenshot of the Tenable.io integration on stack version 8.6.0-SNAPSHOT, which is why the Technical Preview tag appears. We have also checked with the latest Elastic stack version, 8.5.3, where we get the Beta tag. Here I am sharing the screenshot.
packages/tenable_io/data_stream/asset/elasticsearch/ingest_pipeline/default.yml (resolved)
return false;
}
dropEmptyFields(ctx);
- append:
This one is a nice usage of the new kind type 👍
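For readers without the full diff: the hunk above is the tail of a script processor that recursively strips empty values before the events are finalized. A minimal sketch of that pattern, not necessarily the exact script shipped in this package:

```yaml
- script:
    description: Drops null/empty values recursively.
    lang: painless
    source: |
      boolean dropEmptyFields(Object object) {
        if (object == null || object == "") {
          return true;
        } else if (object instanceof Map) {
          ((Map) object).values().removeIf(value -> dropEmptyFields(value));
          return (((Map) object).size() == 0);
        } else if (object instanceof List) {
          ((List) object).removeIf(value -> dropEmptyFields(value));
          return (((List) object).size() == 0);
        }
        return false;
      }
      dropEmptyFields(ctx);
```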
- append:
    field: error.message
    value: '{{{ _ingest.on_failure_message }}}'
- append:
Do we need it in multiple places?
We have added multiple on_failure handlers at the potential failure points in the pipeline, such as the convert and date processors.
@vinit-elastic what Marius meant was that `event.kind` was already set to `pipeline_error` earlier in the pipeline. Do we need to append it again inside `on_failure`?
Yes, we have handled the scenario where the pipeline fails due to an unexpected situation in a processor (rename, script, etc.). In that case the earlier `pipeline_error` processor is skipped, control passes to the `on_failure` at the end, and the pipeline error is appended as expected.
Makes sense! But how about the other way around? I mean, is `error.message` populated even without any failures in the pipeline? If not, then we could remove the earlier processor and just let the `on_failure` set `event.kind` to `pipeline_error`.
No, we are not setting the error message in any case other than a pipeline error in this integration. However, we have added on_failure handlers to all the convert processors, and a failure there is caught by the processor-level `on_failure` rather than the `on_failure` at the root. We also do not want to add unnecessary processing by setting `event.kind` in every one of those handlers. Hence, we added one processor at the bottom that checks whether any error messages are present and, if so, appends `pipeline_error` to `event.kind`, since the root `on_failure` will not run when errors are caught by the on_failure associated with the processors.
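The layout described above corresponds roughly to the sketch below; the convert field name is illustrative, and only the `error.message`/`event.kind` handling mirrors the discussion:

```yaml
processors:
  - convert:
      field: tenable_io.asset.ipv4   # illustrative field name
      type: ip
      ignore_missing: true
      on_failure:
        # Caught here, so the root-level on_failure below never runs.
        - append:
            field: error.message
            value: 'Processor {{{_ingest.on_failure_processor_type}}} failed: {{{_ingest.on_failure_message}}}'
  # Last processor: if any processor-level handler recorded an error,
  # still tag the event as a pipeline error.
  - append:
      field: event.kind
      value: pipeline_error
      allow_duplicates: false
      if: ctx.error?.message != null
on_failure:
  # Root-level handler: only reached when a failure is not caught by a
  # processor-level on_failure.
  - append:
      field: error.message
      value: '{{{ _ingest.on_failure_message }}}'
  - append:
      field: event.kind
      value: pipeline_error
      allow_duplicates: false
```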
Cool, makes sense! Thanks!
description: Collect asset logs from Tenable.io.
template_path: httpjson.yml.hbs
vars:
  - name: interval
This one seems to be missing the proxy setting?
We have added a proxy at the package level. Here's the link to that line:
- name: proxy_url
- name: access_key
  type: password
  title: Access Key
  description: Access key for the Tenable.io API.
  multi: false
  required: true
  show_user: true
- name: secret_key
  type: password
  title: Secret Key
  description: Secret key for the Tenable.io API.
  multi: false
  required: true
  show_user: true
- name: proxy_url
  type: text
  title: Proxy URL
  multi: false
  required: false
  show_user: false
  description: URL to proxy connections in the form of http[s]://<user>:<password>@<server name/ip>:<port>. Please ensure your username and password are in URL encoded format.
- name: ssl
  type: yaml
  title: SSL Configuration
  description: i.e. certificate_authorities, supported_protocols, verification_mode etc.
  multi: false
  required: false
  show_user: false
  default: |
    #certificate_authorities:
    #  - |
    #    -----BEGIN CERTIFICATE-----
    #    MIIDCjCCAfKgAwIBAgITJ706Mu2wJlKckpIvkWxEHvEyijANBgkqhkiG9w0BAQsF
    #    ADAUMRIwEAYDVQQDDAlsb2NhbGhvc3QwIBcNMTkwNzIyMTkyOTA0WhgPMjExOTA2
    #    MjgxOTI5MDRaMBQxEjAQBgNVBAMMCWxvY2FsaG9zdDCCASIwDQYJKoZIhvcNAQEB
    #    BQADggEPADCCAQoCggEBANce58Y/JykI58iyOXpxGfw0/gMvF0hUQAcUrSMxEO6n
    #    fZRA49b4OV4SwWmA3395uL2eB2NB8y8qdQ9muXUdPBWE4l9rMZ6gmfu90N5B5uEl
    #    94NcfBfYOKi1fJQ9i7WKhTjlRkMCgBkWPkUokvBZFRt8RtF7zI77BSEorHGQCk9t
    #    /D7BS0GJyfVEhftbWcFEAG3VRcoMhF7kUzYwp+qESoriFRYLeDWv68ZOvG7eoWnP
    #    PsvZStEVEimjvK5NSESEQa9xWyJOmlOKXhkdymtcUd/nXnx6UTCFgnkgzSdTWV41
    #    CI6B6aJ9svCTI2QuoIq2HxX/ix7OvW1huVmcyHVxyUECAwEAAaNTMFEwHQYDVR0O
    #    BBYEFPwN1OceFGm9v6ux8G+DZ3TUDYxqMB8GA1UdIwQYMBaAFPwN1OceFGm9v6ux
    #    8G+DZ3TUDYxqMA8GA1UdEwEB/wQFMAMBAf8wDQYJKoZIhvcNAQELBQADggEBAG5D
    #    874A4YI7YUwOVsVAdbWtgp1d0zKcPRR+r2OdSbTAV5/gcS3jgBJ3i1BN34JuDVFw
    #    3DeJSYT3nxy2Y56lLnxDeF8CUTUtVQx3CuGkRg1ouGAHpO/6OqOhwLLorEmxi7tA
    #    H2O8mtT0poX5AnOAhzVy7QW0D/k4WaoLyckM5hUa6RtvgvLxOwA0U+VGurCDoctu
    #    8F4QOgTAWyh8EZIwaKCliFRSynDpv3JTUwtfZkxo6K6nce1RhCWFAsMvDZL8Dgc0
    #    yvgJ38BRsFOtkRuAGSf6ZUwTO8JJRRIFnpUzXflAnGivK9M13D5GEQMmIl6U9Pvk
    #    sxSmbIUfc2SGJGCJD4I=
    #    -----END CERTIFICATE-----
Might be a slight nitpick, but there have been several occasions where users were required to have different setups per type of data stream, so I am a bit hesitant to leave these at the top of the integration.
However, we can leave them there for now and modify them if necessary.
Hey @P1llus, the authorization parameters have been added at the package level. If we move the SSL and proxy parameters to the data stream level, then we would have to move the authorization parameters to the same level as well. Additionally, we are following the same pattern as all the previous connectors. However, if you think changes are required, we can change it.
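For reference, if per-data-stream setups ever do become necessary, the same variables could be declared in each data stream's manifest instead of at the package level. A hedged sketch of what that could look like (stream title and description here are illustrative, mirroring the asset data stream above):

```yaml
streams:
  - input: httpjson
    title: Tenable.io asset logs
    description: Collect asset logs from Tenable.io.
    template_path: httpjson.yml.hbs
    vars:
      - name: proxy_url
        type: text
        title: Proxy URL
        multi: false
        required: false
        show_user: false
      - name: ssl
        type: yaml
        title: SSL Configuration
        multi: false
        required: false
        show_user: false
```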
packages/tenable_io/data_stream/asset/elasticsearch/ingest_pipeline/default.yml (outdated, resolved)
packages/tenable_io/data_stream/asset/elasticsearch/ingest_pipeline/default.yml (outdated, resolved)
packages/tenable_io/data_stream/vulnerability/elasticsearch/ingest_pipeline/default.yml (resolved)
I know this is still in beta, but is there a way to add this integration myself to my Elastic Stack, without screwing up the integration when it comes out via the normal release?
@infosecwatchman glad to hear you're interested in using this integration. We're just wrapping up some final reviews, so it will be easier to wait until it's publicly available. @vinit-elastic @kcreddy are there any loose ends on the PR review or are we ready to merge?
Great! Do I have to upgrade my stack once this is merged?
Possibly. The integration will require v8.4 or later.
LGTM 👍🏼
I think I was just waiting for @P1llus's review to be resolved as well. Looks like we are good. I will merge it now.
Package tenable_io - 0.1.0 containing this change is available at https://epr.elastic.co/search?package=tenable_io
How long does it take for new packages to be loaded into the repository?
Hey @infosecwatchman - the integration is now available; however, it requires the latest version of the stack, 8.6.
Hello @infosecwatchman, the integration is designed so that on the first call all asset events are ingested, and from then on only assets whose 'updated_at' field has changed, or newly added assets, are ingested. Moreover, we are working on an enhancement for a user who faced an issue with a very large export job. Can you please confirm whether this is the same case as yours? If not, would you mind raising a new issue in GitHub? Thank you.
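As a rough illustration of that incremental behaviour (not the integration's actual template, which drives Tenable's export-job workflow), the httpjson input can persist the last `updated_at` it saw in a cursor and feed it into the next request; the endpoint and filter shape below are illustrative:

```yaml
- type: httpjson
  config_version: 2
  interval: 4h
  request.url: https://cloud.tenable.com/assets/export   # illustrative endpoint
  request.method: POST
  request.body:
    filters:
      # Only ask for assets updated since the stored cursor value.
      updated_at: '[[.cursor.last_updated_at]]'
  response.split:
    target: body.assets
  cursor:
    last_updated_at:
      value: '[[.last_event.updated_at]]'
```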
What does this PR do?
Integration release checklist
This checklist is intended for integrations maintainers to ensure consistency
when creating or updating a Package, Module or Dataset for an Integration.
All changes
New Package
Dashboards changes
Log dataset changes
How to test this PR locally
Related issues
Screenshots