[Automatic import] readme input types templates #194308

Draft · wants to merge 7 commits into `main`

Changes from 4 commits
@@ -12,6 +12,29 @@ Check the [datastreams guidelines](https://www.elastic.co/guide/en/integrations-

## Requirements

Elastic Agent must be installed. For more information, refer to [these instructions](https://www.elastic.co/guide/en/fleet/current/elastic-agent-installation.html).

### Installing and managing an Elastic Agent:

You have a few options for installing and managing an Elastic Agent:

#### Install a Fleet-managed Elastic Agent (recommended):

With this approach, you install Elastic Agent and use Fleet in Kibana to define, configure, and manage your agents in a central location. We recommend using Fleet management because it makes the management and upgrade of your agents considerably easier.

#### Install Elastic Agent in standalone mode (advanced users):

With this approach, you install Elastic Agent and manually configure the agent locally on the system where it’s installed. You are responsible for managing and upgrading the agents. This approach is reserved for advanced users only.
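
For orientation, the sketch below shows roughly what a minimal standalone configuration (`elastic-agent.yml`) can look like. This is an illustrative example only; the output host, API key, input type, and paths are placeholder assumptions, not values from this integration:

```yaml
# Minimal standalone elastic-agent.yml sketch (illustrative; all values are placeholders).
outputs:
  default:
    type: elasticsearch
    hosts: ["https://my-elasticsearch:9200"]  # replace with your Elasticsearch endpoint
    api_key: "<api-key>"                      # replace with a real API key

inputs:
  - id: my-logfile-input                      # hypothetical input collecting local log files
    type: logfile
    use_output: default
    streams:
      - paths:
          - /var/log/my-app/*.log             # replace with the paths you want to collect
```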

#### Install Elastic Agent in a containerized environment:

You can run Elastic Agent inside a container, either with Fleet Server or standalone. Docker images for all versions of Elastic Agent are available from the Elastic Docker registry, and we provide deployment manifests for running on Kubernetes.
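
As a sketch of the Kubernetes route, a Fleet-enrolled agent is commonly deployed as a DaemonSet. The trimmed manifest below is illustrative only; the image tag, Fleet Server URL, and enrollment token are placeholders, and Elastic's published manifests add RBAC, volumes, and other settings:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: elastic-agent
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: elastic-agent
  template:
    metadata:
      labels:
        app: elastic-agent
    spec:
      containers:
        - name: elastic-agent
          image: docker.elastic.co/elastic-agent/elastic-agent:8.15.0  # pin the version you need
          env:
            - name: FLEET_ENROLL              # enroll into Fleet on startup
              value: "1"
            - name: FLEET_URL                 # placeholder Fleet Server URL
              value: "https://your-fleet-server:8220"
            - name: FLEET_ENROLLMENT_TOKEN    # placeholder enrollment token
              value: "<enrollment-token>"
```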


You need Elasticsearch for storing and searching your data and Kibana for visualizing and managing it.
You can use our hosted Elasticsearch Service on Elastic Cloud, which is recommended, or self-manage the Elastic Stack on your own hardware.


The requirements section helps readers to confirm that the integration will work with their systems.
Check the [requirements guidelines](https://www.elastic.co/guide/en/integrations-developer/current/documentation-guidelines.html#idg-docs-guidelines-requirements) for more information.

@@ -20,8 +43,27 @@ Check the [requirements guidelines](https://www.elastic.co/guide/en/integrations
Point the reader to the [Observability Getting started guide](https://www.elastic.co/guide/en/observability/master/observability-get-started.html) for generic, step-by-step instructions. Include any additional setup instructions beyond what’s included in the guide, which may include instructions to update the configuration of a third-party service.
Check the [setup guidelines](https://www.elastic.co/guide/en/integrations-developer/current/documentation-guidelines.html#idg-docs-guidelines-setup) for more information.

### Enabling the integration in Elastic:

#### Create a new integration from a Zip file (optional)
1. In Kibana, go to Management > Integrations.
2. Select "Create new integration".
3. Select "Upload it as a .zip".
4. Upload the .zip file.
5. Select "Add to Elastic".

### Install the integration
1. In Kibana, go to Management > Integrations.
2. In the "Search for integrations" search bar, type "{{ package_name }}".
3. Click the "{{ package_name }}" integration in the search results.
4. Click the "Add {{ package_name }}" button to add the integration.
5. Add all the required integration configuration parameters.
6. Click "Save and continue" to save the integration.

## Troubleshooting (optional)

- If some fields appear conflicted under the ``logs-*`` or ``metrics-*`` data views, this issue can be resolved by [reindexing](https://www.elastic.co/guide/en/elasticsearch/reference/current/use-a-data-stream.html#reindex-with-a-data-stream) the affected data stream.

Provide information about special cases and exceptions that aren’t necessary for getting started or won’t be applicable to all users. Check the [troubleshooting guidelines](https://www.elastic.co/guide/en/integrations-developer/current/documentation-guidelines.html#idg-docs-guidelines-troubleshooting) for more information.

## Reference
@@ -0,0 +1,5 @@
### Collecting logs from CloudWatch

When collecting logs from CloudWatch is enabled, users can retrieve logs from all log streams in a specific log group. The `filterLogEvents` AWS API is used to list log events from the specified log group. Amazon CloudWatch Logs can be used to store log files from Amazon Elastic Compute Cloud (EC2), AWS CloudTrail, Route 53, and other sources.
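
For standalone setups, the equivalent collection can be expressed as an `aws-cloudwatch` input. A minimal sketch, assuming the Filebeat-style `aws-cloudwatch` input; the log group ARN and credentials are placeholders:

```yaml
filebeat.inputs:
  - type: aws-cloudwatch
    # Placeholder ARN; point this at the log group to collect from.
    log_group_arn: arn:aws:logs:us-east-1:123456789012:log-group:my-log-group
    start_position: beginning        # read existing events first, then tail
    scan_frequency: 1m               # how often to poll for new log events
    access_key_id: "<access-key-id>"
    secret_access_key: "<secret-access-key>"
```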

{% include "ssl-tls.md.njk" %}
@@ -0,0 +1,26 @@
### Collecting logs from S3 bucket

When collecting logs from an S3 bucket is enabled, users can retrieve logs from S3 objects that are pointed to by S3 notification events read from an SQS queue, or by directly polling the list of S3 objects in an S3 bucket.

The use of SQS notifications is preferred: polling the list of S3 objects is expensive in terms of performance and cost, so it should be used only when no SQS notification can be attached to the S3 bucket. This input integration also supports S3 notifications from SNS to SQS.

The SQS notification method is enabled by setting the `queue_url` configuration value. The S3 bucket list polling method is enabled by setting the `bucket_arn` and `number_of_workers` configuration values. `queue_url` and `bucket_arn` cannot both be set at the same time, and at least one of the two must be set.
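
A minimal sketch of the two mutually exclusive modes, assuming the Filebeat-style `aws-s3` input (the queue URL and bucket ARN are placeholders):

```yaml
filebeat.inputs:
  # Preferred: SQS notification mode; set queue_url and leave bucket_arn unset.
  - type: aws-s3
    queue_url: https://sqs.us-east-1.amazonaws.com/123456789012/my-queue

  # Alternative: direct bucket list polling; set bucket_arn and number_of_workers,
  # and leave queue_url unset. Shown commented out because the two modes are exclusive.
  # - type: aws-s3
  #   bucket_arn: arn:aws:s3:::my-bucket
  #   number_of_workers: 5
```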

#### To collect data from AWS SQS, follow these steps:
1. If data forwarding to an AWS S3 bucket hasn't been configured, first set up an AWS S3 bucket as mentioned in the documentation above.
2. Follow the steps below for each data stream that has been enabled:
   1. Create an SQS queue
      - To set up an SQS queue, follow "Step 1: Create an Amazon SQS queue" in the [Amazon documentation](https://docs.aws.amazon.com/AmazonS3/latest/userguide/ways-to-add-notification-config-to-bucket.html).
      - While creating the SQS queue, provide the same bucket ARN that was generated when creating the AWS S3 bucket.
   2. Set up an event notification from the S3 bucket using the instructions [here](https://docs.aws.amazon.com/AmazonS3/latest/userguide/enable-event-notifications.html). Use the following settings:
      - Event type: `All object create events` (`s3:ObjectCreated:*`)
      - Destination: SQS Queue
      - Prefix (filter): enter the prefix for this data stream, e.g. `alert_logs/`
      - Select the SQS queue that has been created for this data stream

**Note**:
- A separate SQS queue and S3 bucket notification are required for each enabled data stream.
- Permissions for the above AWS S3 bucket and SQS queues should be configured according to the [Filebeat S3 input documentation](https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-input-aws-s3.html#_aws_permissions_2).
- Data collection via AWS S3 Bucket and AWS SQS are mutually exclusive in this case.

{% include "ssl-tls.md.njk" %}
@@ -0,0 +1,29 @@
### Collecting logs from Azure Storage

#### Create a Storage account container

To create the storage account:

1. Sign in to the [Azure Portal](https://portal.azure.com/) and create your storage account.
2. While configuring your project details, make sure you select the following recommended default settings:
- Hierarchical namespace: disabled
- Minimum TLS version: Version 1.2
- Access tier: Hot
- Enable soft delete for blobs: disabled
- Enable soft delete for containers: disabled

3. When the new storage account is ready, you need to take note of the storage account name and the storage account access keys, as you will use them later to authenticate your Elastic application’s requests to this storage account.

##### How many Storage account containers?

The Elastic Agent can use one Storage account container for all integrations.

#### Running the integration behind a firewall

When you run the Elastic Agent behind a firewall, you need to allow traffic on port `443` for the Storage Account container to ensure proper communication with the necessary components.

##### Storage Account Container

Port `443` is used for secure communication with the Storage Account container. This port is commonly used for HTTPS traffic. By allowing traffic on port 443, the Elastic Agent can securely access and interact with the Storage Account container, which is essential for storing and retrieving checkpoint data for each event hub partition.
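
For reference, this is roughly where the storage account container appears in configuration. A minimal sketch, assuming the Filebeat-style `azure-eventhub` input; the account name, key, container name, and connection string are placeholders:

```yaml
filebeat.inputs:
  - type: azure-eventhub
    eventhub: "insights-operational-logs"              # placeholder event hub name
    consumer_group: "$Default"
    connection_string: "<event-hub-connection-string>" # placeholder
    storage_account: "mystorageaccount"                # the account created above
    storage_account_key: "<storage-account-key>"       # one of the access keys noted earlier
    storage_account_container: "filebeat-checkpoints"  # container used for partition checkpoints
```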

{% include "ssl-tls.md.njk" %}