[Fleet] Implement Kafka output form UI #143324
Comments
Pinging @elastic/fleet (Feature:Fleet)
@jen-huang @nimarezainia Do we already have the design for this work?
@jlind23 Yes we do, a link to the designs can be found in the product definition doc in the parent issue of this one.
@jen-huang is the tech definition ready to be worked on in our next sprint?
@jlind23 I'm still going to work on it this week.
@jen-huang Since you changed this issue title to "implement", I believe the status should be changed to "ready" accordingly? Shall I also remove your assignment?
I'm currently looking at the schema validation for the new kafka type, and it would be much cleaner if we moved from
to
Of course we would keep the old endpoint for a few releases and mark it as deprecated. @kpollich suggested redirecting the code to the right path based on the
As discussed, will move this to Sprint 12 and continue the work there.
We will likely need to feature flag this, as the Agent work will not be ready in the same release. However, we still want to enable customers to test SNAPSHOT builds of the Agent once it is ready. I think we should use the "Advanced Settings" Kibana infra for the feature flagging instead of kibana.yml settings, so that a customer can easily turn on this feature without having to reconfigure and restart Kibana.
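For comparison, the kibana.yml route would piggyback on Fleet's existing experimental-features setting. A minimal sketch, assuming a hypothetical flag name (`kafkaOutput` is illustrative, not the confirmed identifier):

```yaml
# kibana.yml sketch: enabling an experimental Fleet feature at startup.
# Unlike an Advanced Setting, changing this requires a Kibana restart.
xpack.fleet.enableExperimental:
  - kafkaOutput   # assumed flag name for illustration
```

The trade-off discussed above is exactly this: a yml flag requires editing config and restarting Kibana, while an Advanced Setting can be toggled from the UI at runtime.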
@criamico this will be included in our next sprint. @joshdover had a great idea about first delivering the API experience to unblock users, and then working on the UI part in a second PR. Both could land in separate releases if needed. What do you think?
The API approach is fine as a first step. However, what users will ultimately need is the full UI capabilities. In other words, if a user uses the API to create and configure the output, what would other users see in the Fleet UI?
@nimarezainia The API-first approach has the benefit of unblocking Elastic Agent E2E tests with the Kafka output; it does not necessarily imply that we should ship it to our users without any UI.
This is a good question. We'll still have to make a few UI adjustments to make sure this new output type doesn't break our existing UIs. I would suggest just showing the output row for Kafka outputs in the Settings tab, but disabling the edit button with a tooltip: "Use the Fleet API to edit this output"
Thanks @juliaElastic. So is the Fleet API request format exactly the same as what is written into the Agent policy?

```yaml
outputs:
  default:
    type: kafka
    ca_trusted_fingerprint: 79f956a0175
```
There is some translation, e.g.
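To illustrate the kind of translation involved (the field names and shape below are assumptions for illustration, not the confirmed mapping): the API accepts a flat request body per output, which Fleet then renders into the agent policy's `outputs` map keyed by output ID:

```yaml
# Fleet API request body (JSON in practice, shown as YAML for readability;
# names and values are illustrative placeholders):
#   name: my-kafka
#   type: kafka
#   hosts: ["kafka1:9092"]
#   ca_trusted_fingerprint: "<fingerprint>"
#
# Rendered agent policy output (display-only fields like `name` dropped):
outputs:
  my-kafka:
    type: kafka
    hosts: ["kafka1:9092"]
    ca_trusted_fingerprint: "<fingerprint>"
```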
Perhaps a silly question, but is there any integration with the other end of the Kafka pipeline? We have https://docs.elastic.co/integrations/kafka_log, and if I gathered the tickets correctly, we also have an opinionated default name for topics. It would be great if some option were given to deploy an agent that consumes the topics the integrations are sending to and uses the default data streams / pipelines, to ease rollout.
I have a question about topics and processors in general. It is unlikely that Endpoint will be able to support the full gamut of available processors, considering we do not have access to libbeat and would have to write all the parsing code from scratch in C++. Is there a minimum set of required processors that could maybe help us tone down the scope of what we will need to provide? cc: @nfritts @ferullo ref: https://www.elastic.co/guide/en/beats/filebeat/current/defining-processors.html
@brian-mckinney I'm unsure how this ties into this issue specifically, but processors are generally there for edge processing: typically to drop traffic / reduce payload, or to collect additional (local context) information (the add_*_metadata processors + dns). With Kafka in the middle (Source -> shipper (agent/endpoint?) -> Kafka -> forwarder (agent/filebeat) -> Elasticsearch), you still have the option to use all processors at the forwarder, though the add_*_metadata processors aren't useful there, as they would record information about the forwarder host rather than the original shipper. Regardless, it should improve things over the currently available options. And of course there are ingest pipelines for everything that doesn't require edge processing. I think it's a separate issue, but things like decode_* can be skipped, as can parse_aws_vpc_flow_log, which seems like a poorly named decode_ variant.
I fully agree that Endpoint should avoid re-implementing Beats processors. From my understanding, the Kafka output only makes use of the conditional processors for topic selection, which does pare down the list somewhat. But I wonder if we could ship the first version of this without support for dynamic topic selection at all, and only support a static topic. If/when we do want to support dynamic topic selection, I think we could omit some conditions, like network or contains (covered by regexp).
This is a good suggestion, but not considered at this point. It would make sense for our default and the examples in our docs to match up with what the Kafka input package expects.
Thanks @joshdover. I'm very interested in the outcome of this discussion. Scoping dynamic topic selection out of the first version would definitely reduce the amount of effort and testing complexity (on Endpoint at least).
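For context, the dynamic topic selection being discussed follows the Beats Kafka output pattern of per-topic conditionals, with the first matching rule winning. A sketch (topic names and conditions are illustrative, and the exact Fleet schema may differ):

```yaml
outputs:
  default:
    type: kafka
    hosts: ["kafka1:9092"]
    # Static topic, used when no dynamic rule matches (the "first version
    # only" scope proposed above would stop here):
    topic: "logs-default"
    # Dynamic topic selection via conditionals:
    topics:
      - topic: "logs-critical"
        when.contains:
          message: "CRITICAL"
      - topic: "logs-security"
        when.regexp:
          event.dataset: "^security\\."
```

Supporting only the static `topic` field would mean Endpoint needs no condition-evaluation code at all in the first phase.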
This PR addresses the UI aspect of #143324 Happy path https://github.com/elastic/kibana/assets/29123534/d1664e68-1fb6-42b8-8585-d7132c47d76f --------- Co-authored-by: kibanamachine <[email protected]>
@joshdover & @brian-mckinney dynamic topic selection is an attractive aspect of this solution; I have had a few customers engage on it. However, given where we are and the fact that this will begin as a beta, I think it's fair to address this as a follow-up. I will communicate this to our beta candidates when the time comes.
Could someone please clarify which authentication methods will be available in the first phase? (Appreciated, thanks!)
@szwarckonrad Could you clarify which options we ended up implementing in the UI for this first phase? @nimarezainia I think we're also limited by what Endpoint ends up supporting, which is still in progress. @brian-mckinney should be able to help clarify this.
@joshdover Following the mockups, I went with UI for username/password and SSL.
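Those two auth options correspond to the SASL and SSL settings of the Beats Kafka output. A hedged sketch of what a resulting policy fragment might look like (all values are placeholders, and the exact fields Fleet emits may differ):

```yaml
outputs:
  default:
    type: kafka
    hosts: ["kafka1:9093"]
    # Username/password authentication (SASL):
    username: "fleet-writer"
    password: "changeme"
    sasl.mechanism: PLAIN   # assumption; Beats also supports SCRAM mechanisms
    # SSL transport security:
    ssl.certificate_authorities: ["/etc/pki/ca.pem"]
    ssl.certificate: "/etc/pki/client.pem"
    ssl.key: "/etc/pki/client.key"
```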
Thank you. I just need to know these limitations as we engage the beta customers.
Reopening for further testing.
Hi @kevinlog Could you please share a guide with valid values for the various fields of the Kafka output configuration, and if possible a demo recording for testing this feature? It will be very helpful for us in understanding how it works and testing it. Thanks!
@amolnater-qasource - yes, we can work towards that. @nimarezainia @faec - can I work with you to provide a good test config for the Kafka output? In addition, we could record a demo during the integration testing on the shared server.
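While waiting for an official test config, a minimal configuration for smoke testing against a local single-broker cluster might look like the following. This is a sketch with placeholder values, not an endorsed test setup:

```yaml
outputs:
  default:
    type: kafka
    hosts: ["localhost:9092"]   # local test broker
    topic: "elastic-agent-test" # any topic the broker auto-creates or that exists
    required_acks: 1            # wait for local commit only
    compression: none           # simplest option while debugging
```

Consuming `elastic-agent-test` with a console consumer should then show the raw events the agent ships.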
@kevinlog Can we close this issue or carry it over to Sprint 16?
@juliaElastic Closing this as the latest PR has now been merged.
We have executed 8 testcases under the Feature test run for the 8.10.0 release at the link: Status:
One further testcase is pending, and we would require some help in executing it: could anyone please help us set the correct details in the processor? Logs from the main topic: Here, we aim to get data under Build details: Please let us know if anything else is required from our end. cc: @jlind23
Thank you for resolving @pierrehilbert
The pending testcase execution is also now done under the Feature test plan at: Kafka Output Status: As testing of this feature is complete, we are marking this as QA:Validated. Please let us know if any other scenario needs to be tested from our end. Thanks!
Kafka output UI
Similar to Logstash output, we need to add the option for users to specify Kafka as an output option for their data. In 8.8, this UI will be hidden behind an experimental flag as the shipper portion is not ready until 8.9.
Tasks

- Hide behind a feature flag (`kibana.yml`) until the shipper supports it
- Map the `elastic-agent.yml` fields (most are the same, but there are a few differences due to needing information for the UI)

API

The output API should support a new output type: `kafka`. See Kafka Output type.

This output type should have the following properties:

UI tasks

- Hosts: "Specify the URLs that your agents will use to connect to Kafka. For more information, see the Fleet User Guide"
- Topics: `topics[]` array (text input box) with an "Add row" button below
- Client ID: default `Elastic Agent`
- Compression: default `gzip` with compression level 4; options: `none`, `snappy`, `lz4`, `gzip`; if `gzip` is selected, also show a field for the compression level
- Broker timeout: "Define how long a Kafka server waits for data in the same cluster"
- Timeout: "Define how long an Agent would wait for a response from Kafka Broker"
- Channel buffer size: "Define the number of messages buffered in output pipeline"
- Required ACKs: "Reliability level required from the broker"; default "Wait for local commit"; options: "Wait for local commit", "Wait for all replicas to commit", "Do not wait"
- Key: "If configured, the event key can be extracted from the event using a format string"
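The UI settings above broadly mirror the Beats Kafka output settings. As a hedged sketch of how they might render into an agent policy (setting names follow the Beats Kafka output; the defaults and values shown are illustrative, not the confirmed Fleet mapping):

```yaml
outputs:
  default:
    type: kafka
    hosts: ["kafka1:9092", "kafka2:9092"]
    client_id: "Elastic Agent"   # assumed default client ID
    compression: gzip            # none | snappy | lz4 | gzip
    compression_level: 4         # only relevant when compression is gzip
    broker_timeout: 10           # seconds a Kafka server waits for data in-cluster
    timeout: 30                  # seconds the Agent waits for a broker response
    channel_buffer_size: 256     # messages buffered in the output pipeline
    required_acks: 1             # 1 = local commit, -1 = all replicas, 0 = do not wait
    key: "%{[event.id]}"         # optional event key via a format string
    topic: "elastic-agent"       # static topic (dynamic selection deferred)
```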
Designs
Open questions