Multiple issues - Fleet integrations across multiple spaces confuses Kibana/Fleet #3434
This is happening on all of the Elastic Cloud deployments we have, by the way, all on v8.2.0.
Related issue... when saved objects are deployed to a space they keep changing ID. For instance, the dashboard for Abuse.CH TI should be `ti_abusech-c0d8d1f0-3b20-11ec-ae50-2fdf1e96c6a6`, but once the content is deployed in a non-default space it gets a randomly generated ID such as `8db2585b-17ae-4bd2-add2-59a3c55d6c65`. This happens despite using API endpoints such as:
The same behaviour happens when copying things from the default space to other spaces, regardless of whether this is done via the Kibana webUI or straight API requests. The net result is that built-in features such as the TI presentation on the Security Overview page no longer "see" the dashboards as being installed, as they don't exist with the correct IDs. This is what the ID should look like, as an example, for a single default-space deployment.
In a multi-space deployment scenario the ID changes, but the original ID seems to be maintained as `originId`, e.g.
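The example that originally followed is not preserved here, so purely as a hedged illustration: inspecting the copied dashboard with the saved objects API should show both identifiers. The space name, credentials, and the exact set of returned fields below are assumptions on my part.

```bash
# Hedged illustration: look up the copied dashboard in a non-default space
# ("customerx" is a placeholder space ID; the object IDs are the ones quoted above).
KIBANA_URL="https://kibana.example.com"   # placeholder
AUTH="elastic:changeme"                   # placeholder credentials

curl -s -u "$AUTH" \
  "$KIBANA_URL/s/customerx/api/saved_objects/dashboard/8db2585b-17ae-4bd2-add2-59a3c55d6c65" \
  | jq '{id, originId}'

# Expected shape, per the behaviour described above (field set varies by Kibana version):
# {
#   "id": "8db2585b-17ae-4bd2-add2-59a3c55d6c65",
#   "originId": "ti_abusech-c0d8d1f0-3b20-11ec-ae50-2fdf1e96c6a6"
# }
```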
Same thing with the saved object import API, e.g. `POST {{kibana_url}}/s/{{space_id}}/api/saved_objects/_import?createNewCopies=false` with an NDJSON file as multipart/form-data.
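For reference, that request looks roughly like this as a curl call (the NDJSON file name, space ID, and credentials are placeholders, not from the original post):

```bash
# Sketch of the saved object import request described above.
curl -s -X POST \
  -u "elastic:changeme" \
  -H "kbn-xsrf: true" \
  --form file=@ti_abusech_assets.ndjson \
  "https://kibana.example.com/s/customerx/api/saved_objects/_import?createNewCopies=false"
```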
That request gets a response clearly showing the object IDs being nuked and replaced.
Cool... so, mostly an 8.x upgrade artefact. The core issues appear to be related to all of this: https://www.elastic.co/guide/en/kibana/current/sharing-saved-objects.html and the fact that Fleet is largely space-unaware. So, in summary:
NOTE: This was actually the case in 7.x and below, it seems; the behaviour changed in 8.x.
Indeed, the integration assets just won't be available in other spaces where other instances of the integration are in use (unless you copy/import them there, at which point Kibana will enforce a change of object ID).
Neat. Can someone take this on as a bug fix please? For anyone reading along and hitting the same issue(s)... here's what seems like a sane way to deal with this situation right now:
NOTE: You can also get all of the relevant integration assets installed into a space via:
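The exact command was not captured above. As a hedged sketch of one way to do it, the Fleet package install endpoint can be called with a space prefix so the Kibana assets get created in that space. The package name/version and space ID below are illustrative, and the EPM path has shifted between 8.x releases, so treat this as a starting point only.

```bash
# Hedged sketch: (re)install an integration package from within a specific space.
curl -s -X POST \
  -u "elastic:changeme" \
  -H "kbn-xsrf: true" \
  -H "Content-Type: application/json" \
  -d '{"force": true}' \
  "https://kibana.example.com/s/customerx/api/fleet/epm/packages/ti_abusech/1.3.1"
```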
We have noticed all of the above as well, and Fleet's lack of support for spaces has been a deal breaker for us in transitioning. To add to this, even adding namespaces to the integrations does not resolve this problem, since integration names still need to be unique across namespaces (which, from a programming perspective, makes me question the use of a namespace). It would be nice if this issue saw more traction, because in my opinion it's a gigantic oversight in terms of bringing multi-tenancy to Elastic.
We have exactly the same issue. Furthermore, we have cases where integrations suddenly start to be installed into random spaces while the referenced data view still exists in another space, which leads to broken dashboards. It seems as if the current solution or workaround is not to use spaces at all when using integrations. We run a large Elastic Stack in an enterprise environment, but this lack of consistency between integrations and spaces makes the out-of-the-box dashboards useless.

Furthermore, integrations randomly start to reinstall in different spaces, confusing the users, and there is no way to deal with it except manually deleting the integration. We currently have integrations that are in a broken state and require the removal of all objects in the affected spaces. But even then, the reinstall routine somehow automatically starts to add objects everywhere, resulting in multiple duplicated data views that initially work but break after restarting Kibana. Very funky.

In addition, it seems as if there are old objects from past versions of the same integrations that are not used anymore but still exist in the space under saved objects. It would make sense to clean up the saved objects from integrations that are no longer used. Also very challenging is that the integrations should only be installed from certain spaces. This confuses the users, and nobody gets it right, because checking which space you are in before installing the integration is something that is forgotten very quickly.

I've already posted this issue in the forum, hopefully creating some traction, because it baffles me that this doesn't work correctly. I'm sure many others have this issue too.
Same issue here. I have a case with support, but it is a pain to try to solve this.
Hi @colin-stubbs, fun fact or not, but I'm facing literally the same issue. This multi-tenancy concept from Elastic looks incomplete and, judging by the date of your report, doesn't seem to be getting considered by the dev team... I hope Elastic will do something about this part, because it could be a big plus for the product if it worked properly... I think we will go back to classic Beats without any integration between spaces... :(
@zez3 did you get an answer from support?
Zero answer. I imagine they're happy with the complexity challenge, as it'll lead to more independent deployments and more revenue. It's still a workable situation if you're multi-tenanting things, but you definitely shouldn't give a customer any visibility of Fleet.
@yakhatape
@nimarezainia can you say more about planning or if there is active work on this?
@zez3 as noted, Fleet's space awareness is one of our priorities; the details are currently being worked on, including how these assets will be constrained to a specific space.
@colin-stubbs I apologize for the difficulty you are having here. Please don't assume there's any malice for the sake of revenue generation. We care deeply about how the product is being used and endeavor to address all these popular deployment use cases. Supporting spaces is a priority for us, so we hope to provide a timeline for its support. Thank you.
No stress on my part anymore; I'm no longer operating a business that needs it. The "we care deeply" part is hard to see... engagement here on GitHub is hit and miss, as you can see above, with a bunch of people echoing that they have hit the same kind of issues and zero acknowledgement or response from an Elastic employee until today. Even when paying for a commercial subscription via Elastic Cloud, interacting with Elastic Cloud support gets very poor responses to issues like this. And that support team seems to be quite disconnected from Elastic development, from my experience trying to log issues with them. Though it's not really a surprise, as that support team's responsibility and focus is on keeping Elastic Cloud deployments running. So there's a huge gap that needs to be filled in terms of providing paying customers the ability to actually log problems or feature requests in order to get those issues onto a relevant Elastic team's radar.
@nimarezainia do you have any target date for it? Weeks, months, years? Or maybe a "roadmap" you could share with us?
Pinging @elastic/fleet (Team:Fleet)
Hi all. I'm the tech lead on Fleet + Integrations, and I'd like to provide as much of an update as I can here.

First of all, I want to say thank you for the diligent reporting and detailed comments explaining Fleet's various deficiencies, quirks, and undocumented behaviors around multi-space support in Kibana. It's encouraging to see users providing detailed, well thought out feedback in public. We appreciate it tremendously.

Second, I apologize for the lack of movement and communication on this issue in particular. Fleet's multi-space story has been discussed at length internally several times before, and there have been upstream projects in Kibana that we on the Fleet/Integrations team have been waiting on to begin making meaningful improvements in this area. Of course there will always be competing priorities as well (see our new serverless offering, Kafka support for Fleet-managed Agents, secrets support for integration variables, and many more features we've delivered in the last year). Still, there's no excuse for ghosting our paying customers on this issue. We absolutely should've communicated these upstream blockers and changes in priority better to our community members and users. Why do we have public GitHub repos if not to provide transparency and direct insight into Elastic's engineering decisions and processes? Treating improvements in this area as blocked without transparently and explicitly relaying that decision to our open source community was a miss on our part.

The largest blocker to a better multi-space story for Fleet: Kibana needs to have first-class support for "shareable" assets across spaces. "Assets" are things like dashboards, saved searches, visualizations, data views, tags, etc. - essentially anything an integration can ship that gets created and referenced in Kibana when a given integration is installed. This is a very large undertaking, as historically the main means for sharing asset content across spaces has been basic duplication. Copying and pasting "managed" assets like those Fleet manages for integrations is a bad experience, though, because those copied assets are left behind when packages are altered or upgraded.

The "shareable" content project is a massive undertaking that spans multiple teams and many core pieces of Kibana. Dashboards are essentially collections of many other asset types, and each of those asset types needs to be updated to be shareable using new sharing APIs and paradigms. The Kibana presentation team is hard at work on this, and you can track their progress on this large meta issue: elastic/kibana#167901.

This is not to say that the Fleet team is entirely blocked on making incremental improvements for users making heavy use of spaces today. We can improve docs and fix bugs to mitigate the headaches we see with multi-space environments today. The detailed reports in this issue and in support cases that we've seen in the last few months will inform those fixes, and we've already been making an effort to start patching up these smaller issues while we wait on the larger efforts above, e.g.
Moving forward, we will use this issue as a communication hub for improvements and movement on Fleet's multi-space story. Additionally, I will take it on as an action item to coalesce the detailed feedback and issues documented here and in several recent support cases into actionable issues and similar bugfix-style PRs. There's no sense sitting on our hands waiting for the shareable dashboard initiative to complete when we can make meaningful improvements for our multi-space users here and now. Look here for comments like this one, where we will collect recent bug fixes, new issues, and docs updates related to multi-space behavior in Fleet in the coming months.

Now, for a few things from the comments above that I cannot answer, however:
The product folks are hard at work on our product roadmap for the upcoming calendar year. As we establish requirements and specifics for the multi-space efforts related to Fleet in the coming few weeks, we'll know more. I think a safe bet would be "definitely not weeks, and probably not years", though I know that's quite noncommittal. I'll let Nima give a better answer when he's able.
This is valuable feedback @colin-stubbs, so thank you for that. I'd be curious to hear more specifics here if you're willing, and I'm sure someone from the support team would be eager to hear as much feedback as you're willing to give as well. @lucabelluccini sorry to volunteer you, but you're typically my go-to support resource 😅 - if there's someone better on the support management side, feel free to ping them here as well. I don't want this feedback to go unheard by the right people who can take action. Thank you all for your patience and feedback along the way. I hope this has been helpful, and please feel free to reach out with any questions or further feedback.
Fantastic response, @kpollich, thank you.
Hello 👋 We are aware that support for Fleet integrations across Kibana spaces is limited.
We have a few known issues about this in our support portal too (https://support.elastic.co/knowledge/f93188c4, https://support.elastic.co/knowledge/8c2e1720, https://support.elastic.co/knowledge/41de32b8). As @kpollich said, this feature is dependent on the Kibana "platform" work (elastic/kibana#167901).

As members of support, we usually raise enhancement requests to the product managers on customers' behalf when there's a gap in the product, or to report the importance of a given request. I would invite @colin-stubbs to open a support case so we can follow up on the issues and concerns with the Support team and see how we can improve. Thank you!
@kpollich thanks for the reply on this. If I'm reading this correctly, most of the response is around sharing objects between spaces in a way that eliminates duplicate assets and enables update-one-consume-many. Can you please confirm whether anything is also in the works (sorry if I missed it) to make Fleet and Spaces more tenant-aware?

If I'm reading the original issue statement correctly, the first part talks about the lack of separation between spaces, which makes securing tenants difficult if not impossible. We, too, were stymied by this challenge, and it prevented us from rolling out Fleet to a many-dozen-team implementation because we could not have them stepping on each other's policies, let alone have access to privileged account information. So for a multi-tenant environment (even one internal to a company), is progress anticipated on that through the other PRs you've mentioned? Or is this still a mostly unaddressed issue? Thanks for your time and engagement with this group!
Hey @mgevans-5, thanks for reaching out.
You are reading the above correctly: I was broadly speaking about sharing objects between spaces. As far as making Fleet more tenant-aware or supportive of multi-tenant environments: yes, we have a large project in the works for achieving this within Fleet. I can't share much, as the project is still being defined internally, but it is something we've seen from many, many customers (especially security use cases) since Fleet's initial release. I'll summarize some of what we have scoped out internally so far, just for transparency:
Something I'll note to avoid confusion: "agent policies" as a data model are parents of the "integration policy" model that contains a given "instance" of an integration. So, broadly speaking, access control related to agent policies will also apply to integration policies, as they're child objects of a given agent policy. There's more to be defined, but I think these broad requirements touch on the pain points we see in this thread. Once we have some public issues with the actual committed scope and implementation details for the multi-space effort, we'll make sure to link them up here. At that point it will probably make sense to close this issue and move over to the public tracking issue for that as well.

For now, I've spent some time gathering common issues and pain points related to Fleet and multi-space installations, both from public issues and internal support cases. I got a very rough start on a tracking issue for incremental improvements we can make to ease those pain points while we define the broader multi-space effort: elastic/kibana#172964. I've pulled a few of the issues there from the discussion here, but if there are any others we're happy to discuss them as well. As always, minimal reproductions are greatly appreciated (the description here is a perfect example).
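To illustrate the agent policy / integration policy relationship mentioned at the start of the comment above, here is a hedged sketch using the Fleet APIs (endpoint shape per recent 8.x releases; the URL and credentials are placeholders):

```bash
# Each package (integration) policy carries a policy_id pointing at its parent
# agent policy, so access control on the parent also scopes its children.
curl -s -u "elastic:changeme" \
  "https://kibana.example.com/api/fleet/package_policies" \
  | jq '.items[] | {integration_policy: .name, parent_agent_policy: .policy_id}'
```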
@mgevans-5 just curious whether adhering to spaces in Fleet and Integrations would be satisfactory for your use case? Are you expecting spaces, as a data-segregation concept, to also apply to your data, for example at the datastream level?
Hi @nimarezainia. We're not concerned at this point with whether the datastreams need to be space-specific. The metadata filters have worked to date; however, we've been looking at altering the index/stream configuration now that a proliferation of indices/streams is less of a threat to cluster health and memory consumption. This may allow us to create space-specific datastreams during ingest instead of relying on field filtering.
Thank you @mgevans-5. Would it be acceptable if all integrations that get installed, in the context of a space, inherit the space name as their namespace, so that you get segregation at the datastream level that way? For example, you are in space "coke" and install nginx. We fix the namespace in the nginx integration to "coke", and therefore your access logs would end up in a datastream like logs-nginx.access-coke. What @kpollich eloquently described above is about Fleet becoming space-aware to ease some of the operational headaches we encounter; it doesn't apply to all the elements within the Elastic Stack at this point. I am curious whether this step (making Fleet/integrations space-aware) would be useful for your deployment?
Hi all,

From my perspective data segregation has never really been an issue; Elasticsearch roles, index patterns, and even field-based limitations are sufficient. It would be nice if this was more intuitive as part of space-based configuration or role definition, but the capability is there.

For each instance of an integration policy the namespace is configurable, which results in a unique datastream for each space. For instance, I have "default" above, but that could also be "customerx". So, instead of "-default" at the end, if I configured one integration policy with "customerx" and another integration policy with "customery", I would get datastreams like the above but with "-customerx" or "-customery" on the end. You can then use role-based index patterning to restrict your "tenant" (aka. space) users to only seeing data in indices/datastreams that match "-customerx", which will result in what you want to achieve. e.g. a really basic example - more complexity is actually required for production, and I can't remember the full details of what it was, but you get the idea hopefully (see the sketch at the end of this comment).

So, as above, custom roles will provide data segregation based on index patterning, but if you think that extends to spaces and try to treat spaces as equivalent to a tenant, then you'll come unstuck. The issue has been, and still is, as per my original description, that Kibana doesn't understand or enforce RBAC for Kibana-based configuration elements in the same way that Elasticsearch does for indices. e.g. the issues again,
This is the main drama that people are facing and where the biggest room for improvement exists. If you assign Fleet access in a Kibana role, it has to be to "All Spaces"; there is no granular per-space RBAC that can be applied. This means that if someone in "customerx" gets assigned Kibana privileges to use Fleet, they CANNOT be restricted to seeing installed integrations and configured integration policies in "customerx" only; they will get everything. They will wind up seeing everything in "default", "customerx" and "customery". The "Integrations" option in the roles doesn't seem to do anything at present. It's just this Fleet option that controls everything.

This also means a "customerx" user, who should never see or know about other spaces, will be prompted to select between the "default", "customerx" and "customery" spaces when they log in... which is at the very least less than desirable.

This also means that a "customerx" user can then open an integration policy for "customery" and extract details from it, including but not limited to secrets such as API keys. e.g. this "customerx" user should not be seeing that integration policy at all... but they can, which means they can get to the JWT.

The reason someone would want to be able to give customers the ability to see and interact with their own integrations and integration policies is, from my experience, two-fold:
At the moment there's zero granularity to this, and it's non-obvious, if not impossible, to deal with. This is still the case in 8.11.1. What's the issue? Well... I suspect that it's mostly just that all of the Fleet-related configuration, for all spaces, is simply in hidden system indices that make more granular access kinda hard? e.g. this is where Fleet is actually configured and where all of the integration policy config exists... Splitting integration policy configuration out into separate indices per space might be at least part of the solution, so that index-based patterning (or a similar approach behind the scenes in Kibana) can be applied?
This means dashboards and other assets are only ever installed in the space the integration is first installed via, and for any other spaces where the integration is being "used" we have to manually copy them over. This also means that updates to official integration dashboards and other saved objects don't automatically get applied anywhere else except the space that the original assets are in. See the previous description at the start of this issue. Nothing's changed with this as of 8.11.1, as best I can tell.

The ability to "install" the integration to multiple spaces and have assets auto-update in them is what people want here. It might be more useful to think of integrations as being "installed" to the deployment, but "used" or "not used" in a space. e.g. this is not useful. e.g. I can do a "copy to space" individually on EACH AND EVERY COMPONENT, BUT THIS IS NOT USEFUL. The first one will copy fine, but now there are more conflicts and I have to do more clicky-clicky. e.g. manual export and import is the least frustrating way to get these into space "customerx", but STILL, from Fleet's perspective, the integration is NOT INSTALLED in "customerx". Hence those copied saved objects will never be updated as part of an integration update.
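As flagged earlier in that comment, here is a hedged reconstruction of the basic role example; the original screenshot is not preserved, so the role name, index patterns, and credentials below are purely illustrative, and a production role needs more than this.

```bash
# Hedged sketch: a role restricting a tenant's users to customerx-suffixed
# datastreams and to the "customerx" space, via the Kibana roles API.
curl -s -X PUT \
  -u "elastic:changeme" \
  -H "kbn-xsrf: true" \
  -H "Content-Type: application/json" \
  "https://kibana.example.com/api/security/role/tenant_customerx_read" \
  -d '{
    "elasticsearch": {
      "indices": [
        { "names": ["logs-*-customerx", "metrics-*-customerx"],
          "privileges": ["read", "view_index_metadata"] }
      ]
    },
    "kibana": [
      { "base": ["read"], "spaces": ["customerx"] }
    ]
  }'
```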
I actually use the previously mentioned "tenants" example (in my case these are internal/external teams/institutes) with space-to-role (namespace-based) RBAC for my productive environment. I also limit the access via ... I also tag all of my Agents in Fleet with a specific tenant tag. I was thinking that we could use that tag and allow a similar RBAC in Fleet. This could fix the issue of tenants seeing their own Agents, I mean for read-only status, not full access. I am also hitting the integration issue:
Indeed, this is what we will eventually need. Now to gain/give write permissions for tenants... Edit: Now there is: elastic/kibana#173404
@zez3 I'm 99.999% certain you can work around the data view chicken/egg problem by just creating the data view via the REST API. The Kibana webUI enforces the existence of matching indices/datastreams; however, the REST API at /s/{{space_id}}/api/data_views/data_view does not. Doco here: https://www.elastic.co/guide/en/kibana/current/data-views-api-create.html

FYI - the way I was automating onboarding of new "tenants" was via a Postman ( https://www.postman.com/ ) collection that created all of the necessary roles, users, Fleet integration policies, etc., including data views and Kibana advanced settings customisation. This all happened prior to any indices/datastreams being created in any way. It'll remove most of the risk of human error and lack of consistency if you approach it that way, and it can be fully integrated from a new customer self-service signup through to emailing them tenancy details, if you've got an automation platform and/or git with CI/CD-style pipelines being triggered in the background.
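For reference, a hedged sketch of that data view call per the API docs linked above; the space ID, pattern, name, and the use of allowNoIndex are assumptions on my part, not from the original comment:

```bash
# Hedged sketch: create a data view in a space before any matching
# indices/datastreams exist.
curl -s -X POST \
  -u "elastic:changeme" \
  -H "kbn-xsrf: true" \
  -H "Content-Type: application/json" \
  "https://kibana.example.com/s/customerx/api/data_views/data_view" \
  -d '{
    "data_view": {
      "title": "logs-*-customerx",
      "name": "customerx logs",
      "timeFieldName": "@timestamp",
      "allowNoIndex": true
    }
  }'
```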
Hi folks, I appreciate the extremely detailed discussions here, and I assure you we have no plans to ignore any of this. In fact, the detailed walkthroughs and screenshots in the comments above are greatly informing our definition of the ongoing multi-space Fleet project we're looking to deliver in the near future.

That being said, I am closing this issue in favor of elastic/kibana#175831, as the actual issues described here are related to Kibana, not the integrations repository. In an effort to place the discussion as close as possible to the code to which it relates, I've created the issue above as a centralized hub for future discussion around multi-space usage of Fleet/Integrations. I've also provided extensive context there, and will provide continual updates as we make progress on first-class multi-space support in the coming months.

Thanks for your continued patience, and thanks again for so much productive discussion on this issue.
Issue 1
Fleet integrations and policies are not constrained to a space, e.g. all integrations and all policies are available, in all spaces, to any user who has sufficient privileges to see them. It does not matter that the user is in "space1"; they will always see all integrations and all policies created in "space1" and in "space2".
This is confusing and limits the usefulness of spaces; it also bypasses intended RBAC and space access restrictions, revealing information to users who should not be able to see integrations in other spaces.
This also means they can open configured integrations and access credentials and other secrets (API keys, etc.) that they should not be able to access.
Kibana/Fleet should understand and respect the relationship between individual integration instances and the space they have been deployed in when multiple integrations across multiple spaces are used, only showing as installed the integrations relevant to the current space. Integrations deployed in other spaces should not be visible to a user working within a given space.
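A hedged sketch of how this can be observed via the Fleet API (the URL, credentials, and space IDs are placeholders); per the description above, both calls return the same full set of policies regardless of the space prefix:

```bash
# List integration (package) policies from two different space contexts.
for SPACE in space1 space2; do
  echo "== $SPACE =="
  curl -s -u "elastic:changeme" \
    "https://kibana.example.com/s/$SPACE/api/fleet/package_policies" \
    | jq -r '.items[].name'
done
```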
Issue 2
A direct effect of issue 1.
Integration assets are only deployed in the space the integration is first deployed in.
It is not possible to redeploy integration assets in other spaces, except by manually copying them between spaces from the "Management" -> "Saved Objects" page.
Or by exporting and reimporting saved object NDJSON files.
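A hedged sketch of scripting those two manual workarounds; the dashboard ID is the Abuse.CH one quoted earlier in this issue, while the space names and credentials are placeholders:

```bash
# Option A: copy saved objects (and their references) from one space to another.
curl -s -X POST \
  -u "elastic:changeme" -H "kbn-xsrf: true" -H "Content-Type: application/json" \
  "https://kibana.example.com/s/default/api/spaces/_copy_saved_objects" \
  -d '{
    "objects": [ { "type": "dashboard", "id": "ti_abusech-c0d8d1f0-3b20-11ec-ae50-2fdf1e96c6a6" } ],
    "spaces": ["customerx"],
    "includeReferences": true,
    "overwrite": true,
    "createNewCopies": false
  }'

# Option B: export from the source space, then import into the target space.
curl -s -X POST \
  -u "elastic:changeme" -H "kbn-xsrf: true" -H "Content-Type: application/json" \
  "https://kibana.example.com/api/saved_objects/_export" \
  -d '{"objects":[{"type":"dashboard","id":"ti_abusech-c0d8d1f0-3b20-11ec-ae50-2fdf1e96c6a6"}],"includeReferencesDeep":true}' \
  -o assets.ndjson

curl -s -X POST \
  -u "elastic:changeme" -H "kbn-xsrf: true" \
  --form file=@assets.ndjson \
  "https://kibana.example.com/s/customerx/api/saved_objects/_import?createNewCopies=false"
```

Either way, as described in this issue, Fleet still does not treat the integration as "installed" in the target space.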
Kibana/Fleet should identify missing assets and permit installation or reinstallation (overwrite) of all Kibana assets for an integration into any space, if they do not currently exist as saved objects in the space the user is working within.
Issue 3
A combination of issues 1 and 2.
Using the Palo Alto Networks Logs integration as an example.
If I have 2 x integration instances, both on v1.6.0, and v2.1.0 has just been released:
The first integration is in space ID "space1" and the second integration is in space ID "space2".
If I use the upgrade function in the Kibana/Fleet webUI, it will upgrade the first instance of the deployed integration to v2.1.0, in "space1".
But it will leave the integration in "space2" on v1.6.0 and will not upgrade it.
At this point there are no options in the webUI to manually upgrade the v1.6.0 integration that has been left hanging, regardless of whether I attempt this from the "default" space, "space1", or "space2".
There is no documentation available on how to correct this, and no obvious API endpoint to trigger upgrade of the integration.
The only option seems to be to delete the v1.6.0 integration and recreate it.
This also appears to mean that the ingest pipeline for v1.6.0 no longer exists, as the only integration pipeline in existence at this point is the v2.1.0 one.
I believe this means that logs from agents still assigned the v1.6.0 integration will be lost.
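For anyone triaging this, a hedged sketch for spotting integration policy instances left behind on the old version; "panw" is the Palo Alto Networks package name, while the URL and credentials are placeholders:

```bash
# List Palo Alto Networks integration policies with their package version and
# parent agent policy, to see which instances are still on v1.6.0.
curl -s -u "elastic:changeme" \
  "https://kibana.example.com/api/fleet/package_policies" \
  | jq -r '.items[]
           | select(.package.name == "panw")
           | "\(.name)\t\(.package.version)\t\(.policy_id)"'
```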