[DISCUSS] Alerting + Security #36836
Comments
/cc @bmcconaghy @peterschretlen @mikecote Apologies if these are already things that you all have thought through; I don't want to be stepping on anyone's toes. |
A few (random) thoughts:
|
A few notes from me as well:
|
After discussing with @kobelb I now understand where he's coming from for having multiple types per saved object. It comes down to feature controls and removing scheduled alerts from API responses the user shouldn't see (ex: disabling APM should hide APM scheduled alerts from the user). The same should happen to history: one type per application. We can automate this process within the alerting code so developers don't need to know about the multiple object types, and this would also satisfy the feature controls requirements without having to change security code. We also covered that object-level security would ideally be a perfect fit for this, but the feature won't be ready in time for us to use. |
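To make that concrete, here's a purely hypothetical sketch of how the alerting code could map a consumer application to its own saved object type, so feature controls apply without plugin authors having to know about the multiple types; the consumer list, type-name scheme, and helper names are illustrative, not an actual implementation:

```js
// Hypothetical sketch only: derive a per-consumer saved object type so that
// feature controls (e.g. disabling APM) automatically hide that application's
// scheduled alerts. All names here are illustrative.
const ALERT_CONSUMERS = ['apm', 'monitoring', 'siem', 'uptime'];

// e.g. 'apm' -> 'alert-apm', 'monitoring' -> 'alert-monitoring', ...
function savedObjectTypeForConsumer(consumer) {
  if (!ALERT_CONSUMERS.includes(consumer)) {
    throw new Error(`Unknown alert consumer: ${consumer}`);
  }
  return `alert-${consumer}`;
}

// The alerting client resolves the concrete type internally, so plugin
// authors only ever deal with a single logical "alert".
function createAlert(savedObjectsClient, consumer, attributes) {
  return savedObjectsClient.create(savedObjectTypeForConsumer(consumer), attributes);
}
```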
@jkakavas interested in your input re: the "Which "identity" do we use for the background job?" question. |
Assuming you mean Elasticsearch ABAC, I wouldn't encourage that path. It's a platinum feature and I don't think you want to leave this wide open to users on Gold or Basic. If you are storing things with mixed access policies in a single index, then I think the only feasible option is to use the kibana system user to read from the index and then implement the necessary checks in the Kibana server. |
Thanks for taking a look at this @tvernum . Do you have any thoughts on the subject of "Which "identity" do we use for the background job?" We want to make sure that a user cannot create an alert that can access data the user herself cannot, but also want to avoid the situation where a user creates a lot of mission critical alerts and then leaves the company and the alerts all break. I think we are leaning towards running the ES queries for an alert as a service account but with an API key that has been limited down to mirror the permissions the user who created the alert has. |
Given that we both want to:
the solution of getting an API Key as the service account but passing the roles of the authenticated user seems like a good compromise and IIRC this is the best solution we came up with during the GAH session too. This leaves a few outstanding items to consider:
|
@jkakavas @tvernum @mikecote It sounds like we have a general consensus that it's an OK compromise to run the alerts as a system user but pass the roles of the logged-in user in when we obtain the API key. I think it would be an OK user experience to ask the user when the alert is created how they want the alert to run (as the logged-in user, or as a system user with the logged-in user's roles), so long as we can provide enough context in the UI that the creator can make the right decision for the business purpose of the alert being created.
Regarding the two points Ioannis raises, I think we will need to store a piece of metadata on the alert that tells us what user and roles the alert was created with, and then do two things:
1. Wire into the Kibana user UI a task to check the API keys for a given user to make sure they match the changes that were made, and regenerate them if not.
2. Run basically the same process on some interval so that if roles change for a user outside the UI, we eventually synchronize the API keys with the current roles.
For the second point I think the choice in the UI for which user the alert should run as will suffice.
All of this leaves us with one still unaddressed issue: we need API keys to be a basic feature (alerting itself will be basic) and we also need TLS not to be required for using API keys. If TLS remains a requirement for API keys, then users can't use alerting without TLS, and by extension they effectively can't use any solution that relies on alerting for full functionality (Uptime, SIEM, APM, etc.). We are hoping to deliver some subset of the features of alerting for 7.3. Can the ES security team commit to delivering API keys in basic and the relaxation of the TLS requirement for 7.3? |
I disagree with this. We shouldn't allow end-users to decide the context of their alert because as soon as we allow free-form alerts, we open up the possibility of information disclosure if the internal Kibana server user has more privileges than the user creating the alert itself.
This will only be possible for alerts created by the logged in end-user. We need the user to be actively logged in to generate the API keys. |
@kobelb The proposal for running as the Kibana system user would include limiting the API token to the roles that the logged in user has, so I don't think we're adding additional risk in doing this (unless I am missing something). |
I'm not sure how this would work, per the create API key docs.
This would require the internal kibana server user to have all of the roles of the authenticated users, which isn't really possible. |
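For context, here's a rough sketch of the Create API Key call via the @elastic/elasticsearch client (the index names and privileges are made up; see the create API key docs for the exact role descriptor format). Because role_descriptors take full role definitions rather than role names, and a key can never be granted more than the privileges of the user that creates it, the Kibana server user would need at least the creating user's privileges for the proposal above to work:

```js
// Illustrative sketch, not production code.
const { Client } = require('@elastic/elasticsearch');
const client = new Client({ node: 'http://localhost:9200' });

async function createAlertApiKey(alertId) {
  const { body } = await client.security.createApiKey({
    body: {
      name: `alert-${alertId}`,
      role_descriptors: {
        // A "limiting" role descriptor: a full role definition, not a role name.
        alert_reader: {
          index: [{ names: ['metrics-*'], privileges: ['read'] }],
        },
      },
    },
  });
  return body; // contains id, name, and api_key
}
```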
Sorry, I've had this tab open for a few days, but haven't had a chance to put down all my thoughts.
I'm curious about how we might do this. Is it based on Kibana Feature Controls for the type of alert that is being displayed? I ask because I think it ends up being somewhat related to the identity questions. There's a fundamental question about who owns an alert - are these system owned, or user owned, and I wonder whether we ought to be clear in our thinking about that before we decide on the executing identity.
The fundamental constraint we have is that we have no way of reflecting a change in a user's role-assignments in their stored jobs. This is true for all the solutions that we have used in various features (with the exception of storing a username+password, which has other problems). The root cause is that for many authentication schemes we have no method of determining when a user's roles change. For example, in SAML & PKI we can only resolve a user's roles when we have their metadata, and we don't have a mechanism for obtaining that metadata outside of a successful authentication. So if a user never logs in, we don't know that their roles have changed. We can (depending on implementation) reflect changes in role definitions for the resolved set of roles. That is, we wouldn't know that Bob has had the "Admin" role added or removed, but if Bob has the "Admin" role when we create the job then it's theoretically possible to detect that the "Admin" role has been granted new privileges.
For the same reasons we don't track changes in a user's assigned roles, we also don't track when users are disabled. This is an even harder problem than tracking role changes - if a SAML user is disabled in the Identity Provider, then they will never log in to ES again, so we will never know that they have been disabled (or removed from the "ES users" group, or whatever might imply that their access has been revoked).
I don't really like this solution. I'm not absolutely opposed to it, but I'd prefer we didn't do it.
Firstly, it just feels wrong. It's moving API keys under a system account out of a concern that the keys might get revoked if they're under a user account. But that's not really an accurate representation of what's going on. This is a user job, acting with a user's privileges; I'd prefer we model it that way. If people want their jobs to run under a service account, they should create a service account and log in as that user when they create jobs.
Secondly, it requires that Kibana is able to create API keys under some superuser account (because the user running the job might be a superuser, this privileged account can't be anything less than that). That implies Kibana is effectively superuser - because if you have the kibana user+password you can create an API key with superuser privileges (per @kobelb's comment above).
Thirdly, it means we can never have a job running with lower privileges than the user that created it. One of the purposes of the "limiting roles" on an API key is so that you can create keys with exactly the privileges you need to do the job, and nothing more. While I don't expect the Kibana alerts framework to have that feature day one, it would be nice if we could offer it down the track.
Fourthly, the API doesn't actually support this right now. You can't pass in the user's roles because you don't have the user's role definitions - you only have the names. We'd have to build an extension to the Create API Key endpoint that took role names instead of role definitions.
Finally, it makes it hard for the cluster admin to "revoke user with extreme prejudice". If the user can create a job and hide the API keys under a system account, then how does the cluster admin terminate everything that user created? They would expect that "revoke user's API keys" would do that, but it won't.
For the reasons stated above, I don't think it is technically feasible to do this unless that user is actually logged in to Kibana. You cannot get the roles for a non-native user unless they are logged in.
I'll respond to these on the separate thread. |
@tvernum: @kobelb and I had a lengthy discussion on the subject of run as identity and API keys and basically reached the same conclusions that you have above (I think). What we settled on was the following:
Brandon, please correct anything I'm misrepresenting here. |
Kibana alerting is going to be built using API Keys, and should be permitted on a basic license. This commit moves API Keys (but not Tokens) to the Basic license. Relates: elastic/kibana#36836. Backport of: elastic#42787
Closing this out as we're no longer actively using this issue for discussion. |
Stack Monitoring's Alerts
The stack monitoring team would like to create their own alerts which run "automatically" to do things like create cluster alerts.
Saved Object Security Model
The following largely assumes that alerts themselves will be "saved objects", and as such will have to abide by the "saved objects security model". If this is an invalid assumption, this section is largely invalid but we'll want to discuss alternate plans.
I'm assuming that they'd like for these alerts to be "space agnostic" so that they can be visible regardless of the space that the user is in. For an alert to be "space agnostic" this is specified per "saved object type" like the following:
kibana/x-pack/plugins/xpack_main/index.js, line 98 in 03cef22
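For reference, here's a minimal sketch of roughly what such a per-type declaration looks like in a legacy plugin definition; the plugin id and the 'alert' type name below are placeholders, not the actual code at the referenced line:

```js
// Sketch of a legacy Kibana plugin declaring a space-agnostic saved object type.
export const alerting = (kibana) =>
  new kibana.Plugin({
    id: 'alerting',
    uiExports: {
      savedObjectSchemas: {
        alert: {
          isNamespaceAgnostic: true, // visible regardless of the current space
        },
      },
    },
  });
```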
Additionally, we'll likely only want users of monitoring to be able to see their own monitoring-related alerts. To enable this, we'll likely want to implement a dedicated "monitoring alerts saved object type" and add this to the list of saved object types which users have when they have monitoring:
kibana/x-pack/plugins/monitoring/init.js, lines 70 to 73 in 03cef22
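As a rough, hypothetical sketch (the registration shape is approximate and the 'monitoring-alert' type name is an assumption, not the code at the referenced lines), granting such a dedicated type only to users who have the monitoring feature could look something like:

```js
// Approximate sketch: tie a hypothetical 'monitoring-alert' saved object type
// to the monitoring feature so only monitoring users can see those alerts.
xpackMainPlugin.registerFeature({
  id: 'monitoring',
  name: 'Stack Monitoring',
  privileges: {
    all: {
      savedObject: {
        all: ['monitoring-alert'], // dedicated type for monitoring's alerts
        read: ['config'],
      },
      ui: [],
    },
  },
});
```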
Where do we store the results?
We'll only want the results of the alerts themselves to be visible to those with access to monitoring. We could potentially store the results in an index which only the monitoring_user has access to, or we could store them in a dedicated "saved object type" themselves. I don't know if there's any infrastructure we should put in place to make this easier for all of the various applications consuming alerting.
Which "identity" do we use for the background job?
A large amount of the discussion which I've heard regarding alerting assumes that we'd like the alerts to run in the context of the user who scheduled them. I don't know if this is true for monitoring's use of alerts, and it feels like we'd want these to run under the identity of the Kibana internal server user, or another dedicated service account.
Generalizing stack monitoring's requirements
It sounds like each "consumer" of alerts will likely be creating their own "alerting type" which will be able to choose between the following:
and we'll have to create a dedicated "saved object type" for each. Thoughts?