[actions] adds a connectors_networking plugin #94807
Conversation
- integrate with actions agent, modify tester to test via webhook
- get a test with rejectAuth true/false working - yay
- add some doc on the proof of concept
- end-to-end test with ca works
- update poc with note about kibana config
💔 Build Failed
Failed CI Steps
Test Failures
- Kibana Pipeline / jest / Jest Tests.x-pack/plugins/actions/server - Actions Plugin setup() should log warning when Encrypted Saved Objects plugin is missing encryption key
- Kibana Pipeline / jest / Jest Tests.x-pack/plugins/actions/server - Actions Plugin setup() routeHandlerContext.getActionsClient() should not throw error when ESO plugin has encryption key
- Kibana Pipeline / jest / Jest Tests.x-pack/plugins/actions/server - Actions Plugin setup() routeHandlerContext.getActionsClient() should throw error when ESO plugin is missing encryption key

and 15 more failures, only showing the first 3.
Metrics [docs]
To update your PR or re-run it, just comment with:
Some thoughts after reading through the POC document:
Just assigned @legrego and @jportner to review - this is still at a proof-of-concept state; we're looking to see if the direction we're going is "safe" and also sufficient for now. There's not much (interesting) in the code itself - we'll need to nail down all the logic of when to apply the various options, etc (including usages with proxies). The direction is hopefully well-enough described in the document. Note that the original thought was to use ESOs to persist these networking options, and build a full management UI for it, but I realized late we can probably get by using kibana config instead (at least for now, with no client certs/keys being part of this yet), and will probably go down that route to get this out the door sooner. Assuming that's "safe" enough and actually works :-)
I took a pass through the poc.md doc, but haven't reviewed any code at this point. Interested to get Joe's thoughts on this as well once he's had a chance to review.
Many of my comments revolve around the UI-based saved object approach, which seems like we're going to defer anyway. I left them regardless, because it'll serve as a reminder for myself the next time we revisit this topic 🙂
tls: schema.nullable(
  schema.object({
    // below from https://nodejs.org/dist/latest-v14.x/docs/api/tls.html#
    reject_unauthorized: schema.nullable(schema.boolean()),
I'm a little nervous coupling the configuration options to a specific version of node.js. The more of these settings we support, the more risk we take on with each node.js upgrade. If Node decides to change the way these parameters behave, then we'll need to figure out a migration path, lest we introduce a breaking change. We've historically been able to jump major versions of node.js in our own minor releases without much fanfare, but if that ends up introducing a breaking change in any of these parameters, then that would complicate our upgrade process.
To put this in the form of a question: do we need to support all of these options out of the gate?
+1, I don't think we need to support all of these. My gut feeling is that `reject_unauthorized` and `ca` are all that anyone would realistically need to be able to configure (though `verificationMode` may be better than `reject_unauthorized`, see my comment further down #94807 (comment)).
I'm a little nervous coupling the configuration options to a specific version of node.js
The URL reference is perhaps misleading, with the version # - I just wanted to point to some doc describing the options. At least some of these APIs have been around since v0.3.2
I did wonder about perhaps using spec'd names for these from whatever RFC or whatever they are spec'd in. I could look those up. They may be worse :-)
Do we need to support all of these options out of the gate?
But that's my question to you! Of the options we have listed, I suspect the `rejectUnauthorized` and `ca` get us 95% of the way there.
But that's my question to you! Of the options we have listed, I suspect the `rejectUnauthorized` and `ca` get us 95% of the way there.
From my perspective, I agree -- `rejectUnauthorized` and `ca` are the two most important settings that we need to support. I posed this question to you because I wasn't sure about your specific requirements (e.g. if you had a number of ERs asking for cipher or DH curve configuration).
Supported TLS versions might be the next most important setting, but I don't think it's necessary to include that in this first phase. We will end up using the most recent TLS version that's mutually supported, which will work perfectly fine for a majority of use cases.
I do expect that we'll have scenarios where we're asked to support an older version of TLS, but I'd personally not cater to those setups unless we have to.
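If TLS version limits do come up later, Node already exposes them as standard socket options, so a sketch of what that could look like (only the `minVersion`/`maxVersion` option names are Node's; the per-connector agent wiring here is assumed, not this PR's code):

```typescript
// Sketch: restricting outbound connector traffic to TLS 1.2+ via the
// https.Agent used for the request. The agent name is illustrative.
import { Agent, AgentOptions } from 'https';

// refuse handshakes older than TLS 1.2, allow up to TLS 1.3
const tlsLimits: AgentOptions = {
  minVersion: 'TLSv1.2',
  maxVersion: 'TLSv1.3',
};

const connectorAgent = new Agent(tlsLimits);
```

Passing the agent to the HTTP client used for the webhook request would then apply the limits to every handshake it makes.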
reject_unauthorized: schema.nullable(schema.boolean()),
min_dh_size: schema.nullable(schema.number({ min: 0 })),
// below from https://nodejs.org/dist/latest-v14.x/docs/api/tls.html#tls_tls_createsecurecontext_options
ca: schema.nullable(schema.string({ minLength: 1 })),
Kibana supports CAs in both the `PEM` and `PKCS#12` formats. We don't have to necessarily support both of these immediately, but it'd be good to track this to see if there's interest in supporting both. `PEM` is likely the default, and what you're expecting here.
When you have Kibana configured to use a PKCS#12 keystore, it extracts the PEM certificate(s) when the config is loaded. I suspect the best thing to do for this enhancement (if we build a UI) is to allow a user to upload either format, but convert to PEM format before saving the object. In which case, this structure is fine!
Yes, I've been playing with PEMs - I'll see what the PKCS support is like. Speaking of, I was looking for the source of kibana_keystore (is that the name), and couldn't find it. Was wondering if it should somehow be involved here, but I think I saw the references were mostly about saving keys / passwords. And wondering if there were things like PEM parsers that we could use to extract a name or such for these, if we needed to. npm seems to have some that all shell out to openssl :-)
Yeah, the Kibana keystore is for saving secrets, which isn't applicable here IMO. Certificates are not secrets that need to be protected.
As far as PEM parsing is concerned, we already have a dependency on the `node-forge` package (to parse PKCS#12 files), which also supports X.509 operations. Something like this should work:
import { pki } from 'node-forge';
const cert = pki.certificateFromPem(pem);
const { subject } = cert;
### CNO's are space agnostic

For our current usage, it's not clear that we would need space-specific CNO's,
When thinking about these types of problems, I often consider the "multi-tenant" use case:
If I am a customer of some company, and that company grants me full access to a `larry` space (and nothing else), then what would we want that experience to look like for me? If the CNO is space-agnostic, then any changes that I make to connection options are going to impact all other tenants (spaces) on the install. Or perhaps I shouldn't be able to configure a CNO in the first place?
I'm not trying to steer you in a particular direction, but wanted to highlight this perspective, as it's often lost in these discussions.
Yes, thanks. I'm curious what the exposure here is, excluding access to client certs / keys. Algorithm parameters, public server cert, how much of that is sensitive?
Storing keys would force our hand on making these space aware. And then maybe sharing these across spaces would be sufficient, on an as needed basis?
Algorithm parameters, public server cert, how much of that is sensitive?
I don't think this is sensitive per se, but you wouldn't want any info from one tenant to be shown to another tenant (or, God forbid, to be deleted by another tenant!)
Perhaps a way around this, for the UI-centric approach, would be to only allow users with access to All Spaces (the global resource `'*'`) to have CRUD access for these CNO objects. Then individual tenants are not affected, but an administrator could still add the CNO on behalf of a tenant if needed.
Perhaps a way around this, for the UI-centric approach, would be to only allow users with access to All Spaces (the global resource '*') to have CRUD access for these CNO objects. Then individual tenants are not affected, but an administrator could still add the CNO on behalf of a tenant if needed.
Yeah this might be an acceptable approach. We've had a couple other features gated in a similar fashion (such as the ability to manage spaces), and it hasn't been too problematic yet. We have ERs to make this more granular, but it's not something that we have a lot of pressure to address right now.
on-prem; cloud would need to support PEMs as strings of the file contents (7
bit ascii anyway!)).

In fact, having written that, if we could do this, we could perhaps ship this
This certainly simplifies the implementation, but puts additional burden on the administrator. We've seen what happens with the ESO encryption key, and I suspect we'll have similar problems with yml-based configuration here.
For the record, I think a yml-based approach makes sense as a starting point, especially if we're OK with space-agnostic (global) settings. We'll just want to make our expectations clear in the docs, and provide helpful error messages when possible.
So if we decide on space-aware ESOs in the possible future, but we have "preconfigured" global options objects now, that mirrors how we do preconfigured connectors - specified in yaml, they're in every space, but the http api only creates space-aware connectors.
port, separated by `:` chars. Eg, URL `https://elastic.co` becomes key/id
`https:elastic.co:443`.

I know generally we use UUIDs as saved object ids, and especially for encrypted
UUIDv5 generates deterministic UUIDs, so it'd still be possible to use a UUID-based unique identifier, while still ensuring that two CNOs can't share a canonical URL.
We're using this within the Saved Objects migrator as part of the multi-namespace conversion process:
kibana/src/core/server/saved_objects/migrations/core/document_migrator.ts
Lines 847 to 856 in dc475c9
/**
 * Deterministically regenerates a saved object's ID based upon it's current namespace, type, and ID. This ensures that we can regenerate
 * any existing object IDs without worrying about collisions if two objects that exist in different namespaces share an ID. It also ensures
 * that we can later regenerate any inbound object references to match.
 *
 * @note This is only intended to be used when single-namespace object types are converted into multi-namespace object types.
 */
function deterministicallyRegenerateObjectId(namespace: string, type: string, id: string) {
  return uuidv5(`${namespace}:${type}:${id}`, uuidv5.DNS); // the uuidv5 namespace constant (uuidv5.DNS) is arbitrary
}
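Applied to this PR's canonical URL ids, the same trick might look like the sketch below. All names here are illustrative, the hand-rolled `uuidv5` just mirrors what the `uuid` package provides, and none of this is the plugin's actual code:

```typescript
// Sketch: deterministic, UUID-shaped CNO ids from a canonicalized URL.
import { createHash } from 'crypto';

// RFC 4122 DNS namespace; the choice of namespace constant is arbitrary here
const DNS_NAMESPACE = '6ba7b810-9dad-11d1-80b4-00c04fd430c8';

// minimal UUIDv5: SHA-1 over namespace bytes + name, then set version/variant bits
function uuidv5(name: string, namespace: string): string {
  const nsBytes = Buffer.from(namespace.replace(/-/g, ''), 'hex');
  const hash = createHash('sha1')
    .update(Buffer.concat([nsBytes, Buffer.from(name, 'utf8')]))
    .digest();
  const b = Buffer.from(hash.subarray(0, 16));
  b[6] = (b[6] & 0x0f) | 0x50; // version 5
  b[8] = (b[8] & 0x3f) | 0x80; // RFC 4122 variant
  const hex = b.toString('hex');
  return `${hex.slice(0, 8)}-${hex.slice(8, 12)}-${hex.slice(12, 16)}-${hex.slice(16, 20)}-${hex.slice(20)}`;
}

// the poc's `protocol:hostname:port` canonicalization, filling in default ports
function canonicalize(url: string): string {
  const u = new URL(url);
  const defaultPorts: Record<string, string> = { 'http:': '80', 'https:': '443' };
  const port = u.port || defaultPorts[u.protocol] || '';
  return `${u.protocol.replace(':', '')}:${u.hostname}:${port}`;
}

const cnoId = uuidv5(canonicalize('https://elastic.co'), DNS_NAMESPACE);
```

Since `https://elastic.co` and `https://elastic.co:443` canonicalize to the same `https:elastic.co:443` key, they map to the same id, which keeps the one-object-per-URL guarantee while still using UUID-shaped ids.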
- Should these saved objects be global or space-specific or shared?

- Do we need a new privilege to use the CRUD APIs on the saved objects? Or
We might be able to defer this work, but I suspect our users will want this level of control. We can accomplish this using a sub-feature privilege of `Actions and Connectors`, if desired.
Let's say I'm part of a team, and each of us wants to create our own webhook connector to `https://example.com` (perhaps with our own credentials). An administrator might have configured the connection options for `https://example.com` to adhere to my company's security requirements. Should I be authorized to update those options, and potentially influence the behavior of everyone else's connectors?
As a user making an innocent mistake, this would be embarrassing. As a malicious insider, this is a great opportunity to disrupt a large number of connectors, some of which may alert on mission-critical events, such as security incidents.
- Names! connectorsNetworking? erghh. should we use actions instead?

- Do we need to make the saved objects hidden? I don't think so, but worth
This largely depends on your security requirements. It sounds like you don't need anything fancy like the producer/consumer model you have for alerts/actions, so I suspect you're right about not needing to make these hidden.
- Do we need to make the saved objects hidden? I don't think so, but worth
  asking

- Should these saved objects be global or space-specific or shared?
I don't have any answers here, but I'll ramble for a bit anyway:
Global or single-namespace would be easiest for you, as you could use deterministic saved object ids to ensure that there was only a single object per url. Single-namespace gives administrators flexibility, but it's also a potential maintenance burden, as they'd have to keep connectors with the same URL in sync between spaces.
If they're built as shared objects, then you can share them to all spaces from day 1, which is nearly the same behavior as global. Then if you decide to make these space-specific (or shared to a subset) in the future, you can do so.
One caveat to this, is that a sharable object won't have the uniqueness guarantee that you were hoping for. Your saved object ids would no longer be the canonicalized `url`, so we'd have to come up with another way to solve that problem.
One caveat to this, is that a sharable object won't have the uniqueness guarantee that you were hoping for. Your saved object ids would no longer be the canonicalized `url`, so we'd have to come up with another way to solve that problem.
Well, sharable (multi-namespace) objects would be able to maintain unique IDs. Their IDs are globally unique across Kibana, just like global (namespace-agnostic) objects.
The problem is an edge case if you have an underprivileged user creating a CNO in one space, and a different underprivileged user creating a CNO for the same canonicalized URL in a different space -- each URL would have the same ID, but only the object that was created first would be able to use it. The other one would result in a 409 "unresolvable conflict" error.
If you used randomly generated IDs instead, and you want to maintain at-most-one-unique-URL-per-space a different way, you would have to prevent new CNOs from being created or shared by doing a preflight check. Sounds ugly.
I also only looked at poc.md for now, comments below
## one more thing - maybe this is just Kibana config?

Since thinking of doing this all via Kibana config, it's kinda stuck in my
mind. I suspect it would be good enough for most customers, and a full
UI treatment probably is over-kill, if customers are likely to need just a
handful (or less) of these.

If we were to do this, we could follow the lead of preconfigured actions,
in terms of design/implementation, in preparation if we ever had to do saved
objects as well (since actions supports both).

Means no UI, no HTTP APIs. Code is almost done :-)
Given the complexities for multi-tenant scenarios and the requirement for "protocol:hostname:port"-unique CNOs, I tend to agree with this sentiment. Kibana config will probably satisfy the vast majority of use cases.
xpack.connectorsNetworking.options:
  - url: https://localhost:5601
    name: dev-time certificates for kibana localhost
    tls:
      rejectUnauthorized: true
      ca: |
        -----BEGIN CERTIFICATE-----
        MIIDSzCCAjOgAwIBAgIUW0brhEtYK3tUBYlXnUa+AMmAX6kwDQYJKoZIhvcNAQEL
If we go the config route, it would be nice to support PKCS#12 keystores as Larry mentioned above. In that case, perhaps it would make more sense to change these keys to use the same naming conventions as the existing SSL settings for the ES client:
kibana/src/core/server/elasticsearch/elasticsearch_config.ts
Lines 96 to 128 in 321b8bf
ssl: schema.object(
  {
    verificationMode: schema.oneOf(
      [schema.literal('none'), schema.literal('certificate'), schema.literal('full')],
      { defaultValue: 'full' }
    ),
    certificateAuthorities: schema.maybe(
      schema.oneOf([schema.string(), schema.arrayOf(schema.string(), { minSize: 1 })])
    ),
    certificate: schema.maybe(schema.string()),
    key: schema.maybe(schema.string()),
    keyPassphrase: schema.maybe(schema.string()),
    keystore: schema.object({
      path: schema.maybe(schema.string()),
      password: schema.maybe(schema.string()),
    }),
    truststore: schema.object({
      path: schema.maybe(schema.string()),
      password: schema.maybe(schema.string()),
    }),
    alwaysPresentCertificate: schema.boolean({ defaultValue: false }),
  },
  {
    validate: (rawConfig) => {
      if (rawConfig.key && rawConfig.keystore.path) {
        return 'cannot use [key] when [keystore.path] is specified';
      }
      if (rawConfig.certificate && rawConfig.keystore.path) {
        return 'cannot use [certificate] when [keystore.path] is specified';
      }
    },
  }
),
You wouldn't need all of those obviously, but a subset of them:
- `ssl.verificationMode`
- `ssl.certificateAuthorities`
- `ssl.truststore.path`
- `ssl.truststore.password`

`verificationMode` is used to derive the values for both `rejectUnauthorized` and `checkServerIdentity`:
kibana/src/core/server/elasticsearch/client/client_config.ts
Lines 104 to 119 in 4584a8b
const verificationMode = sslConfig.verificationMode;
switch (verificationMode) {
  case 'none':
    ssl.rejectUnauthorized = false;
    break;
  case 'certificate':
    ssl.rejectUnauthorized = true;
    // by default, NodeJS is checking the server identify
    ssl.checkServerIdentity = () => undefined;
    break;
  case 'full':
    ssl.rejectUnauthorized = true;
    break;
  default:
    throw new Error(`Unknown ssl verificationMode: ${verificationMode}`);
}
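Condensed into a standalone function, the derivation looks like this (names are illustrative; only the mapping itself follows the quoted Kibana code):

```typescript
// Sketch: mapping a Kibana-style verificationMode to Node.js TLS options.
type VerificationMode = 'none' | 'certificate' | 'full';

interface DerivedTlsOptions {
  rejectUnauthorized: boolean;
  checkServerIdentity?: () => undefined;
}

function deriveTlsOptions(verificationMode: VerificationMode): DerivedTlsOptions {
  switch (verificationMode) {
    case 'none':
      // skip certificate validation entirely
      return { rejectUnauthorized: false };
    case 'certificate':
      // validate the certificate chain, but skip hostname verification
      return { rejectUnauthorized: true, checkServerIdentity: () => undefined };
    case 'full':
      // validate both the chain and the hostname (Node's default behavior)
      return { rejectUnauthorized: true };
    default:
      throw new Error(`Unknown ssl verificationMode: ${verificationMode}`);
  }
}
```

Note that the `'certificate'` case only disables hostname verification by overriding `checkServerIdentity`; the chain itself is still validated because `rejectUnauthorized` stays `true`.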
Pointing out a better naming convention is a great way to start the weekend, thanks! Now it's your fault :-)
The only real issue is that we need a file-free way of doing this config for cloud. For example:
elasticsearch.ssl.certificateAuthorities: ["/path/to/elasticsearch-ca.pem"]
Hence, embedding the literal certs, which is ... weird, but ... interesting - no files!
Is there any chance we could somehow make use of the elastic keystore for this? Or get a kibana keystore for cloud? So a customer would have a way to upload certs + keys, and then refer back to them in config or UI.
I think for now we'll just extend the existing name: `certificateAuthoritiesData`, which a cloud customer could use, and on-prem would be able to use files.
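For illustration only - the key names here (including `certificateAuthoritiesData`) are assumptions from this thread, not a shipped config surface - the two deployment styles might look like:

```yaml
xpack.connectorsNetworking.options:
  # on-prem: point at a CA file on disk
  - url: https://onprem.example.com
    tls:
      certificateAuthorities: ["/path/to/ca.pem"]
  # cloud: no filesystem access, so embed the PEM contents directly
  - url: https://cloud.example.com
    tls:
      certificateAuthoritiesData: |
        -----BEGIN CERTIFICATE-----
        ...
        -----END CERTIFICATE-----
```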
Is there any chance we could somehow make use of the elastic keystore for this? Or get a kibana keystore for cloud? So a customer would have a way to upload certs + keys, and then refer back to them in config or UI.
ESS has a keystore equivalent for secure values, but I don't think it's really applicable here.
I think for now we'll just extend the existing name: `certificateAuthoritiesData`, which a cloud customer could use, and on-prem would be able to use files.
I think this sounds reasonable!
closed in favor of #96630
resolves #80120
Summary
Adds a new connectors_networking plugin which will manage per-server networking options like TLS settings