[Security AI Assistant] Removed connectorTypeTitle from the Conversation API required params; replaced its usage with actionsClient on the server and an API call on the client #179117
Conversation
@@ -7,6 +7,7 @@
import React, { useEffect, useMemo, useRef } from 'react';
import { EuiFlexGroup, EuiFlexItem } from '@elastic/eui';
import { useFetchConnectorsQuery } from '../../../detection_engine/rule_management/api/hooks/use_fetch_connectors_query';
nit: move `useFetchConnectorsQuery` to `/security_solution/public/common/`
Even better would be to add a `featureId` argument to `useFetchConnectorTypesQuery` and pass the `featureId` to `fetchConnectorTypes` to use as the `feature_id` argument. Then we could call it like `useFetchConnectorTypesQuery(GenerativeAIForSecurityConnectorFeatureId)`. Available `featureId`s:
kibana/x-pack/plugins/actions/common/connector_feature_config.ts (lines 24 to 29 in 2759182):

export const AlertingConnectorFeatureId = 'alerting';
export const CasesConnectorFeatureId = 'cases';
export const UptimeConnectorFeatureId = 'uptime';
export const SecurityConnectorFeatureId = 'siem';
export const GenerativeAIForSecurityConnectorFeatureId = 'generativeAIForSecurity';
export const GenerativeAIForObservabilityConnectorFeatureId = 'generativeAIForObservability';
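For illustration, a minimal TypeScript sketch of what a `featureId`-aware `fetchConnectorTypes` could look like. The `ConnectorType` shape and the mock registry below are invented for the example and are not the actual Kibana implementation:

```typescript
// Sketch: filtering connector types by an optional featureId argument.
// Feature IDs match the constants in connector_feature_config.ts above;
// everything else here is a stand-in for the real server-side registry.

const GenerativeAIForSecurityConnectorFeatureId = 'generativeAIForSecurity';
const AlertingConnectorFeatureId = 'alerting';

interface ConnectorType {
  id: string;
  name: string;
  supportedFeatureIds: string[];
}

// Mock registry standing in for the server-side connector type list.
const connectorTypes: ConnectorType[] = [
  { id: '.bedrock', name: 'Amazon Bedrock', supportedFeatureIds: [GenerativeAIForSecurityConnectorFeatureId] },
  { id: '.gen-ai', name: 'OpenAI', supportedFeatureIds: [GenerativeAIForSecurityConnectorFeatureId] },
  { id: '.email', name: 'Email', supportedFeatureIds: [AlertingConnectorFeatureId] },
];

// With no featureId, return everything (the current behavior);
// with one, narrow the list to connector types supporting that feature.
function fetchConnectorTypes(featureId?: string): ConnectorType[] {
  if (!featureId) return connectorTypes;
  return connectorTypes.filter((ct) => ct.supportedFeatureIds.includes(featureId));
}

console.log(fetchConnectorTypes(GenerativeAIForSecurityConnectorFeatureId).map((ct) => ct.id));
// [ '.bedrock', '.gen-ai' ]
```

A hook wrapper would then just forward the argument as the `feature_id` query parameter.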
Actually, maybe we should use `useLoadActionTypesQuery` from `triggers_actions_ui`. We'd just need to add an arg for `featureId` and default it to `AlertingConnectorFeatureId`. It could use the existing function `loadActionTypes` instead of `fetchConnectorTypes`. (Edit: don't do it; I tried locally and there were too many unexpected changes.)
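As a rough sketch of the defaulted-argument idea, before it was abandoned. The route path and query parameter below are assumptions for illustration, not the real `triggers_actions_ui` contract:

```typescript
// Sketch: give the loader an optional featureId that defaults to the
// alerting feature, so existing callers keep their current behavior.
// The path shape is hypothetical, not the actual Kibana actions route.

const AlertingConnectorFeatureId = 'alerting';

function buildLoadActionTypesPath(featureId: string = AlertingConnectorFeatureId): string {
  return `/api/actions/connector_types?feature_id=${encodeURIComponent(featureId)}`;
}

console.log(buildLoadActionTypesPath());
// /api/actions/connector_types?feature_id=alerting
console.log(buildLoadActionTypesPath('generativeAIForSecurity'));
// /api/actions/connector_types?feature_id=generativeAIForSecurity
```

Because the parameter has a default, callers that never pass a `featureId` are source-compatible with the change.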
x-pack/packages/kbn-elastic-assistant/impl/connectorland/use_load_connectors/index.tsx
@@ -435,7 +435,6 @@ export default function bedrockTest({ getService }: FtrProviderContext) {
  message: 'Hello world',
  isEnabledKnowledgeBase: false,
  isEnabledRAGAlerts: false,
  llmType: 'bedrock',
We should decouple this test from our GenAI code so we aren't triggering a ResponseOps review for changes to the internal execute API.
The reason this test is here is because I wanted to add a test for the Bedrock subaction invokeStream. However, the integration tests here use the execute route, which does not support streaming. In order to call this subaction, one must implement their own route that calls the actions API directly. The only places where that exists are within solutions code. That is why the test calls the security GenAI execute API instead of the actions execute API.
It would be nice if the actions team could extend the execute route to support streaming. That way, we could update this test to call the actions execute API, and ResponseOps would not have to review every change to our GenAI route. @ymao1 I cannot remember if there is an existing issue to add streaming support to the execute route. Are there plans for that? Maybe we could skip this test until that is implemented?
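To illustrate the gap being discussed, here is a self-contained sketch contrasting a streamed response with the buffered shape a non-streaming execute route returns. The async generator merely stands in for an LLM producing tokens; none of this is Kibana code:

```typescript
// The generator stands in for an LLM emitting tokens over time.
async function* tokenStream(): AsyncGenerator<string> {
  for (const token of ['Hello', ' ', 'world']) {
    yield token;
  }
}

// Streaming consumer: each chunk can be forwarded to the caller as it
// arrives (e.g. written to an HTTP response stream).
async function consumeStream(onChunk: (chunk: string) => void): Promise<string> {
  let full = '';
  for await (const chunk of tokenStream()) {
    onChunk(chunk);
    full += chunk;
  }
  return full;
}

// Buffered consumer: the shape a non-streaming execute route supports —
// nothing reaches the caller until the whole message is assembled.
async function consumeBuffered(): Promise<string> {
  let full = '';
  for await (const chunk of tokenStream()) {
    full += chunk;
  }
  return full;
}

(async () => {
  const chunks: string[] = [];
  const streamed = await consumeStream((c) => chunks.push(c));
  const buffered = await consumeBuffered();
  console.log(chunks);                     // [ 'Hello', ' ', 'world' ]
  console.log(streamed === buffered);      // true: same final payload,
})();                                      // different delivery
```

The final payload is identical either way; the difference is only whether intermediate chunks are observable, which is what invokeStream needs and the buffered route cannot provide.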
I don't believe we have an existing issue for this. Feel free to create one and we will prioritize as capacity allows or PR contributions are welcome as well!
@@ -85,7 +84,7 @@ export default ({ getService }: FtrProviderContext) => {
  it('should execute a chat completion', async () => {
    const response = await postActionsClientExecute(
      openaiActionId,
-     { ...mockRequest, llmType: 'openai' },
+     { ...mockRequest },
Suggested change: replace `{ ...mockRequest },` with `mockRequest,`
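A quick note on why the spread is redundant here: `{ ...mockRequest }` only makes a shallow copy, so for a read-only call it carries exactly the same data as passing `mockRequest` directly. A tiny sketch:

```typescript
// Sketch: object spread with no extra properties is just a shallow copy.
// mockRequest here is a stand-in, not the real test fixture.

const mockRequest = { message: 'Hello world', isEnabledKnowledgeBase: false };

const spreadCopy = { ...mockRequest };

console.log(JSON.stringify(spreadCopy) === JSON.stringify(mockRequest)); // true: same data
console.log(spreadCopy === mockRequest); // false: a new object, which only
// matters if the callee relies on identity or mutates its argument
```

Since `postActionsClientExecute` presumably does neither, passing `mockRequest` directly is the simpler form.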
LGTM! Left a few suggestions but nothing critical.
Response Ops changes LGTM
💚 Build Succeeded
This PR fixes the bug mentioned here and removes the `connectorTypeTitle`/`llmType` params from the AI assistant APIs. Streaming works as it did before:
Screen.Recording.2024-03-20.at.7.41.12.PM.mov