feat(opentrons-ai-client & opentrons-ai-server): Feedback api (#16761)
<!--
Thanks for taking the time to open a Pull Request (PR)! Please make sure
you've read the "Opening Pull Requests" section of our Contributing
Guide:


https://github.com/Opentrons/opentrons/blob/edge/CONTRIBUTING.md#opening-pull-requests

GitHub provides robust markdown to format your PR. Links, diagrams,
pictures, and videos along with text formatting make it possible to
create a rich and informative PR. For more information on GitHub
markdown, see:


https://docs.github.com/en/get-started/writing-on-github/getting-started-with-writing-and-formatting-on-github/basic-writing-and-formatting-syntax

To ensure your code is reviewed quickly and thoroughly, please fill out
the sections below to the best of your ability!
-->

# Overview
Added an API endpoint that accepts user feedback from the AI client.

Also added a loading spinner whose progress text cycles through values every 5 seconds.
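The 5-second rotation boils down to advancing an index modulo the number of labels on each tick. A minimal Python sketch of that logic (the label strings come from this PR's localization diff; the function name is illustrative):

```python
# Sketch of the SendButton progress-label rotation: advance the index
# modulo the label count on every 5-second tick (function name is
# hypothetical; labels are the ones added in this PR).
PROGRESS_TEXTS = ["Initializing...", "Processing...", "Generating...", "Finalizing..."]

def next_label(index: int) -> tuple[int, str]:
    """Advance the index, wrapping around, and return the new index and label."""
    new_index = (index + 1) % len(PROGRESS_TEXTS)
    return new_index, PROGRESS_TEXTS[new_index]

# Starting at "Initializing..." (index 0), four ticks wrap back around
# to the first label.
index = 0
seen = []
for _ in range(4):
    index, label = next_label(index)
    seen.append(label)
# seen == ["Processing...", "Generating...", "Finalizing...", "Initializing..."]
```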

<!--
Describe your PR at a high level. State acceptance criteria and how this
PR fits into other work. Link issues, PRs, and other relevant resources.
-->

## Test Plan and Hands on Testing

<!--
Describe your testing of the PR. Emphasize testing not reflected in the
code. Attach protocols, logs, screenshots and any other assets that
support your testing.
-->

## Changelog

<!--
List changes introduced by this PR considering future developers and the
end user. Give careful thought and clear documentation to breaking
changes.
-->

## Review requests

<!--
- What do you need from reviewers to feel confident this PR is ready to
merge?
- Ask questions.
-->

## Risk assessment

<!--
- Indicate the level of attention this PR needs.
- Provide context to guide reviewers.
- Discuss trade-offs, coupling, and side effects.
- Look for the possibility, even if you think it's small, that your
change may affect some other part of the system.
- For instance, changing return tip behavior may also change the
behavior of labware calibration.
- How do your unit tests and hands-on testing mitigate this PR's
risks and the risk of future regressions?
- Especially in high risk PRs, explain how you know your testing is
enough.
-->

---------

Co-authored-by: FELIPE BELGINE <[email protected]>
connected-znaim and fbelginetw authored Nov 12, 2024
1 parent 74312f2 commit 5b0c7f4
Showing 8 changed files with 154 additions and 7 deletions.
@@ -43,6 +43,10 @@
"pcr": "PCR",
"pipettes": "Pipettes: Specify your pipettes, including the volume, number of channels, and whether they’re mounted on the left or right.",
"privacy_policy": "By continuing, you agree to the Opentrons <privacyPolicyLink>Privacy Policy</privacyPolicyLink> and <EULALink>End user license agreement</EULALink>",
"progressFinalizing": "Finalizing...",
"progressGenerating": "Generating...",
"progressInitializing": "Initializing...",
"progressProcessing": "Processing...",
"protocol_file": "Protocol file",
"provide_details_of_changes": "Provide details of changes you want to make",
"python_file_type_error": "Python file type required",
47 changes: 43 additions & 4 deletions opentrons-ai-client/src/atoms/SendButton/index.tsx
@@ -7,8 +7,11 @@ import {
COLORS,
DISPLAY_FLEX,
Icon,
JUSTIFY_CENTER,
JUSTIFY_SPACE_AROUND,
StyledText,
} from '@opentrons/components'
import { useEffect, useState } from 'react'
import { useTranslation } from 'react-i18next'

interface SendButtonProps {
handleClick: () => void
@@ -21,6 +24,15 @@ export function SendButton({
disabled = false,
isLoading = false,
}: SendButtonProps): JSX.Element {
const { t } = useTranslation('protocol_generator')

const progressTexts = [
t('progressInitializing'),
t('progressProcessing'),
t('progressGenerating'),
t('progressFinalizing'),
]

const playButtonStyle = css`
-webkit-tap-highlight-color: transparent;
&:focus {
@@ -47,20 +59,47 @@
color: ${COLORS.grey50};
}
`

const [buttonText, setButtonText] = useState(progressTexts[0])
const [, setProgressIndex] = useState(0)

useEffect(() => {
if (isLoading) {
const interval = setInterval(() => {
setProgressIndex(prevIndex => {
const newIndex = (prevIndex + 1) % progressTexts.length
setButtonText(progressTexts[newIndex])
return newIndex
})
}, 5000)

return () => {
setProgressIndex(0)
clearInterval(interval)
}
}
}, [isLoading])

return (
<Btn
alignItems={ALIGN_CENTER}
backgroundColor={disabled ? COLORS.grey35 : COLORS.blue50}
borderRadius={BORDERS.borderRadiusFull}
display={DISPLAY_FLEX}
justifyContent={JUSTIFY_CENTER}
width="4.25rem"
height="3.75rem"
justifyContent={JUSTIFY_SPACE_AROUND}
paddingX="20px"
width={isLoading ? 'fit-content' : '4.25rem'}
height="4.25rem"
disabled={disabled || isLoading}
onClick={handleClick}
aria-label="play"
css={playButtonStyle}
>
{isLoading ? (
<StyledText paddingLeft="0px" paddingRight="24px" as="i">
{buttonText}
</StyledText>
) : null}
<Icon
color={disabled ? COLORS.grey50 : COLORS.white}
name={isLoading ? 'ot-spinner' : 'send'}
53 changes: 50 additions & 3 deletions opentrons-ai-client/src/molecules/FeedbackModal/index.tsx
@@ -10,14 +10,60 @@ import {
} from '@opentrons/components'
import { useAtom } from 'jotai'
import { useTranslation } from 'react-i18next'
import { feedbackModalAtom } from '../../resources/atoms'
import { feedbackModalAtom, tokenAtom } from '../../resources/atoms'
import { useState } from 'react'
import type { AxiosRequestConfig } from 'axios'
import {
STAGING_FEEDBACK_END_POINT,
PROD_FEEDBACK_END_POINT,
LOCAL_FEEDBACK_END_POINT,
} from '../../resources/constants'
import { useApiCall } from '../../resources/hooks'

export function FeedbackModal(): JSX.Element {
const { t } = useTranslation('protocol_generator')

const [feedbackValue, setFeedbackValue] = useState<string>('')
const [, setShowFeedbackModal] = useAtom(feedbackModalAtom)
const [token] = useAtom(tokenAtom)
const { callApi } = useApiCall()

const handleSendFeedback = async (): Promise<void> => {
try {
const headers = {
Authorization: `Bearer ${token}`,
'Content-Type': 'application/json',
}

const getEndpoint = (): string => {
switch (process.env.NODE_ENV) {
case 'production':
return PROD_FEEDBACK_END_POINT
case 'development':
return LOCAL_FEEDBACK_END_POINT
default:
return STAGING_FEEDBACK_END_POINT
}
}

const url = getEndpoint()

const config = {
url,
method: 'POST',
headers,
data: {
feedbackText: feedbackValue,
fake: false,
},
}
await callApi(config as AxiosRequestConfig)
setShowFeedbackModal(false)
} catch (err: any) {
console.error(`error: ${err.message}`)
throw err
}
}

return (
<Modal
@@ -41,8 +87,9 @@ export function FeedbackModal(): JSX.Element {
</StyledText>
</SecondaryButton>
<PrimaryButton
onClick={() => {
setShowFeedbackModal(false)
disabled={feedbackValue === ''}
onClick={async () => {
await handleSendFeedback()
}}
>
<StyledText desktopStyle="bodyDefaultSemiBold">
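The environment-to-endpoint switch in `FeedbackModal` mirrors the client's existing chat-completion selection: production and development map to their own hosts, and anything else falls back to staging. The same mapping as a Python sketch (constants copied from the diff; the function name is an illustration, not part of the codebase):

```python
# Feedback endpoints as added in constants.ts; staging is the fallback
# for any NODE_ENV other than "production" or "development".
PROD_FEEDBACK_END_POINT = "https://opentrons.ai/api/chat/feedback"
LOCAL_FEEDBACK_END_POINT = "http://localhost:8000/api/chat/feedback"
STAGING_FEEDBACK_END_POINT = "https://staging.opentrons.ai/api/chat/feedback"

def get_feedback_endpoint(node_env: str) -> str:
    """Map the build environment to the matching feedback endpoint."""
    if node_env == "production":
        return PROD_FEEDBACK_END_POINT
    if node_env == "development":
        return LOCAL_FEEDBACK_END_POINT
    return STAGING_FEEDBACK_END_POINT
```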
5 changes: 5 additions & 0 deletions opentrons-ai-client/src/resources/constants.ts
@@ -1,7 +1,10 @@
// ToDo (kk:05/29/2024) this should be switched by env var
export const STAGING_END_POINT =
'https://staging.opentrons.ai/api/chat/completion'
export const STAGING_FEEDBACK_END_POINT =
'https://staging.opentrons.ai/api/chat/feedback'
export const PROD_END_POINT = 'https://opentrons.ai/api/chat/completion'
export const PROD_FEEDBACK_END_POINT = 'https://opentrons.ai/api/chat/feedback'

// auth0 domain
export const AUTH0_DOMAIN = 'identity.auth.opentrons.com'
@@ -19,5 +22,7 @@ export const LOCAL_AUTH0_CLIENT_ID = 'PcuD1wEutfijyglNeRBi41oxsKJ1HtKw'
export const LOCAL_AUTH0_AUDIENCE = 'sandbox-ai-api'
export const LOCAL_AUTH0_DOMAIN = 'identity.auth-dev.opentrons.com'
export const LOCAL_END_POINT = 'http://localhost:8000/api/chat/completion'
export const LOCAL_FEEDBACK_END_POINT =
'http://localhost:8000/api/chat/feedback'

export const CLIENT_MAX_WIDTH = '1440px'
32 changes: 32 additions & 0 deletions opentrons-ai-server/api/handler/fast.py
@@ -24,6 +24,7 @@
from api.models.chat_request import ChatRequest
from api.models.chat_response import ChatResponse
from api.models.empty_request_error import EmptyRequestError
from api.models.feedback_response import FeedbackResponse
from api.models.internal_server_error import InternalServerError
from api.settings import Settings

@@ -249,6 +250,37 @@ async def redoc_html() -> HTMLResponse:
return get_redoc_html(openapi_url="/api/openapi.json", title="Opentrons API Documentation")


@app.post(
"/api/chat/feedback",
response_model=Union[FeedbackResponse, ErrorResponse],
summary="Feedback",
description="Send feedback to the team.",
)
async def feedback(request: Request, auth_result: Any = Security(auth.verify)) -> FeedbackResponse: # noqa: B008
"""
Send feedback to the team.
- **request**: The HTTP request containing the feedback message.
- **returns**: A feedback response or an error message.
"""
logger.info("POST /api/chat/feedback")
try:
body = await request.json()
if "feedbackText" not in body or body["feedbackText"] == "":
logger.info("Feedback empty")
raise HTTPException(status_code=status.HTTP_400_BAD_REQUEST, detail=EmptyRequestError(message="Request body is empty").model_dump())
logger.info(f"Feedback received: {body}")
feedbackText = body["feedbackText"]
# todo: Store feedback text in a database
return FeedbackResponse(reply=f"Feedback Received: {feedbackText}", fake=False)

except Exception as e:
logger.exception("Error processing feedback")
raise HTTPException(
status_code=status.HTTP_500_INTERNAL_SERVER_ERROR, detail=InternalServerError(exception_object=e).model_dump()
) from e


@app.get("/api/doc", include_in_schema=False)
async def swagger_html() -> HTMLResponse:
return get_swagger_ui_html(openapi_url="/api/openapi.json", title="Opentrons API Documentation")
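The handler's request validation reduces to a small pure function: reject a missing or empty `feedbackText`, otherwise echo it back in the reply. A sketch under that reading (the function name and the `ValueError` stand-in for the HTTP 400 are illustrative, not the server's exact API):

```python
def build_feedback_reply(body: dict) -> str:
    """Validate the request body the way the feedback handler does.

    Raises ValueError where the endpoint would answer HTTP 400.
    """
    feedback_text = body.get("feedbackText", "")
    if feedback_text == "":
        raise ValueError("Request body is empty")
    return f"Feedback Received: {feedback_text}"
```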
6 changes: 6 additions & 0 deletions opentrons-ai-server/api/models/feedback_response.py
@@ -0,0 +1,6 @@
from pydantic import BaseModel


class FeedbackResponse(BaseModel):
reply: str
fake: bool
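A successful response therefore carries exactly the two fields `FeedbackResponse` declares. A plain-dict shape check (an illustration of the contract, not the pydantic class itself):

```python
def looks_like_feedback_response(payload: dict) -> bool:
    """Return True if the payload matches the FeedbackResponse shape:
    a string "reply" and a boolean "fake"."""
    return isinstance(payload.get("reply"), str) and isinstance(payload.get("fake"), bool)

# Example payload in the shape the endpoint returns (the reply text
# here is hypothetical).
example = {"reply": "Feedback Received: Great tool!", "fake": False}
```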
5 changes: 5 additions & 0 deletions opentrons-ai-server/tests/helpers/client.py
@@ -68,6 +68,11 @@ def get_chat_completion(self, message: str, fake: bool = True, fake_key: Optiona
headers = self.standard_headers if not bad_auth else self.invalid_auth_headers
return self.httpx.post("/chat/completion", headers=headers, json=request.model_dump())

def get_feedback(self, message: str, fake: bool = True) -> Response:
"""Call the /chat/feedback endpoint and return the response."""
request = {"feedbackText": message, "fake": fake}
return self.httpx.post("/chat/feedback", headers=self.standard_headers, json=request)

def get_bad_endpoint(self, bad_auth: bool = False) -> Response:
"""Call nonexistent endpoint and return the response."""
headers = self.standard_headers if not bad_auth else self.invalid_auth_headers
9 changes: 9 additions & 0 deletions opentrons-ai-server/tests/test_live.py
@@ -1,5 +1,6 @@
import pytest
from api.models.chat_response import ChatResponse
from api.models.feedback_response import FeedbackResponse

from tests.helpers.client import Client

@@ -26,6 +27,14 @@ def test_get_chat_completion_bad_auth(client: Client) -> None:
assert response.status_code == 401, "Chat completion with bad auth should return HTTP 401"


@pytest.mark.live
def test_get_feedback_good_auth(client: Client) -> None:
"""Test the feedback endpoint with good authentication."""
response = client.get_feedback("How do I load tipracks for my 8 channel pipette on an OT2?", fake=True)
assert response.status_code == 200, "Feedback with good auth should return HTTP 200"
FeedbackResponse.model_validate(response.json())


@pytest.mark.live
def test_get_bad_endpoint_with_good_auth(client: Client) -> None:
"""Test a nonexistent endpoint with good authentication."""
