feat: add assistant experiment #4
Conversation
Awesome.
> as it seems wise to not mix our experiment code, even where we repeat ourselves
👍🏽 makes sense to go with this approach until/unless we explicitly want to build up any shared library code.
```diff
@@ -1 +1,4 @@
 OPENAI_API_KEY=REPLACE
+
+# For Experiment 02-assistant
+ASSISTANT_ID=REPLACE
```
question (non-blocking): This is more of an identifier than a secret, yeah? Should we consider hard-coding a default here, or does it make more sense to leave this blank to allow devs to iterate in parallel without conflicts?
> This is more of an identifier than a secret, yeah?
Exactly. Just config. I wasn't sure if there is any way it could be abused, and didn't want to check it into the repo without being more confident (I would imagine it's likely fine and doesn't work without our API key).
> or does it make more sense to leave this blank to allow devs to iterate in parallel without conflicts?
Yeah, this was also part of the reasoning.
Happy to remove if we would like to keep `.env` purely for secrets. Not strongly opinionated here.
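For what it's worth, the way I'd expect this to be used (names below are illustrative, not necessarily what the experiment does): if `ASSISTANT_ID` is set we reuse that assistant, otherwise the script can spin up a fresh one, which is what lets devs iterate in parallel without stepping on each other.

```js
import OpenAI from "openai"

const openai = new OpenAI() // picks up OPENAI_API_KEY from the environment

// Reuse the configured assistant if ASSISTANT_ID is set,
// otherwise create a throwaway one for this run.
// (Model and instructions here are placeholders, not the experiment's actual values.)
const assistant = process.env.ASSISTANT_ID
  ? await openai.beta.assistants.retrieve(process.env.ASSISTANT_ID)
  : await openai.beta.assistants.create({
      name: "02-assistant (scratch)",
      model: "gpt-4o",
      instructions: "Respond like you work at artsy.net.",
    })
```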
Makes sense this way
```js
let run = await openai.beta.threads.runs.createAndPoll(thread.id, {
  assistant_id: assistant.id,
  instructions:
    "Respond like you work at artsy.net. Always provide a list of artists and include the link to their profile. Always check artsy before making a recommendation.",
```
question (non-blocking): Does the "Always check artsy" bit force any kind of live web-browsing behavior? Does the assistant have that capability? (Wasn't clear to me that it did, from the `create` statement above.)
This is more an artifact of my testing different instructions and trying to see if I could force it to use one of the tools. There was no noticeable difference across the small number of test prompts, however, so it can probably be removed.
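For what it's worth, my understanding (worth double-checking against the docs) is that capabilities come from the `tools` array passed at creation rather than from the instructions text, so "check artsy" would only become real behavior if we exposed it as a function tool ourselves, roughly like:

```js
// Sketch only: the tool name and schema below are hypothetical.
// Built-in tool types are code_interpreter and file_search; there is no
// general web-browsing tool, so the instructions alone can't trigger one.
const assistant = await openai.beta.assistants.create({
  name: "02-assistant",
  model: "gpt-4o",
  instructions: "Respond like you work at artsy.net.",
  tools: [
    {
      type: "function",
      function: {
        name: "search_artsy", // hypothetical function tool
        description: "Search artsy.net for artists matching a query",
        parameters: {
          type: "object",
          properties: { query: { type: "string" } },
          required: ["query"],
        },
      },
    },
  ],
})
```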
```js
if (run.status === "completed") {
  const messages = await openai.beta.threads.messages.list(run.thread_id)
  for (const message of messages.data.reverse()) {
```
question (non-blocking): Why `reverse` the messages here?
This block came from their docs, so I'm not 100% sure, but my guess is that by default the messages come in reverse chronological order, which would be confusing when printed out.
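If we wanted to skip the `reverse()`, I believe (worth confirming against the SDK version we're on) the list call also accepts an `order` param so the API returns them oldest-first:

```js
// Ask the API for chronological order instead of reversing client-side.
const messages = await openai.beta.threads.messages.list(run.thread_id, {
  order: "asc",
})

for (const message of messages.data) {
  // Assumes the first content block is text.
  console.log(`${message.role} > ${message.content[0].text.value}`)
}
```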
```js
    .arguments || "null"
)

console.log(`Calling function: ${name} with args: ${JSON.stringify(args)}`)
```
Good idea 👍🏽
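For anyone skimming this later, the log line above sits inside a tool-call loop shaped roughly like this (sketched from the SDK docs rather than copied from this diff; `handlers` is a placeholder lookup of our local functions):

```js
if (run.status === "requires_action") {
  const toolOutputs = run.required_action.submit_tool_outputs.tool_calls.map(
    (toolCall) => {
      const name = toolCall.function.name
      // arguments is a JSON string and may be empty, hence the "null" fallback
      const args = JSON.parse(toolCall.function.arguments || "null")

      console.log(`Calling function: ${name} with args: ${JSON.stringify(args)}`)

      return {
        tool_call_id: toolCall.id,
        output: JSON.stringify(handlers[name](args)), // placeholder dispatch
      }
    }
  )

  // Send the function results back and poll until the run finishes.
  run = await openai.beta.threads.runs.submitToolOutputsAndPoll(
    run.thread_id,
    run.id,
    { tool_outputs: toolOutputs }
  )
}
```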
This PR adds an experiment for OpenAI's "assistant" feature. I just dropped in the function calls from #2, as it seems preferable not to mix our experiment code, even where we repeat ourselves.
Some general observations and notes about assistants surfaced through this experiment will be 🔒 here.
Considerations
The output here is expectedly poor, as this experiment is not trying to build the best response possible, but rather to spin up an assistant that makes reasonable judgements about which of the tools to utilize and which args to use.
Examples