stop() does not turn off server-side stream #577
Hey there @ekatzenstein, I had wondered about this also, but I do not see an easy way forward on it. There really is no OpenAI API call to cancel a request midstream. If we think about it, OpenAI doesn't return anything like a transaction code that we could send back in another request to say "kill this completion." (There is a Completions parameter for `stop`, but that is for stop sequences, not cancellation.)

The utility of a `stop()` on the client is obvious even without the provider integration. Interestingly, there are cases where one would not want a client to stop a server completion even if it were possible. So, if it were possible to achieve this functionality, I think it would make sense to implement it as a union-typed option.

Finally, I do think that the overlap between this `stop()` and the `stop` parameter is worth considering.
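To illustrate the union-typed option idea, here is a hedged sketch of what such an API shape could look like. This is purely hypothetical — `StopBehavior`, `HypotheticalChatOptions`, and `resolveStop` are illustrative names, not part of the actual `ai` SDK:

```typescript
// Hypothetical sketch of a union-typed stop option (NOT the real `ai` SDK API):
// 'client' stops only the front-end hook; 'server' would also attempt to
// abort the upstream provider request.
type StopBehavior = 'client' | 'server';

interface HypotheticalChatOptions {
  stopBehavior?: StopBehavior;
}

function resolveStop(options: HypotheticalChatOptions): StopBehavior {
  // Default to 'client': never assume the provider supports cancellation.
  return options.stopBehavior ?? 'client';
}
```

Defaulting to `'client'` keeps today's behavior and makes server-side cancellation opt-in, which matches the point above that some apps would not want the client to kill the server completion.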
I'll look more into options on OpenAI's side (and other providers), thanks. Continuing the stream after the front end has stopped is the problem here. I agree with the design ideas around stop vs. cancel, though it may be confusing because OpenAI uses `stop` for stop sequences.
@ekatzenstein I've tried reproducing the issue, but for me, the OpenAI server stream stops when I abort (stop) on the client. Here is the code, based on https://github.com/vercel/ai/tree/main/examples/next-openai

page.tsx with stop button:

```tsx
'use client';

import { useChat } from 'ai/react';

export default function Chat() {
  const { messages, input, handleInputChange, handleSubmit, stop } = useChat();
  return (
    <div className="flex flex-col w-full max-w-md py-24 mx-auto stretch">
      <button onClick={stop}>Stop</button>
      {messages.length > 0
        ? messages.map(m => (
            <div key={m.id} className="whitespace-pre-wrap">
              {m.role === 'user' ? 'User: ' : 'AI: '}
              {m.content}
            </div>
          ))
        : null}
      <form onSubmit={handleSubmit}>
        <input
          className="fixed bottom-0 w-full max-w-md p-2 mb-8 border border-gray-300 rounded shadow-xl"
          value={input}
          placeholder="Say something..."
          onChange={handleInputChange}
        />
      </form>
    </div>
  );
}
```

route.ts with stream logging:

```ts
// ./app/api/chat/route.ts
import OpenAI from 'openai';
import { OpenAIStream, StreamingTextResponse } from 'ai';

// Create an OpenAI API client (that's edge friendly!)
const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY || '',
});

// IMPORTANT! Set the runtime to edge
export const runtime = 'edge';

export async function POST(req: Request) {
  // Extract the `messages` from the body of the request
  const { messages } = await req.json();

  // Ask OpenAI for a streaming chat completion given the messages
  const response = await openai.chat.completions.create({
    model: 'gpt-3.5-turbo',
    stream: true,
    messages,
  });

  // Convert the response into a friendly text-stream
  const openaiStream = OpenAIStream(response);
  const loggedStream = openaiStream.pipeThrough(createLoggingTransformStream());

  // Respond with the stream
  return new StreamingTextResponse(loggedStream);
}

// Logs each chunk as it passes through, without modifying the stream
function createLoggingTransformStream() {
  return new TransformStream({
    transform(chunk: any, controller: TransformStreamDefaultController) {
      // Log the chunk
      console.log('Received chunk:', chunk);
      // Pass the chunk along unchanged
      controller.enqueue(chunk);
    },
  });
}
```

When you abort on the client, the server stream stops.

Please note that if you use …

See also #90
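For cases where the abort does not propagate on its own, one sketch of a fix is to forward the incoming request's `AbortSignal` to the upstream call. In a real route you would pass `req.signal` through to the provider SDK (the `openai` v4 client accepts a `signal` request option; treat that exact wiring as an assumption to verify against your SDK version). The signal-forwarding mechanics, runnable with stand-ins and illustrative names:

```typescript
// Sketch: forward an incoming request's abort signal to an upstream call.
// `linkSignals` is an illustrative helper, not a library function.
function linkSignals(incoming: AbortSignal): AbortController {
  const upstream = new AbortController();
  if (incoming.aborted) {
    // Already aborted: cancel the upstream work immediately.
    upstream.abort();
  } else {
    // Otherwise, abort upstream as soon as the incoming request aborts.
    incoming.addEventListener('abort', () => upstream.abort(), { once: true });
  }
  return upstream;
}

// Simulated client request whose signal stands in for req.signal:
const client = new AbortController();
const upstream = linkSignals(client.signal);

client.abort(); // the client disconnects / presses stop
```

After `client.abort()`, `upstream.signal.aborted` is `true`, so any fetch or SDK call given `upstream.signal` would be cancelled when the browser request goes away.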
Using the example from `openai.createChatCompletion`, running the `stop()` function from the `useChat` hook will discontinue the front-end hook, but the back-end will remain active, and the OpenAI query continues to use up tokens.