Have you verified whether the abort signal is actually functioning in the edge runtime? #90
Comments
Update: I have done some testing, and I am coming to the same conclusion. I was very excited to see the release of this library today, particularly this documentation: https://sdk.vercel.ai/docs/concepts/backpressure-and-cancellation. Last month I was asked to add cancellation to a product I am working on for a client. I attempted to get this working with edge functions but could not receive the cancel callback on the server. To see if I could get it working at all, I implemented a version using Deno, since it has a similar runtime API. At that point I sent a support request to Vercel, and about two weeks later it was confirmed that the upstream provider (Cloudflare or AWS, I assume) did not support the abort signal and was aware of this limitation. I moved our OpenAI streaming endpoints to a Fastify server running Node 20 on an AWS ECS cluster. This setup has allowed me to handle cancellations properly: I can cancel my request to OpenAI, which saves me tokens, and write whatever tokens were received to the db with a "cancel" type. I did not expect the edge function to receive the AbortController signal, even though the examples use this as the cancellation mechanism, and I found that the AIStream doesn't use it either. I am using the pages directory in our app, assuming that the examples only use the new app directory to highlight it rather than it being a requirement. |
@jensen were you able to confirm this behavior? Based on my experiments, it seems that sending a cancel signal to OpenAI does not actually reduce the token usage of the request... related: openai/openai-node#134 |
There are a lot of moving pieces, but I have successfully cancelled the request using both Deno and Fastify with Node 20. I have my own OpenAI API account that has no other traffic. I use it to confirm that a long completion that is cancelled uses the number of tokens I calculate with tiktoken, rather than what it would have used if it had finished. My prompt would be "Write a blog post about React". When I cancel it after a few sentences, the usage on the OpenAI dashboard matches, after a delay of roughly 10 minutes. |
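For reference, a minimal sketch of that kind of token count, assuming the npm `tiktoken` package (a port of OpenAI's tokenizer); the model name is illustrative:

```ts
// Rough sketch: count the tokens of the partial completion received before
// cancellation, so the number can be compared against the OpenAI usage dashboard.
// Assumes the npm "tiktoken" package; the model name is illustrative.
import { encoding_for_model } from "tiktoken";

export function countCompletionTokens(partialCompletion: string): number {
  const enc = encoding_for_model("gpt-3.5-turbo");
  try {
    return enc.encode(partialCompletion).length;
  } finally {
    enc.free(); // the WASM-backed encoder must be freed explicitly
  }
}
```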
It's good to know that cancellation is indeed possible with OpenAI's API. Now the next steps lie on the Vercel / Next.js side... |
It looks like this was released today. https://github.com/vercel-labs/ai-chatbot It has a stop-generating button https://chat.vercel.ai/ which makes sense since the SDK has a stop API through the hooks. I guess my next step is to clone this, set up some logging and deploy it to Vercel to double-check how it is behaving on the server. Perhaps the development environment isn't a good one to test this in, it hasn't been in the past. |
Thank you for taking the time to verify this. However, based on the source code, I predict that it won't work. I'm eagerly anticipating the result. |
Seems like it stops new tokens from being displayed on the screen, but token production still goes on between Vercel Edge and OpenAI, so you'll be charged for the full amount and hit rate limits while many generations are still running in the background. |
I was able to spend some time testing this tonight. I deployed a version of the ai-chatbot application with some additional logging. I am confident the cancellation works as intended when using the edge runtime. Tokens are not generated on the server, in the background, after cancellation. This is excellent news. I haven't done as much testing as I want to for our production application, but my next step will be to test my streaming changes using our staging environment. I likely won't get to this in the next few days since we have already shipped our cancellation feature using AWS. I will still want to move these endpoints back to Vercel in July if I can. |
Seems related: vercel/edge-runtime#396 Hey @jridgewell , seems what you're working on is related to the issue discussed here? |
Hi! Yes, I'm working on getting proper streaming cancellation and back-pressure into Next.js. If you're using Next as your dev/production server, it's not currently possible to end the stream. Once vercel/next.js#51330 is merged and released, this should be fixed. |
### What?

This is an alternative to #51330, which only supports aborting the response (it doesn't support back-pressure). If the client cancels the request (HMR update, navigating to another page, etc.), we'll be able to detect that and stop pulling data from the dev's `ReadableStream` response.

### Why?

We want to allow API routes to stream data coming from another server (eg, AI services). The responses from these other servers can be long running and expensive. In the case the browser aborts the connection, it's critical that we stop streaming data as soon as possible.

### How?

By checking whether `response.closed` is set during the `for await (…)` iteration, we're able to detect that the client has aborted the connection. Cleanup of the `ReadableStream` is handled implicitly by the async iterator when the loop ends. The one catch is our use of http-proxy for worker processes. It does not properly detect a client disconnecting (but does handle back-pressure). In order to fix that, I've manually added event listeners to detect the disconnect and cancel the proxied req/res pair.

Re: [WEB-1185](https://linear.app/vercel/issue/WEB-1185) (we still need back-pressure)

Fixes #50364
Fixes vercel/ai#90
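A rough sketch of the detection pattern that PR describes, with illustrative names (`pipeWithAbortDetection`, `body`, `res`) rather than actual Next.js internals:

```ts
// Sketch of the abort-detection pattern described above: stop pulling from the
// ReadableStream as soon as the underlying HTTP response reports that the
// client has gone away. The names here are illustrative, not Next.js internals.
import type { ServerResponse } from "node:http";

async function pipeWithAbortDetection(
  body: ReadableStream<Uint8Array>,
  res: ServerResponse
): Promise<void> {
  let clientClosed = false;
  res.on("close", () => {
    clientClosed = true;
  });

  // Web ReadableStreams are async-iterable in Node 18+.
  for await (const chunk of body as unknown as AsyncIterable<Uint8Array>) {
    if (clientClosed || res.destroyed) {
      // Client aborted: break out so the async iterator's return() cancels
      // the ReadableStream and, transitively, the upstream producer.
      break;
    }
    res.write(chunk);
  }
  res.end();
}
```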
Hi all! We've merged vercel/next.js#51594, which implements cancellation only. We'll work on getting back-pressure support after verifying its impact on Next's general streaming performance (Next is mainly for streaming React components, and we need to make sure that isn't taking a hit). I don't think it's going to be an issue, but I just need some time to verify. |
@jridgewell Anything special devs need to do to use cancellation? And what is back-pressure, and what do we need to do to accommodate it? I work on one of the popular open-source GPT UIs (https://github.com/enricoros/big-agi) and I'm sure devs like us appreciate your fix. |
The Next.js team hasn't released a new version yet (I'll ping them to see if they can do a canary release), but once they have, users just need to update their Next.js dependency.
Back-pressure is explained in https://sdk.vercel.ai/docs/concepts/backpressure-and-cancellation. Essentially, it's the ability for the server to pause the stream because the client doesn't need more data yet. Next.js hasn't added support for it yet, but when they do, everyone will need to update their |
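As a rough illustration of what back-pressure means in practice (this mirrors the pull-based pattern from the linked docs, not Next.js internals), a pull-based ReadableStream only advances the upstream iterator when the consumer asks for the next chunk:

```ts
// Illustrative sketch of a pull-based stream that respects back-pressure:
// the upstream iterator is only advanced when the consumer requests the next
// chunk, instead of data being pushed eagerly into the stream's buffer.
function iteratorToStream(iterator: AsyncIterator<Uint8Array>): ReadableStream<Uint8Array> {
  return new ReadableStream<Uint8Array>({
    async pull(controller) {
      const { value, done } = await iterator.next();
      if (done) {
        controller.close();
      } else {
        controller.enqueue(value);
      }
    },
    async cancel() {
      // Invoked when the client aborts; stop the upstream iterator as well.
      await iterator.return?.();
    },
  });
}
```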
@jridgewell thanks for the explanation. Will try out the canary when available. I'm glad it doesn't require code changes (maybe some exception handling?) I was trying with AbortControllers and exceptions everywhere, but nothing worked for me wrt cancellations. |
v13.4.8-canary.0 just got released. If you update your project dependency, your dev server will support cancellation, and when you deploy that change your prod server should get it too. |
@jridgewell I tried canary.0 and .1, but somehow it is not working for me: the server continues to pull events from OpenAI and feed pieces down the ReadableStream controller, despite the client browser window being closed. When I close the socket to the server (physically closing the Chrome window of the edge function caller), this is what I see in the log of the dev server, and the code that prints the streaming events (the error is printed by the edge server) is within the ReadableStream.
I must be doing something wrong. |
Maybe catch the AbortError on the server and expect it to happen by returning null or something? Like the client hooks do: https://github.com/vercel-labs/ai/blob/main/packages/core/react/use-completion.ts#L179 Haven't played around with it just yet, just watching this issue 👀 Edit: nvm the above, I just tried it and can't seem to catch that error. Edit 2: unfortunately it does not seem to abort the request to OpenAI either, as also stated above. The token usage reported on the OpenAI usage page is just too high for some aborted streams I just tested; it reports the usage as if I did not abort anything. |
Tried catching the AbortError, but it doesn't catch anything. Agreed, the token usage keeps skyrocketing, a sign that the request to the OpenAI servers keeps going... |
Hi @enricoros: I'm not sure where your code is coming from, can you provide a link? Just based on reading the screenshot, it looks like you are keeping the connection alive by eagerly pulling the data out of the upstream response. |
Ok I will try it and report. Code below: https://github.com/enricoros/big-agi/blob/main/pages/api/openai/stream-chat.ts#L112 |
Reading your code, it's definitely possible to switch to a
|
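Judging from later comments that describe an implementation "inspired by AIStream, using a TransformStream", a sketch of that kind of approach might look like the following; the endpoint, model, and request shape are assumptions for illustration, not the code under discussion:

```ts
// Sketch of a TransformStream-based streaming handler for the Edge runtime.
// The OpenAI endpoint, model, and request shape here are assumptions for
// illustration, not the actual code discussed in this thread.
export const config = { runtime: "edge" };

export default async function handler(req: Request): Promise<Response> {
  const messages = await req.json(); // assume the client posts a messages array

  const upstream = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({ model: "gpt-3.5-turbo", stream: true, messages }),
    signal: req.signal, // forward the incoming request's abort signal upstream
  });

  // A pass-through TransformStream keeps the chain pull-based, so when the
  // client aborts, cancellation propagates back through pipeThrough() to the
  // upstream fetch body instead of the handler eagerly draining it.
  const passThrough = new TransformStream<Uint8Array, Uint8Array>();

  return new Response(upstream.body!.pipeThrough(passThrough), {
    headers: { "Content-Type": "text/event-stream" },
  });
}
```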
To hook into this conversation: my implementation looks like this: https://sdk.vercel.ai/docs/api-reference/openai-stream#chat-model-example . So with all the methods the AI SDK provides, on the latest Next.js canary, using the pages directory. I also tried passing in a signal. Edit: I also tried manually aborting, using a timeout of a few seconds with a new AbortController inside the Edge Function and then passing the signal into the request options. |
@jvandenaardweg: It seems the And, I've discovered that the |
@jridgewell awesome! Appreciate the quick response on this! Looking forward to trying it out 👍 Also, could you re-open this issue until there's verification that it works? |
@jridgewell hi I did not detect the same problem in version v13.4.8-canary.5. |
…s for real cancellations. This implementation has been largely inspired by the Vercel AI (stream) SDK, available at https://github.com/vercel-labs/ai/, and in particular by the work of @jridgewell on vercel/ai#90 and related issues. As soon as some pending changes land in edge-runtime and Next.js, we'll have full stream cancellation and token savings. #57
Thanks for your help @jridgewell. Our app is now ported to use back-pressure and cancellation, as you suggested. https://github.com/enricoros/big-agi/blob/490f8bdac30267662bee6b853ec8a3a303d2ab13/pages/api/llms/stream.ts#L141 I looked at the test results (for when the client closes the connection):
Great progress - thanks! |
13.4.8 is out now, which fixes both issues from #90 (comment).
With vercel/next.js#51944 (released in v13.4.8-canary.12) and |
Thank you for all of your work on this @jridgewell. This closes a long-standing support ticket I opened in April and allows me to give my client some options. |
@jridgewell: tested with the new release. In our implementation (which is inspired by AIStream, using a TransformStream), I can see the connection from the Node process to the OpenAI servers stopping. GREAT! There's still an error message on the console (`- error uncaughtException: Error: aborted`), and maybe others won't see that, but apart from the scare effect, all the new changes seem to be working well! Our abort on the (browser) client side stops the TransformStream on the (edge) server side, and the fetch to the OpenAI servers stops transmitting bytes too! Well done @jridgewell! |
Thanks @jridgewell, confirmed it works! Token usage reported on the OpenAI website matches what you would expect when you cancel the stream. One future improvement could be to catch the abort in the Edge Function so we don't have an uncaught error, and allow handling the abort, if that's even possible. The abort error:
In my use case I need to keep track of how many tokens are used, including when the stream is aborted. I already do this when the stream starts, but I think the aborted case would fit better as a new issue here on GitHub. Many thanks all! |
We're on the same use cases @jvandenaardweg 👍 good request!! |
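A hedged sketch of one way to track partial usage on abort, assuming a stream-wrapping helper and the npm `tiktoken` package; the helper and callback names are hypothetical, not AI SDK APIs:

```ts
// Hypothetical sketch: wrap the upstream body so the text received so far can
// be token-counted when the client cancels. Helper names are illustrative.
import { encoding_for_model } from "tiktoken";

function countTokens(text: string): number {
  const enc = encoding_for_model("gpt-3.5-turbo");
  try {
    return enc.encode(text).length;
  } finally {
    enc.free();
  }
}

export function withAbortUsageTracking(
  upstream: ReadableStream<Uint8Array>,
  onAbort: (tokensSoFar: number) => void
): ReadableStream<Uint8Array> {
  const reader = upstream.getReader();
  const decoder = new TextDecoder();
  let received = "";

  return new ReadableStream<Uint8Array>({
    async pull(controller) {
      const { value, done } = await reader.read();
      if (done) {
        controller.close();
        return;
      }
      // Note: in a real implementation you would parse the SSE events and
      // count only the completion text, not the raw protocol framing.
      received += decoder.decode(value, { stream: true });
      controller.enqueue(value);
    },
    async cancel(reason) {
      // The client aborted: record the tokens received so far, then cancel upstream.
      onAbort(countTokens(received));
      await reader.cancel(reason);
    },
  });
}
```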
Thanks to the Vercel team (@jridgewell), an interruption of the stream on the client side will lead to the cancellation of the TransformStream on the server side, which in turn cancels the open fetch() to the upstream. This was a long-needed change and we are happy to report that it works well. Related: #114 - vercel/ai#90 - vercel/edge-runtime#428 - trpc/trpc#4586 (enormous thanks to the tRPC team for issuing a quick release as well)
If you found |
I'm working on Node 16 support in vercel/next.js#52281, it's currently blocked on a test that only fails in CI. |
# Summary

Edge-runtime 2.4.4 has many bug fixes, but most importantly it adds support for stream cancellation to the edge runtime. This is extremely important since a lot of projects are using `streams` related to `ai`. They currently have no way of handling a cancellation coming from the client. This was introduced to `next` as described by this comment: vercel/ai#90 (comment) You can find the PR for that here: vercel/next.js#51727 It also has a good description of what we're trying to do here, but for people not using `next`.

# Problem

When a client sends an abort signal, it is currently not being handled by edge functions. This was fixed in [email protected]

# Solution

Update the package
@jvandenaardweg Same here. Any resolution yet? |
Unfortunately no. Just 2 possible workarounds I can think of:
I currently have option 1 in place in my app. Not optimal, but in terms of dev time the easiest I think. |
I have something similar to option 1, but I need to move it to the server. Guess I will remove the possibility to abort for now. |
Hm, how did you guys manage to make |
Based on my experiments so far, it appears that the AbortController doesn't function properly on Vercel's hosted edge runtime. This observation aligns with the issue discussed in detail on the following GitHub issue: vercel/next.js#50364.