Provide a callback whenever retry is triggered #1190
Comments
Thanks for the suggestion – I don't think we have a great way to do this right now, but it seems like a reasonable idea. I don't expect to get to it soon, but will keep the issue open.
Can we just pass a callback into the client constructor like this, and then call it from the retry logic whenever a retry is triggered?
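For illustration, something along these lines – the on_retry hook below is hypothetical, nothing like it exists in the client today:

def on_retry(attempt: int, error: Exception) -> None:
    # What a retry hook might receive: the attempt number and the error
    # (or failed response) that triggered the retry.
    print(f"Retry #{attempt} triggered by: {error!r}")

# Hypothetical usage – openai.OpenAI() does not accept such an argument today:
# client = openai.OpenAI(on_retry=on_retry)
# ...and inside the SDK's retry loop, the library would invoke on_retry(...) before each retry.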
I doubt that's how we'd design it. You could use httpx event_hooks: https://www.python-httpx.org/advanced/event-hooks/
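For example, a sketch of that approach – the log_response hook and its print are illustrative, and note that passing a bare httpx.Client like this means you no longer get the SDK's default timeout and connection limits:

import httpx
import openai

def log_response(response: httpx.Response) -> None:
    # Runs for every response the client receives, including the failed
    # attempts (429s, 5xx) that the SDK goes on to retry.
    print(f"{response.request.method} {response.request.url} -> {response.status_code}")

client = openai.OpenAI(
    http_client=httpx.Client(event_hooks={"response": [log_response]}),
)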
Ah, I didn't know we could use a custom httpx Client. This works for us, thanks!
Great! It'd be appreciated if you could share the (rough) code snippet you end up using, so others can benefit.
Sure, this block works for me
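Roughly (a sketch, assuming SyncHttpxClientWrapper accepts the same constructor arguments as httpx.Client, which it subclasses):

import httpx
import openai
from openai._base_client import SyncHttpxClientWrapper

def log_response(response: httpx.Response) -> None:
    # Surfaces every attempt, including the ones the SDK retries internally.
    print(f"{response.request.url} -> {response.status_code}")

client = openai.OpenAI(
    http_client=SyncHttpxClientWrapper(
        base_url="https://api.openai.com/v1",
        timeout=openai.DEFAULT_TIMEOUT,
        follow_redirects=True,
        event_hooks={"response": [log_response]},
    ),
)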
It's still a bit of an annoying hack to import SyncHttpxClientWrapper and manually set all the defaults (like base_url, timeout, follow_redirects, etc.) for its constructor, though. If it were possible to expose SyncHttpxClientWrapper with all default arguments, it would make life much easier.
@rattrayalex: +1 on this request... Would you be open to a contribution to pass these hooks along? We're really trying to avoid having to redefine the client defaults.
That's a good point – it'd be nice for us to expose the default args at least, if not a class which has those defaults. I don't think you should really need to import SyncHttpxClientWrapper for this, though I also don't think we'd want that wrapper to be the public interface. cc @RobertCraigie on the above – we should get this slated.
Yes, you definitely should not have to use SyncHttpxClientWrapper. It might be a good idea to provide something like this though, which would use our default options – thoughts?

import openai

client = openai.OpenAI(
    http_client=openai.DefaultHttpxClient(...),
)
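For instance, assuming the proposed class forwards standard httpx.Client keyword arguments, the event-hook setup from earlier in the thread would reduce to:

import httpx
import openai

def log_response(response: httpx.Response) -> None:
    print(f"{response.request.url} -> {response.status_code}")

client = openai.OpenAI(
    http_client=openai.DefaultHttpxClient(
        event_hooks={"response": [log_response]},
    ),
)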
@rattrayalex @RobertCraigie
I really like the idea of having a DefaultHttpxClient. I'm having a bit of a hard time connecting the dots to go from the httpx event hooks to the retry behaviour, though. I'm probably just missing some connection which is likely obvious to the maintainers here, so that's totally fine – just saying out loud that it'd be nice to get some docs on the usage here when this gets worked on. Appreciate the quick feedback, iterations on design, and openness to implementation here a bunch, OpenAI team!
You're correct that it isn't trivial, which is why we will need to take our time designing it – and we won't get to it right away. However, exposing a client with our default options is much more straightforward. We plan to do both!
Great to hear that there's potential for both of those approaches to be supported natively. Thanks for adding this to the roadmap and proactively engaging here!
@rattrayalex @RobertCraigie Thanks! Is there any ETA for the straightforward DefaultHttpxClient solution?
OOC, how is this blocking you? That class would just be changing the defaults, and you can change the defaults yourself as well – e.g. this will be functionally exactly the same as the solution we provide:

import openai
import httpx

client = openai.OpenAI(
    http_client=httpx.Client(
        timeout=openai.DEFAULT_TIMEOUT,
        follow_redirects=True,
        limits=httpx.Limits(max_connections=100, max_keepalive_connections=20),  # also currently the httpx default
    )
)
@RobertCraigie Yeah, this is not a blocker. We can move forward with this, but please keep us posted when the formal solution is out. Thanks!
Was just digging through the client code and see there's a new DefaultHttpxClient. I think this issue can be safely closed. Thanks a bunch!
Thank you!
Confirm this is a feature request for the Python library and not the underlying OpenAI API.
Describe the feature or improvement you're requesting
Is it possible to provide a callback whenever a retry is triggered internally, so that we can know when and how the requests failed?
Additional context
We want to give our users some insight when OpenAI requests fail with a rate limit error.
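A minimal sketch of how the event-hook workaround discussed above can provide that insight, by counting rate-limited attempts (the RateLimitMonitor class and its logging are illustrative, not part of the SDK):

import httpx
import openai

class RateLimitMonitor:
    """Counts 429 responses, including the attempts the SDK retries internally."""

    def __init__(self) -> None:
        self.rate_limited_attempts = 0

    def __call__(self, response: httpx.Response) -> None:
        if response.status_code == 429:
            self.rate_limited_attempts += 1
            print(f"Rate limited ({self.rate_limited_attempts} so far): {response.request.url}")

monitor = RateLimitMonitor()
client = openai.OpenAI(
    http_client=httpx.Client(event_hooks={"response": [monitor]}),
)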