fix: Ensure a better error stack trace when llama-stack is not built #950
Conversation
Hi @cdoern! Thank you for your pull request and welcome to our community.

Action Required: In order to merge any pull request (code, docs, etc.), we require contributors to sign our Contributor License Agreement, and we don't seem to have one on file for you.

Process: In order for us to review and merge your suggested changes, please sign at https://code.facebook.com/cla. If you are contributing on behalf of someone else (e.g. your employer), the individual CLA may not be sufficient and your employer may need to sign the corporate CLA.

Once the CLA is signed, our tooling will perform checks and validations. Afterwards, the pull request will be tagged with `CLA Signed`. If you have received this in error or have any questions, please contact us at [email protected]. Thanks!
Thanks! Could you sign the individual CLA?
Thank you for signing our Contributor License Agreement. We can now accept your code for this (and any) Meta Open Source project. Thanks!
LGTM
```
@@ -254,8 +254,12 @@ async def request(
        stream=False,
        stream_cls=None,
    ):
        if not self.endpoint_impls:
            raise ValueError("Client not initialized")
        try:
```
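The quoted hunk is truncated after `try:`. As a rough sketch only (the full body is not shown here), the approach described in the PR text wraps the readiness check so that a missing attribute surfaces as the intended error rather than an `AttributeError`:

```python
class _ClientSketch:
    # Hypothetical stand-in for the library client, only to show the shape of
    # the originally proposed change; the merged code may differ.
    async def request(self, *args, **kwargs):
        try:
            if not self.endpoint_impls:
                raise ValueError("Client not initialized")
        except AttributeError as exc:
            # endpoint_impls was never set because `llama stack build` /
            # initialize() did not run; surface the more useful error instead.
            raise ValueError("Client not initialized") from exc
        # ... dispatch to the resolved endpoint implementation ...
```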
I think a better fix is to just initialize `endpoint_impls = None` in the constructor so that the correct `ValueError` is raised here instead of a weird `AttributeError`. Trapping an `AttributeError` is kind of unpredictable anyhow.
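A minimal sketch of the suggested fix, assuming a simplified constructor and method signature rather than the real ones:

```python
class AsyncLlamaStackAsLibraryClient:
    """Simplified stand-in for the real client; names and arguments are illustrative."""

    def __init__(self):
        # Defining the attribute up front means an uninitialized client fails the
        # truthiness check in request() with the intended ValueError instead of
        # raising AttributeError because the attribute was never created.
        self.endpoint_impls = None

    async def initialize(self):
        # The real initialize() resolves providers from the built distribution;
        # here it just marks the client as ready.
        self.endpoint_impls = {"/v1/models": lambda: ["placeholder-model"]}

    async def request(self, cast_to=None, options=None, *, stream=False, stream_cls=None):
        if not self.endpoint_impls:
            raise ValueError("Client not initialized")
        # ... route the request to the resolved endpoint implementation ...
```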
Currently this is the output when you run a distribution locally without running `llama stack build`:

```
Traceback (most recent call last):
  File "/Users/charliedoern/Documents/llama-sdk.py", line 25, in <module>
    models = client.models.list()
             ^^^^^^^^^^^^^^^^^^^^
  File "/Users/charliedoern/Documents/llama-stack-client-python/src/llama_stack_client/resources/models.py", line 107, in list
    raise exc
  File "/Users/charliedoern/Documents/llama-stack-client-python/src/llama_stack_client/resources/models.py", line 95, in list
    return self._get(
           ^^^^^^^^^^
  File "/Users/charliedoern/Documents/llama-stack-client-python/src/llama_stack_client/_base_client.py", line 1212, in get
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/charliedoern/Documents/llama-stack/llama_stack/distribution/library_client.py", line 168, in request
    return asyncio.run(self.async_client.request(*args, **kwargs))
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/Cellar/python@3.11/3.11.10/Frameworks/Python.framework/Versions/3.11/lib/python3.11/asyncio/runners.py", line 190, in run
    return runner.run(main)
           ^^^^^^^^^^^^^^^^
  File "/opt/homebrew/Cellar/python@3.11/3.11.10/Frameworks/Python.framework/Versions/3.11/lib/python3.11/asyncio/runners.py", line 118, in run
    return self._loop.run_until_complete(task)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/Cellar/python@3.11/3.11.10/Frameworks/Python.framework/Versions/3.11/lib/python3.11/asyncio/base_events.py", line 654, in run_until_complete
    return future.result()
           ^^^^^^^^^^^^^^^
  File "/Users/charliedoern/Documents/llama-stack/llama_stack/distribution/library_client.py", line 258, in request
    if not self.endpoint_impls:
           ^^^^^^^^^^^^^^^^^^^
AttributeError: 'AsyncLlamaStackAsLibraryClient' object has no attribute 'endpoint_impls'
```

The intended exception is never raised. Initialize `endpoint_impls` properly so it is `None` and the `ValueError` can be raised.

Signed-off-by: Charlie Doern <[email protected]>
@terrytangyuan @ashwinb @raghotham is this good to go?
fix: Ensure a better error stack trace when llama-stack is not built (meta-llama#950)

# What does this PR do?

Currently this is the output when you run a distribution locally without running `llama stack build`:

```
Traceback (most recent call last):
  File "/Users/charliedoern/Documents/llama-sdk.py", line 25, in <module>
    models = client.models.list()
             ^^^^^^^^^^^^^^^^^^^^
  File "/Users/charliedoern/Documents/llama-stack-client-python/src/llama_stack_client/resources/models.py", line 107, in list
    raise exc
  File "/Users/charliedoern/Documents/llama-stack-client-python/src/llama_stack_client/resources/models.py", line 95, in list
    return self._get(
           ^^^^^^^^^^
  File "/Users/charliedoern/Documents/llama-stack-client-python/src/llama_stack_client/_base_client.py", line 1212, in get
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/charliedoern/Documents/llama-stack/llama_stack/distribution/library_client.py", line 168, in request
    return asyncio.run(self.async_client.request(*args, **kwargs))
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/Cellar/python@3.11/3.11.10/Frameworks/Python.framework/Versions/3.11/lib/python3.11/asyncio/runners.py", line 190, in run
    return runner.run(main)
           ^^^^^^^^^^^^^^^^
  File "/opt/homebrew/Cellar/python@3.11/3.11.10/Frameworks/Python.framework/Versions/3.11/lib/python3.11/asyncio/runners.py", line 118, in run
    return self._loop.run_until_complete(task)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/Cellar/python@3.11/3.11.10/Frameworks/Python.framework/Versions/3.11/lib/python3.11/asyncio/base_events.py", line 654, in run_until_complete
    return future.result()
           ^^^^^^^^^^^^^^^
  File "/Users/charliedoern/Documents/llama-stack/llama_stack/distribution/library_client.py", line 258, in request
    if not self.endpoint_impls:
           ^^^^^^^^^^^^^^^^^^^
AttributeError: 'AsyncLlamaStackAsLibraryClient' object has no attribute 'endpoint_impls'
```

The intended exception is never raised. Add an except for an `AttributeError` so users can catch when they call things like `models.list()` and so that a more useful error telling them that the client is not properly initialized is printed.

## Test Plan

Please describe:

- I ran the script found here: https://llama-stack.readthedocs.io/en/latest/getting_started/index.html#run-inference-with-python-sdk locally with the changes in this PR and the exception was caught successfully.

## Before submitting

- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Ran pre-commit to handle lint / formatting issues.
- [ ] Read the [contributor guideline](https://github.com/meta-llama/llama-stack/blob/main/CONTRIBUTING.md), Pull Request section?
- [ ] Updated relevant documentation.
- [ ] Wrote necessary unit or integration tests.

---------

Signed-off-by: Charlie Doern <[email protected]>
Co-authored-by: Ashwin Bharambe <[email protected]>
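For illustration, a hedged sketch of how a caller hits the improved error: the import path and class name follow the traceback above, while the template name and exact constructor arguments are assumptions and may not match the current API.

```python
from llama_stack.distribution.library_client import LlamaStackAsLibraryClient

# Build a library-mode client; "ollama" is just an example template name.
client = LlamaStackAsLibraryClient("ollama")

# If the distribution was never built (no `llama stack build`) and the client
# was never initialized, API calls should now fail with the clearer
# ValueError("Client not initialized") instead of an AttributeError deep in
# the request path.
try:
    models = client.models.list()
except ValueError as err:
    print(f"Client not ready: {err}")
```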