
openai.InternalServerError: Error code 500: ADE (connected to a local Letta server) fails to send messages after reaching the max context window; details below #2410

Open
nj-guiqi opened this issue Feb 3, 2025 · 3 comments

Comments


nj-guiqi commented Feb 3, 2025

Describe the bug
I can't send messages to the agent after the context window reaches its maximum (set to 4,000 tokens).

Screenshots

[screenshot]

I checked the Docker log:

2025-02-03 16:33:13   File "/app/letta/server/rest_api/utils.py", line 62, in sse_async_generator
2025-02-03 16:33:13     usage = await usage_task
2025-02-03 16:33:13             ^^^^^^^^^^^^^^^^
2025-02-03 16:33:13   File "/usr/lib/python3.11/asyncio/threads.py", line 25, in to_thread
2025-02-03 16:33:13     return await loop.run_in_executor(None, func_call)
2025-02-03 16:33:13            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-02-03 16:33:13   File "/usr/lib/python3.11/concurrent/futures/thread.py", line 58, in run
2025-02-03 16:33:13     result = self.fn(*self.args, **self.kwargs)
2025-02-03 16:33:13              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-02-03 16:33:13   File "/app/letta/server/server.py", line 772, in send_messages
2025-02-03 16:33:13     return self._step(actor=actor, agent_id=agent_id, input_messages=message_objects, interface=interface)
2025-02-03 16:33:13            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-02-03 16:33:13   File "/app/letta/server/server.py", line 458, in _step
2025-02-03 16:33:13     usage_stats = letta_agent.step(
2025-02-03 16:33:13                   ^^^^^^^^^^^^^^^^^
2025-02-03 16:33:13   File "/app/letta/agent.py", line 613, in step
2025-02-03 16:33:13     step_response = self.inner_step(
2025-02-03 16:33:13                     ^^^^^^^^^^^^^^^^
2025-02-03 16:33:13   File "/app/letta/agent.py", line 854, in inner_step
2025-02-03 16:33:13     raise e
2025-02-03 16:33:13   File "/app/letta/agent.py", line 721, in inner_step
2025-02-03 16:33:13     response = self._get_ai_reply(
2025-02-03 16:33:13                ^^^^^^^^^^^^^^^^^^^
2025-02-03 16:33:13   File "/app/letta/agent.py", line 327, in _get_ai_reply
2025-02-03 16:33:13     raise e
2025-02-03 16:33:13   File "/app/letta/agent.py", line 290, in _get_ai_reply
2025-02-03 16:33:13     response = create(
2025-02-03 16:33:13                ^^^^^^^
2025-02-03 16:33:13   File "/app/letta/llm_api/llm_api_tools.py", line 91, in wrapper
2025-02-03 16:33:13     raise e
2025-02-03 16:33:13   File "/app/letta/llm_api/llm_api_tools.py", line 58, in wrapper
2025-02-03 16:33:13     return func(*args, **kwargs)
2025-02-03 16:33:13            ^^^^^^^^^^^^^^^^^^^^^
2025-02-03 16:33:13   File "/app/letta/llm_api/llm_api_tools.py", line 177, in create
2025-02-03 16:33:13     response = openai_chat_completions_request(
2025-02-03 16:33:13                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-02-03 16:33:13   File "/app/letta/llm_api/openai.py", line 413, in openai_chat_completions_request
2025-02-03 16:33:13     chat_completion = client.chat.completions.create(**data)
2025-02-03 16:33:13                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-02-03 16:33:13   File "/app/.venv/lib/python3.11/site-packages/openai/_utils/_utils.py", line 279, in wrapper
2025-02-03 16:33:13     return func(*args, **kwargs)
2025-02-03 16:33:13            ^^^^^^^^^^^^^^^^^^^^^
2025-02-03 16:33:13   File "/app/.venv/lib/python3.11/site-packages/openai/resources/chat/completions.py", line 863, in create
2025-02-03 16:33:13     return self._post(
2025-02-03 16:33:13            ^^^^^^^^^^^
2025-02-03 16:33:13   File "/app/.venv/lib/python3.11/site-packages/openai/_base_client.py", line 1283, in post
2025-02-03 16:33:13     return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
2025-02-03 16:33:13                            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-02-03 16:33:13   File "/app/.venv/lib/python3.11/site-packages/openai/_base_client.py", line 960, in request
2025-02-03 16:33:13     return self._request(
2025-02-03 16:33:13            ^^^^^^^^^^^^^^
2025-02-03 16:33:13   File "/app/.venv/lib/python3.11/site-packages/openai/_base_client.py", line 1064, in _request
2025-02-03 16:33:13     raise self._make_status_error_from_response(err.response) from None
2025-02-03 16:33:13 openai.InternalServerError: Error code: 500 - {'detail': 'Internal server error (unpack): '}
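Not part of the original report, but for context: the 500 surfaces when the conversation no longer fits the 4,000-token window. A common client-side mitigation is to trim the oldest messages before sending. Below is a minimal, hypothetical sketch of that idea; it uses a crude ~4-characters-per-token estimate rather than a real tokenizer, and the function names are illustrative, not Letta's actual summarization logic.

```python
# Hypothetical sketch: drop the oldest non-system messages until the
# conversation fits a token budget. The token count is a rough
# estimate (~4 characters per token), not a real tokenizer.

def estimate_tokens(text: str) -> int:
    """Very rough token estimate: ~4 characters per token."""
    return max(1, len(text) // 4)

def trim_to_budget(messages: list[dict], budget: int) -> list[dict]:
    """Keep the system message (if any) plus the newest messages
    that fit within `budget` estimated tokens."""
    system = [m for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]
    used = sum(estimate_tokens(m["content"]) for m in system)
    kept = []
    for m in reversed(rest):  # walk newest-first
        cost = estimate_tokens(m["content"])
        if used + cost > budget:
            break  # oldest messages beyond here are dropped
        kept.append(m)
        used += cost
    return system + list(reversed(kept))  # restore chronological order

msgs = [
    {"role": "system", "content": "You are a helpful agent."},
    {"role": "user", "content": "old question " * 500},
    {"role": "user", "content": "recent question"},
]
trimmed = trim_to_budget(msgs, budget=100)
```

Letta itself is supposed to summarize old messages automatically when the window fills, so hitting this traceback instead suggests a server-side bug rather than something the user did wrong.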
@wsargent

Same issue here.

@Autuk

Autuk commented Feb 18, 2025

Same issue.

@LixinLu42

LixinLu42 commented Feb 20, 2025

Now the Context Window Size option has disappeared from my ADE.

[screenshot]


4 participants