
Groq provider broken #314

Closed
ErikBjare opened this issue Dec 10, 2024 · 2 comments
Labels
bug Something isn't working

Comments

@ErikBjare
Owner

openai.BadRequestError: Error code: 400 - {'error': {'message': "'messages.0' : for 'role:system' the following must be satisfied[('messages.0.content' : value must be a string)]", 'type': 'invalid_request_error'}}
@ErikBjare ErikBjare added the bug Something isn't working label Dec 10, 2024
@ErikBjare
Owner Author

Fixed in 84c8316

@DrDavidL

Still seeing the same issue after installation:

[15:54:35] Browser tool available (using lynx)
[15:54:39] Using model: groq/llama-3.3-70b-versatile
[15:54:40] Using logdir ~/Library/Application Support/gptme/logs/2024-12-22-crawling-silly-robot
           Using workspace at ~
Skipped 1 hidden system messages, show with --show-hidden
--- ^^^ past messages ^^^ ---
User: hi
Traceback (most recent call last):                                                                                                                        
  File "/Users/david/.local/bin/gptme", line 10, in <module>
    sys.exit(main())
             ^^^^^^
  File "/Users/david/Library/Application Support/pipx/venvs/gptme/lib/python3.12/site-packages/click/core.py", line 1161, in __call__
    return self.main(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/david/Library/Application Support/pipx/venvs/gptme/lib/python3.12/site-packages/click/core.py", line 1082, in main
    rv = self.invoke(ctx)
         ^^^^^^^^^^^^^^^^
  File "/Users/david/Library/Application Support/pipx/venvs/gptme/lib/python3.12/site-packages/click/core.py", line 1443, in invoke
    return ctx.invoke(self.callback, **ctx.params)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/david/Library/Application Support/pipx/venvs/gptme/lib/python3.12/site-packages/click/core.py", line 788, in invoke
    return __callback(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/david/Library/Application Support/pipx/venvs/gptme/lib/python3.12/site-packages/gptme/cli.py", line 282, in main
    chat(
  File "/Users/david/Library/Application Support/pipx/venvs/gptme/lib/python3.12/site-packages/gptme/chat.py", line 179, in chat
    for msg in step(
               ^^^^^
  File "/Users/david/Library/Application Support/pipx/venvs/gptme/lib/python3.12/site-packages/gptme/chat.py", line 228, in step
    msg_response = reply(msgs, get_model().model, stream, tools)
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/david/Library/Application Support/pipx/venvs/gptme/lib/python3.12/site-packages/gptme/llm/__init__.py", line 57, in reply
    return _reply_stream(messages, model, tools)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/david/Library/Application Support/pipx/venvs/gptme/lib/python3.12/site-packages/gptme/llm/__init__.py", line 102, in _reply_stream
    for char in (
                ^
  File "/Users/david/Library/Application Support/pipx/venvs/gptme/lib/python3.12/site-packages/gptme/llm/__init__.py", line 103, in <genexpr>
    char for chunk in _stream(messages, model, tools) for char in chunk
                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/david/Library/Application Support/pipx/venvs/gptme/lib/python3.12/site-packages/gptme/llm/llm_openai.py", line 158, in stream
    for chunk_raw in openai.chat.completions.create(
                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/david/Library/Application Support/pipx/venvs/gptme/lib/python3.12/site-packages/openai/_utils/_utils.py", line 275, in wrapper
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/Users/david/Library/Application Support/pipx/venvs/gptme/lib/python3.12/site-packages/openai/resources/chat/completions.py", line 859, in create
    return self._post(
           ^^^^^^^^^^^
  File "/Users/david/Library/Application Support/pipx/venvs/gptme/lib/python3.12/site-packages/openai/_base_client.py", line 1280, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/david/Library/Application Support/pipx/venvs/gptme/lib/python3.12/site-packages/openai/_base_client.py", line 957, in request
    return self._request(
           ^^^^^^^^^^^^^^
  File "/Users/david/Library/Application Support/pipx/venvs/gptme/lib/python3.12/site-packages/openai/_base_client.py", line 1061, in _request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "'messages.0' : for 'role:system' the following must be satisfied[('messages.0.content' : value must be a string)]", 'type': 'invalid_request_error'}}
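The 400 error indicates that Groq's OpenAI-compatible endpoint requires each message's `content` to be a plain string, while the client is sending the system message as a list of content parts (the structured format OpenAI itself accepts). A minimal sketch of the kind of workaround involved, assuming a `flatten_content` helper (hypothetical, not gptme's actual code) that joins list-style content into strings before the request is sent:

```python
def flatten_content(messages: list[dict]) -> list[dict]:
    """Return messages with list-style content joined into plain strings.

    Some OpenAI-compatible providers (e.g. Groq) reject messages whose
    "content" is a list of parts like [{"type": "text", "text": "..."}].
    """
    flattened = []
    for msg in messages:
        content = msg["content"]
        if isinstance(content, list):
            # Keep only the text parts and join them with newlines
            content = "\n".join(
                part.get("text", "")
                for part in content
                if isinstance(part, dict) and part.get("type") == "text"
            )
        flattened.append({**msg, "content": content})
    return flattened


msgs = [
    {"role": "system", "content": [{"type": "text", "text": "You are helpful."}]},
    {"role": "user", "content": "hi"},
]
print(flatten_content(msgs))
```

Applying such a transform just before `openai.chat.completions.create(...)` for providers known to lack content-part support would avoid the `invalid_request_error` above; the actual fix referenced in the earlier comment may differ.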
