
Anthropic agent failure: 'roles must alternate between "user" and "assistant", but found multiple "user" roles in a row' #13

Closed
hardye opened this issue Sep 21, 2024 · 4 comments


hardye commented Sep 21, 2024

I am currently testing the bot using a room-local agent based on the Anthropic provider. The bot responds fine to the initial question, but after two follow-up questions in the same thread it dies with the message:

⚠️ Error: There was a problem performing text-generation via the NAME_OF_THE_AGENT agent:

> API error: Error response: error Api error: invalid_request_error messages: roles must alternate between "user" and "assistant", but found multiple "user" roles in a row

After that, it won't respond to any commands anymore and simply repeats the error message.

This is the agent's current configuration:

base_url: https://api.anthropic.com/v1
api_key: "..."
text_generation:
  model_id: claude-3-5-sonnet-20240620
  prompt: You are a brief, but helpful bot.
  temperature: 1.0
  max_response_tokens: 8192
  max_context_tokens: 204800

To reproduce:

  1. Follow the setup steps for the Anthropic provider
  2. Create an agent with the configuration shown above.
  3. Assign it as a room-specific handler for catch-all.
  4. Ask it any question and wait for it to respond.
  5. Open the thread created by the bot and respond to it, ask a question, whatever. Do this twice. Observe that the bot will reply without problems.
  6. Ask it one more follow-up question. Observe that this time the bot responds with the error-message: "API error: invalid_request_error messages: roles must alternate between 'user' and 'assistant', but found multiple 'user' roles in a row"

After this, the only response from the bot is the error message. Starting a new thread will work around this problem, but the same behaviour will occur after the same number of questions and responses.

Another agent using the OpenAI provider does not suffer the same problem. The bot cheerfully deals with long threads.

@spantaleev

I could reproduce it by being impatient, i.e. sending 2 messages in a row, as seen in this screenshot:

[Screenshot: two consecutive user messages ("Hey", "hey") sent before the bot responded]


As the error message says, Anthropic has this peculiarity that messages must alternate between your own and the bot's. If 2 or more of your messages appear one after the other, the API rejects the request. Once such a chain exists, there's no getting out of it: the "error message" reported by the bot is not considered a genuine bot response and is not part of the conversation, so it does not restore the alternation.

I did not know of this API limitation, and the library we use does not seem to magically handle it for us. I think this is a genuine use case, especially when multiple people are participating in the discussion.

Other providers do not seem to exhibit this behavior. This kind of "wait for your turn" conversation seems silly; they should handle it gracefully.


I could work on a workaround specifically for Anthropic, which would analyze the conversation and combine multiple subsequent messages sent by you or others, to ensure the conversation is alternating between you and the bot. This way, you'd at least be able to get out of this trouble.
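Such a workaround could boil down to a single pass over the conversation that merges adjacent same-role messages. A minimal sketch in Python (the actual bot is written in Rust; `merge_consecutive_roles` is a hypothetical helper name, not part of any real library):

```python
def merge_consecutive_roles(messages):
    """Combine consecutive same-role messages so that roles strictly
    alternate, as Anthropic's Messages API requires."""
    merged = []
    for msg in messages:
        if merged and merged[-1]["role"] == msg["role"]:
            # Same role as the previous message: fold it in.
            merged[-1]["content"] += "\n" + msg["content"]
        else:
            merged.append({"role": msg["role"], "content": msg["content"]})
    return merged

# The "Hey" / "hey" situation from the screenshot:
conversation = [
    {"role": "user", "content": "Hey"},
    {"role": "user", "content": "hey"},
    {"role": "assistant", "content": "Hello! How can I help?"},
]
print(merge_consecutive_roles(conversation))
```

After merging, the two consecutive user messages become a single user turn, so the request no longer trips the alternation check.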

Though in practice, it may not be exactly what you want.
A human might see your 2nd (or 3rd message) and produce a single response that considers all of your previous messages.

The Anthropic API would respond twice or 3 times (matching the number of messages you sent). Each subsequent response will include a little bit more context (one more message of yours), but will not include the new Anthropic message that is "in progress" right now. To use my screenshot as an example:

  • my "Hey" will trigger it to reply to (the initial conversation + "Hey")
  • my 2nd "hey" will trigger it to reply to (the initial conversation + "Hey" + "hey")

An alternative might be to have my 2nd "hey" block until an Anthropic response actually arrives, and to then trigger text-generation for (the initial conversation + "Hey" + bot's response + "hey"). But this is more complicated and Anthropic-specific, so it's not worth doing.

@hardye

hardye commented Sep 21, 2024

Thank you, @spantaleev . Just to be clear: There is no need to send two messages right after each other to trigger the error. Even if I wait patiently for the bot to respond each time, the error message will appear after my third message. Always.

I could work on a workaround specifically for Anthropic, which would analyze the conversation and combine multiple subsequent messages sent by you or others, to ensure the conversation is alternating between you and the bot.

I can assure you that I was always waiting for the bot's response, strictly keeping to the expected roles. I just tried it again, with long pauses (10+ seconds) after each reply from the bot. Same behaviour: After my third message, the bot dies.

This behaviour makes it impossible to work on anything with the Anthropic provider that takes more than three steps. Interestingly, other providers don't seem to have this problem: I tested OpenAI and Mistral, and both can hold longer conversations without problems.

@spantaleev

You're right @hardye - there were 2 bugs involved:

  • the original bug - Anthropic not liking consecutive messages. It's reproducible if you're sending messages quickly enough. Fixed by 8b12bdf

  • our "get thread messages" implementation in the mxlink library was deficient: it did not use pagination to fetch all messages. By default (unless you set an explicit limit), Synapse (at least) seems to return only about 2-3 messages when asked for thread-related messages. Our code did not make use of the pagination data in the response and never went beyond that, so we were effectively dropping messages from the context. As luck would have it, this could cause 2 consecutive messages to appear (your original thread-starting message + whatever we fetched related to it). Funny how we hadn't noticed this silent dropping of messages in real usage until now. Fixed in etkecc/rust-mxlink@88fabb3.
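The pagination part of the fix amounts to following the server's continuation token until none is returned. A minimal sketch in Python (the real fix lives in the Rust mxlink library; `fetch_page` is a hypothetical callback standing in for the Matrix `/relations` request, returning a pair that mirrors the `chunk` / `next_batch` fields of that endpoint's response):

```python
def fetch_all_thread_messages(fetch_page):
    """Collect all thread messages by following pagination tokens.

    `fetch_page(from_token)` returns (messages, next_token);
    `next_token` is None once the last page has been reached.
    """
    messages, token = [], None
    while True:
        chunk, token = fetch_page(token)
        messages.extend(chunk)
        if token is None:  # no next_batch: we have everything
            return messages

# Fake paginated server returning 2 messages per page, like the small
# default page size observed with Synapse. Keyed by the from-token.
pages = {
    None: (["m1", "m2"], "t1"),
    "t1": (["m3", "m4"], "t2"),
    "t2": (["m5"], None),
}
print(fetch_all_thread_messages(lambda tok: pages[tok]))
# → ['m1', 'm2', 'm3', 'm4', 'm5']
```

Without the loop, only the first page (here, 2 messages) would ever reach the model's context, which is exactly the silent dropping described above.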

While working on the original "get thread messages" code (many months ago), I must have been thinking "keeping things simple and not using the pagination data is OK for now". Not having added a TODO there to remind me, I had never gotten around to implementing pagination properly until now.

I think I had noticed models sometimes forgetting or miscounting things, but I thought that was just how they work - they're imperfect. Since it generally worked quite well, I never thought to look into it further. Us silently dropping messages does explain this behavior.

All these fixes are available in the v1.1.1 release. This is also the first release that the matrix-docker-ansible-deploy playbook pins (instead of using :latest).

@hardye

hardye commented Sep 22, 2024

Thank you very much for the detailed explanation and the comprehensive fix, @spantaleev . I can confirm that working with the Anthropic provider works as expected now: Long conversations in a single thread are no problem anymore. Appreciate the swift response.
