Anthropic agent failure: 'roles must alternate between "user" and "assistant", but found multiple "user" roles in a row' #13
Comments
I could reproduce it by being impatient - sending 2 messages in a row, as seen on this screenshot:

As the error message says, Anthropic has this peculiarity that messages must alternate between your own and the bot's. If there are 2 or more of yours one after the other, it complains. Once you get a chain of 2 or more, there's no getting out of it. The "error message" reported by the bot is not considered a genuine bot response and is not part of the conversation, so it doesn't clear up the issue.

I did not know of this API limitation, and the library we use doesn't seem to magically handle it for us. I think it's a genuine use-case, especially if multiple people are participating in the discussion. Other providers do not seem to exhibit this behavior. This kind of "wait for your turn" conversation seems silly. They should handle it gracefully.

I could work on a workaround specifically for Anthropic, which would analyze the conversation and combine multiple subsequent messages sent by you or others, so that the conversation alternates between you and the bot. This way, you'd at least be able to get out of this trouble. Though in practice, it may not be exactly what you want: the Anthropic API would respond twice or 3 times (matching the number of messages you sent), and each subsequent response would include a little bit more context (one more message of yours), but not the new Anthropic message that is "in progress" right now. To use my screenshot as an example:

An alternative might be to have my 2nd "hey" block until an Anthropic response actually comes, and to then trigger text-generation for (the initial conversation + "Hey" + bot's response + "hey"). But this is more complicated and Anthropic-specific, so it's not worth doing.
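A minimal sketch of the merging workaround described above - combining back-to-back same-role messages so the turns alternate. The `Role`, `Message` and `merge_consecutive_same_role` names are made up for illustration and are not the actual rust-mxlink/baibot types:

```rust
/// Role of a conversation turn, mirroring Anthropic's "user" / "assistant" roles.
#[derive(Clone, Copy, PartialEq, Debug)]
enum Role {
    User,
    Assistant,
}

#[derive(Clone, Debug)]
struct Message {
    role: Role,
    text: String,
}

/// Merge consecutive messages that share the same role, so the resulting
/// conversation strictly alternates between user and assistant turns.
fn merge_consecutive_same_role(messages: Vec<Message>) -> Vec<Message> {
    let mut merged: Vec<Message> = Vec::new();

    for message in messages {
        let same_role_as_last = merged
            .last()
            .map(|last| last.role == message.role)
            .unwrap_or(false);

        if same_role_as_last {
            // Join the bodies of back-to-back same-role messages.
            let last = merged.last_mut().expect("non-empty, checked above");
            last.text.push_str("\n\n");
            last.text.push_str(&message.text);
        } else {
            merged.push(message);
        }
    }

    merged
}
```

With something like this in place, two impatient "hey" messages would collapse into a single user turn before the request is built, so the API would no longer see consecutive "user" roles.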
Thank you, @spantaleev. Just to be clear: there is no need to send two messages right after each other to trigger the error. Even if I wait patiently for the bot to respond each time, the error message will appear after my third message. Always.
I can assure you that I was always waiting for the bot's response, strictly keeping to the expected roles. I just tried it again, with long pauses (10+ seconds) after each reply from the bot. Same behaviour: after my third message, the bot dies. This makes it impossible to work on anything with the Anthropic provider that takes more than three steps. Interestingly, other providers don't seem to have this problem. I tested OpenAI and Mistral and they can hold longer conversations without problems.
Reported here: #13 (comment). Fixed in etkecc/rust-mxlink@88fabb3.
You're right @hardye - there were 2 bugs involved:
While working on the original "get thread messages" code (many months ago), I must have been thinking "keeping things simple and not using the pagination data is OK for now". Not having added a

I think I had noticed models sometimes forgetting or miscounting things, but thought it's just how it works - they're imperfect. Since it generally worked quite well, I never thought to look into it more. Us dropping messages silently does explain this behavior.

All these fixes are available in the v1.1.1 release. This is also the first release that the matrix-docker-ansible-deploy playbook pins (instead of using
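A rough sketch of what "use the pagination data" means here, assuming the thread is fetched via the Matrix `/rooms/{room_id}/relations/{event_id}/m.thread` endpoint, which returns pages with a `next_batch` token. The `ThreadPage`, `fetch_thread_page` and `fetch_all_thread_messages` names are hypothetical - the real fix lives in etkecc/rust-mxlink@88fabb3 and uses its own types:

```rust
/// One page of thread messages, as returned by the Matrix
/// `/rooms/{room_id}/relations/{event_id}/m.thread` endpoint.
struct ThreadPage {
    messages: Vec<String>,
    // Token for the next page; `None` means the end of the thread was reached.
    next_batch: Option<String>,
}

/// Hypothetical helper that fetches a single page of the thread,
/// starting from the given pagination token.
fn fetch_thread_page(_from: Option<&str>) -> ThreadPage {
    // ... network call elided ...
    unimplemented!()
}

/// Collect *all* messages in the thread by following `next_batch` tokens,
/// instead of stopping after the first page (which silently drops older
/// messages once the thread grows past one page).
fn fetch_all_thread_messages() -> Vec<String> {
    let mut all_messages = Vec::new();
    let mut from: Option<String> = None;

    loop {
        let page = fetch_thread_page(from.as_deref());
        all_messages.extend(page.messages);

        match page.next_batch {
            Some(token) => from = Some(token),
            None => break,
        }
    }

    all_messages
}
```

Following `next_batch` until it is absent is what keeps long threads from being silently truncated to their most recent page.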
Thank you very much for the detailed explanation and the comprehensive fix, @spantaleev. I can confirm that the Anthropic provider now works as expected: long conversations in a single thread are no problem anymore. Appreciate the swift response.
I am currently testing the bot using a room-local agent based on the Anthropic provider. The bot responds fine to the initial question, but after two follow-up questions in the same thread it dies with the message:
After that, it won't respond to any commands anymore and simply repeats the error message.
This is the agent's current configuration:
To reproduce:
After this, the only response from the bot is the error message. Starting a new thread will work around this problem, but the same behaviour will occur after the same number of questions and responses.
Another agent using the OpenAI provider does not suffer the same problem. The bot cheerfully deals with long threads.
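For context on the failure in the title: per the quoted error, the Anthropic Messages API requires the `messages` array to strictly alternate between `user` and `assistant` roles. A rough illustration of a request body shape it rejects versus one it accepts - the field values and model name are made up for the example, not what the bot actually sends:

```rust
use serde_json::json;

fn main() {
    // What a reconstructed thread effectively looks like once messages are
    // dropped: two "user" turns in a row. This is the shape that triggers the
    // "roles must alternate" error described above.
    let rejected = json!({
        "model": "claude-3-opus-20240229", // illustrative model name
        "max_tokens": 1024,
        "messages": [
            { "role": "user", "content": "Initial question" },
            { "role": "user", "content": "Follow-up question" }
        ]
    });

    // A strictly alternating conversation, which the API accepts.
    let accepted = json!({
        "model": "claude-3-opus-20240229",
        "max_tokens": 1024,
        "messages": [
            { "role": "user", "content": "Initial question" },
            { "role": "assistant", "content": "First answer" },
            { "role": "user", "content": "Follow-up question" }
        ]
    });

    println!("{rejected}\n{accepted}");
}
```

Because the bot was silently dropping thread messages (see the fix above), a rebuilt thread could end up with two `user` turns in a row even when everyone involved waited their turn.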