ChatBedrockConverse doesn't support structured output with Bedrock Llama 3.3 #359

Open
tioans opened this issue Feb 11, 2025 · 0 comments

Description

I am trying to get browser-use to work with Llama 3.3 via Bedrock, but I get an error when using structured output. Structured output is necessary for the model to work in this context. The issue appears to come from the with_structured_output method: calls execute without it, but the output is then incorrect.

Error

"'RunnableSequence' object has no attribute 'with_structured_output'"
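For context, with_structured_output wraps the chat model in a runnable pipeline, and that wrapper no longer exposes the method itself, so any later attempt to wrap it again raises this AttributeError. A minimal stand-in (toy classes, not the real langchain/browser-use APIs) illustrates the failure mode:

```python
# Toy stand-in classes illustrating the AttributeError.
# "FakeChatModel" and "FakePipeline" are hypothetical names used only for
# this sketch; they are not browser-use or langchain APIs.

class FakePipeline:
    """Stands in for langchain's RunnableSequence: no with_structured_output."""
    def invoke(self, prompt):
        return {"action": []}


class FakeChatModel:
    """Stands in for a chat model such as ChatBedrockConverse."""
    def with_structured_output(self, schema, include_raw=False):
        # Wrapping returns a pipeline object, not another chat model.
        return FakePipeline()


llm = FakeChatModel()
structured_llm = llm.with_structured_output(dict)

# Anything that tries to wrap the already-wrapped model again fails:
try:
    structured_llm.with_structured_output(dict)
except AttributeError as exc:
    print(exc)  # 'FakePipeline' object has no attribute 'with_structured_output'
```

If browser-use's Agent calls with_structured_output internally on whatever llm it is given, passing it an already-wrapped model would reproduce exactly this error.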

Steps to reproduce

Structured output

import asyncio
from typing import Any, Dict, List

from dotenv import load_dotenv
from pydantic import BaseModel, Field

from langchain_aws import ChatBedrockConverse
from browser_use import Agent

load_dotenv()  # load AWS credentials from .env


class StructuredOutput(BaseModel):
    # CurrentState is a separate pydantic model; its definition is not shown here.
    current_state: CurrentState = Field(
        description="Contains the current state details of the browser page."
    )
    action: List[Dict[str, Dict[str, Any]]] = Field(
        description=(
            "A list of actions to be executed in sequence. Each action should have one key "
            "that represents the action name and a value with the corresponding parameters."
        )
    )

Main method:

async def main():
    llm = ChatBedrockConverse(
        model_id="meta.llama3-3-70b-instruct-v1:0",
        temperature=0,
        verbose=True,
    )

    # Wrapping here returns a RunnableSequence rather than a chat model.
    structured_llm = llm.with_structured_output(StructuredOutput, include_raw=False)

    agent = Agent(
        task="Search for the weather in Copenhagen and return the latest.",
        llm=structured_llm,
        validate_output=True,
    )
    result = await agent.run()
    print(result)


if __name__ == "__main__":
    asyncio.run(main())
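A possible workaround sketch (untested, and it assumes browser-use applies structured output to the model internally) would be to hand Agent the unwrapped chat model instead of the RunnableSequence; this fragment is not runnable on its own since it needs AWS credentials and the surrounding script:

```python
# Hypothetical workaround (assumption, not verified): pass the base
# ChatBedrockConverse instance to Agent and let browser-use do any
# with_structured_output wrapping itself.
agent = Agent(
    task="Search for the weather in Copenhagen and return the latest.",
    llm=llm,  # base chat model, not the wrapped structured_llm
    validate_output=True,
)
```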