
Support for generate and return tool_calls with Anthropic models #50

Open · wants to merge 5 commits into base: main
Conversation

@bigbernnn (Contributor) commented May 19, 2024

Support for function calling via the generate function is not directly implemented in ChatBedrock; this PR adds it. The proposed changes include:
1/ Ability to use tools with .generate()
2/ Returning stop_reason and tool_calls in the AIMessage response metadata when tools are used
from pydantic import BaseModel, Field
from langchain_core.messages import HumanMessage
from bedrock import ChatBedrock

model_id = "anthropic.claude-3-sonnet-20240229-v1:0"
chat = ChatBedrock(
    model_id=model_id,
    model_kwargs={"temperature": 0.1},
)

class GetWeather(BaseModel):
    """Get the current weather in a given location"""

    location: str = Field(..., description="The city and state, e.g. San Francisco, CA")

llm_with_tools = chat.bind_tools([GetWeather])
llm_with_tools

messages = [
    HumanMessage(
        content="what is the weather like in San Francisco"
    )
]
ai_msg = llm_with_tools.generate(messages)
ai_msg
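For readers of this thread, here is a self-contained sketch (no AWS call is made; the `response_metadata` dict below is a hand-written stand-in, not actual model output) of how the metadata described in point 2/ might be consumed once the PR lands:

```python
# Hypothetical example of the response metadata shape this PR returns.
# The dict is a mocked stand-in for what a real tool-using call would produce.
response_metadata = {
    "stop_reason": "tool_use",
    "tool_calls": [
        {"name": "GetWeather", "args": {"location": "San Francisco, CA"}},
    ],
}

# When the model stops to call a tool, iterate over the requested calls.
if response_metadata["stop_reason"] == "tool_use":
    for call in response_metadata["tool_calls"]:
        print(f"model requested {call['name']} with args {call['args']}")
```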

@rsgrewal-aws left a comment:

Changes are good.

@3coins (Collaborator) left a comment:

@bigbernnn
Thanks for making this change. A minor suggestion to improve readability; looks good otherwise.

"role": role,
"content": message_content,
}
)
return system, formatted_messages
@3coins (Collaborator) left a comment:
A lot of this can be improved by early returns and refactoring to separate functions. I have not tested the validity of this change, so do check against existing logic before taking these changes as-is. Adding some unit tests to check the validity of these would be even better.

def _handle_anthropic_system_message(message):
    """Validate a system message and return its string content."""
    if not isinstance(message.content, str):
        raise ValueError(
            "System message must be a string, "
            f"instead was: {type(message.content)}"
        )
    return message.content, _message_type_lookups["human"]

def _handle_anthropic_message(message):
    """Convert a single message into Anthropic (content, role) form."""
    role = _message_type_lookups[message.type]
    if isinstance(message.content, str):
        return message.content, role

    if not isinstance(message.content, list):
        raise ValueError(
            "Anthropic message content must be str or list of dicts, "
            f"instead was: {type(message.content)}"
        )

    content = []
    for item in message.content:
        if isinstance(item, str):
            content.append({"type": "text", "text": item})
        elif isinstance(item, dict):
            if "type" not in item:
                raise ValueError("Dict content item must have a type key")
            if item["type"] == "image_url":
                source = _format_image(item["image_url"]["url"])
                content.append({"type": "image", "source": source})
            else:
                content.append(item)
        else:
            raise ValueError(
                f"Content items must be str or dict, instead was: {type(item)}"
            )
    return content, role

def _format_anthropic_messages(
    messages: List[BaseMessage],
) -> Tuple[Optional[str], List[Dict]]:
    """Format messages for Anthropic; returns (system, formatted_messages)."""
    if isinstance(messages, BaseMessage):
        # Allow a single message to be passed directly.
        if messages.type == "system":
            system, _ = _handle_anthropic_system_message(messages)
            return system, []
        content, role = _handle_anthropic_message(messages)
        return None, [{"role": role, "content": content}]

    system = None
    formatted_messages = []
    for i, message in enumerate(messages):
        if message.type == "system":
            if i != 0:
                raise ValueError("System message must be at beginning of message list.")
            system, _ = _handle_anthropic_system_message(message)
            continue
        content, role = _handle_anthropic_message(message)
        formatted_messages.append({"role": role, "content": content})
    return system, formatted_messages

@bigbernnn (Contributor, Author) commented May 27, 2024:
Implemented and tested.

@bigbernnn bigbernnn changed the title Support for function binding with generate from ChatBedrock Support for generate and return tool_calls with Anthropic models May 30, 2024
@Benjaminrivard commented:

Hi, is there anything we can do to help get this PR merged?

@marcoBongio commented:

Hi, is the tool_choice argument of the bind_tools method functioning? I would like to force the LLM to use a tool, but it does not seem to work. Are there any workarounds? Thank you in advance.
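For context on the question above: Anthropic's Messages API itself accepts a `tool_choice` field that can force a specific tool (`{"type": "tool", "name": ...}`) or require some tool (`{"type": "any"}`); whether this PR's `bind_tools` forwards that field is exactly what the commenter is asking and is not confirmed here. A small sketch of the request shape the underlying API expects, with `make_tool_choice` as a hypothetical helper:

```python
def make_tool_choice(name=None):
    """Build an Anthropic-style tool_choice payload.

    Forces the named tool when `name` is given; otherwise requires
    that the model use some tool ("any").
    """
    if name is not None:
        return {"type": "tool", "name": name}
    return {"type": "any"}

# Force the GetWeather tool from the PR description's example.
print(make_tool_choice("GetWeather"))
```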

@ssg-kstewart commented:

For those coming across this PR in search of a solution: I had an issue where the tools were not actually being called. I have created a PR against the fork created by @bigbernnn, which can be found here; it has resolved my related issue.

@devesh2003 commented:

Can this please be reviewed and merged? I can't switch my existing codebase to Bedrock because of this error.

# llm = ChatOpenAI(model="gpt-4o", openai_api_key=OPENAI_API_KEY)  # works fine
# llm = ChatAnthropic(model="claude-3-opus-20240229")  # works fine
llm = ChatBedrock(
    model_id="anthropic.claude-3-sonnet-20240229-v1:0",
    model_kwargs=dict(temperature=0),
)  # raises the error below

ValueError: System message must be a string, instead was: <class 'list'>
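The error above occurs because some callers pass system messages whose content is a list of content blocks rather than a plain string. A hedged workaround sketch, where `flatten_system_content` is a hypothetical helper (not part of the library) that coerces list-style content into the single string ChatBedrock's validation expects:

```python
def flatten_system_content(content):
    """Coerce str-or-list message content into one plain string.

    List items may be bare strings or Anthropic-style {"type": "text", ...}
    dicts; non-text blocks are skipped.
    """
    if isinstance(content, str):
        return content
    parts = []
    for item in content:
        if isinstance(item, str):
            parts.append(item)
        elif isinstance(item, dict) and item.get("type") == "text":
            parts.append(item["text"])
    return "\n".join(parts)

# A list-of-blocks system prompt becomes a single string.
print(flatten_system_content([{"type": "text", "text": "You are terse."}]))
```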

7 participants