Lack of logging in the langchain_aws package #245
@cab938
```
from langchain_core.globals import set_debug
from langchain_aws import ChatBedrockConverse

set_debug(True)
llm = ChatBedrockConverse(model="anthropic.claude-3-haiku-20240307-v1:0")
llm.invoke("What is the capital of France?")
```
Output:
```
[llm/start] [llm:ChatBedrockConverse] Entering LLM run with input:
{
  "prompts": [
    "Human: What is the capital of France?"
  ]
}
[llm/end] [llm:ChatBedrockConverse] [261ms] Exiting LLM run with output:
{
  "generations": [
    [
      {
        "text": "The capital of France is Paris.",
        "generation_info": null,
        "type": "ChatGeneration",
        "message": {
          "lc": 1,
          "type": "constructor",
          "id": [
            "langchain",
            "schema",
            "messages",
            "AIMessage"
          ],
          "kwargs": {
            "content": "The capital of France is Paris.",
            "response_metadata": {
              "ResponseMetadata": {
                "RequestId": "e411108a-0918-4d28-8030-a9b16f7c8ea1",
                "HTTPStatusCode": 200,
                "HTTPHeaders": {
                  "date": "Mon, 21 Oct 2024 19:00:52 GMT",
                  "content-type": "application/json",
                  "content-length": "212",
                  "connection": "keep-alive",
                  "x-amzn-requestid": "e411108a-0918-4d28-8030-a9b16f7c8ea1"
                },
                "RetryAttempts": 0
              },
              "stopReason": "end_turn",
              "metrics": {
                "latencyMs": 220
              }
            },
            "type": "ai",
            "id": "run-15c8eb2f-7b15-44cd-8544-809ca1692036-0",
            "usage_metadata": {
              "input_tokens": 14,
              "output_tokens": 10,
              "total_tokens": 24
            },
            "tool_calls": [],
            "invalid_tool_calls": []
          }
        }
      }
    ]
  ],
  "llm_output": null,
  "run": null,
  "type": "LLMResult"
}
```
@cab938
Added `logger_util` to enable package and class wide logging in `langchain-aws`. Added logging for `invoke` and `ainvoke`. langchain-ai#245
Added logging in `langchain-aws` for `invoke` and `converse`.

### Behaviour:
`export LANGCHAIN_AWS_DEBUG=True` --> Enables all application debug messages

### Using ChatBedrockConverse:
```
def getChatBedrockConverse():
    return ChatBedrockConverse(
        model_id="anthropic.claude-3-haiku-20240307-v1:0",
        region_name="us-east-1",
    )

llm = getChatBedrockConverse()

# Invoke the llm
response = llm.invoke("What is 2 times 10?")
```

#### Logging output:
```
(base) vishankp@7cf34de71c79 aws % poetry run python demo.py
2025-01-30 19:27:39,384 INFO | [bedrock_converse.py:514]| langchain_aws - The input message: [HumanMessage(content='What is 2 times 10?', additional_kwargs={}, response_metadata={})]
2025-01-30 19:27:39,384 DEBUG | [bedrock_converse.py:516]| langchain_aws - input message to bedrock: [{'role': 'user', 'content': [{'text': 'What is 2 times 10?'}]}]
2025-01-30 19:27:39,384 DEBUG | [bedrock_converse.py:517]| langchain_aws - System message to bedrock: []
2025-01-30 19:27:39,384 DEBUG | [bedrock_converse.py:521]| langchain_aws - Input params: {'modelId': 'anthropic.claude-3-haiku-20240307-v1:0', 'inferenceConfig': {}}
2025-01-30 19:27:39,384 INFO | [bedrock_converse.py:522]| langchain_aws - Using Bedrock Converse API to generate response
2025-01-30 19:27:39,953 DEBUG | [bedrock_converse.py:526]| langchain_aws - Response from Bedrock: {'ResponseMetadata': {'RequestId': 'xxxxxxxx', 'HTTPStatusCode': 200, 'HTTPHeaders': {'date': 'Fri, 31 Jan 2025 03:27:40 GMT', 'content-type': 'application/json', 'content-length': '192', 'connection': 'keep-alive', 'x-amzn-requestid': 'xxxxxxxxxx'}, 'RetryAttempts': 0}, 'output': {'message': {'role': 'assistant', 'content': [{'text': '2 x 10 = 20'}]}}, 'stopReason': 'end_turn', 'usage': {'inputTokens': 16, 'outputTokens': 13, 'totalTokens': 29}, 'metrics': {'latencyMs': 259}}
```

### Using ChatBedrock to make an Invoke call:
```
def getChatBedrock():
    return ChatBedrock(
        model_id="anthropic.claude-3-haiku-20240307-v1:0",
        region_name="us-east-1",
    )

llm = getChatBedrock()

# Invoke the llm
response = llm.invoke("What is 2 times 10?")
```

#### Logging output
```
(base) vishankp@7cf34de71c79 aws % poetry run python demo.py
2025-01-30 19:34:59,823 INFO | [bedrock.py:530]| langchain_aws - The input message: [HumanMessage(content='What is 2 times 10?', additional_kwargs={}, response_metadata={})]
2025-01-30 19:34:59,823 DEBUG | [bedrock.py:830]| langchain_aws - Request body sent to bedrock: {'body': '{"anthropic_version": "bedrock-2023-05-31", "messages": [{"role": "user", "content": "What is 2 times 10?"}], "max_tokens": 1024}', 'modelId': 'anthropic.claude-3-haiku-20240307-v1:0', 'accept': 'application/json', 'contentType': 'application/json'}
2025-01-30 19:34:59,823 INFO | [bedrock.py:831]| langchain_aws - Using Bedrock Invoke API to generate response
2025-01-30 19:35:00,709 DEBUG | [bedrock.py:841]| langchain_aws - Response received from Bedrock: {'ResponseMetadata': {'RequestId': 'xxxxxxxxxxx', 'HTTPStatusCode': 200, 'HTTPHeaders': {'date': 'Fri, 31 Jan 2025 03:35:00 GMT', 'content-type': 'application/json', 'content-length': '260', 'connection': 'keep-alive', 'x-amzn-requestid': 'xxxxxxxxxxxxx', 'x-amzn-bedrock-invocation-latency': '342', 'x-amzn-bedrock-output-token-count': '14', 'x-amzn-bedrock-input-token-count': '16'}, 'RetryAttempts': 0}, 'contentType': 'application/json', 'body': <botocore.response.StreamingBody object at 0x10553eb30>}
2025-01-30 19:35:00,709 INFO | [bedrock.py:593]| langchain_aws - The message received from Bedrock: 2 times 10 is 20.
```

---------
Co-authored-by: Vishal Patil <[email protected]>
Co-authored-by: Michael Chin <[email protected]>
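For readers skimming the thread, here is a minimal sketch of what a package-wide logging helper along the lines of the `logger_util` described above could look like. The module layout, the `get_logger` name, and the log format are assumptions for illustration only and are not taken from the merged PR; the `LANGCHAIN_AWS_DEBUG` environment variable is the behaviour described above.

```
# Illustrative sketch only -- not the actual langchain-aws implementation.
# Assumed: a helper module exposing get_logger(), gated on LANGCHAIN_AWS_DEBUG.
import logging
import os


def get_logger(name: str = "langchain_aws") -> logging.Logger:
    """Return a logger that emits DEBUG output when LANGCHAIN_AWS_DEBUG is set."""
    logger = logging.getLogger(name)
    if os.environ.get("LANGCHAIN_AWS_DEBUG", "").lower() in ("true", "1"):
        handler = logging.StreamHandler()
        handler.setFormatter(logging.Formatter(
            "%(asctime)s %(levelname)s | [%(filename)s:%(lineno)d]| %(name)s - %(message)s"
        ))
        logger.addHandler(handler)
        logger.setLevel(logging.DEBUG)
    return logger
```

A class such as `ChatBedrockConverse` could then create a module-level `logger = get_logger(__name__)` and call `logger.debug(...)` around each request to and response from Bedrock, which is the kind of call that produces the output shown above.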
The `langchain_aws` package, in particular the `ChatBedrockConverse` class, has no standard Python logging in it. This means visibility into issues is dramatically reduced, and one has to either log at the boto3 level or reach across into AWS cloud resources. It would be useful to add the more common Python logging to the various classes in the `langchain_aws` package.

As an example real use case, I want to see the body of any requests and responses sent to boto so I can understand how the language model might be interpreting some of the data. I expected I would see them if I did something like:
logging.getLogger('langchain_aws').setLevel(logging.DEBUG)
However, that doesn't do anything because the module doesn't log, and the best I could do was:
logging.getLogger('boto3').setLevel(logging.DEBUG)
This shows me the request headers (not the body, which is a problem) and the responses (including the body, yay!).
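One detail worth noting about the two one-liners above: `setLevel` only controls filtering, so a handler also has to be attached (for example via `logging.basicConfig`) before any records are printed, and in practice the detailed request/response messages come from the `botocore` loggers that boto3 delegates to. A minimal sketch of the setup the issue is describing, assuming nothing beyond the standard library and the existing boto logging:

```
import logging

# Attach a handler; setLevel() alone prints nothing without one.
logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(name)s - %(message)s",
)

# What this issue asks to make meaningful once langchain_aws emits records:
logging.getLogger("langchain_aws").setLevel(logging.DEBUG)

# Current workaround: botocore's DEBUG logging shows request headers and response bodies.
logging.getLogger("botocore").setLevel(logging.DEBUG)
```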