Improved logging (#796)
* Simplify how we log stuff and get rid of the ContextualLogger

* Allow configuring the logger destination

* Use Langchain logger on OpenAI client requests

* Reduce verbosity of logger by replacing info msgs with debug

* Use Langchain logger on Google LLMs http requests

* Reuse logger formatter

* Revert unwanted changes

* Update changelog
dferrazm authored Sep 27, 2024
1 parent 44417ff commit 874c7e4
Showing 27 changed files with 191 additions and 270 deletions.
2 changes: 2 additions & 0 deletions CHANGELOG.md
@@ -1,6 +1,8 @@
## [Unreleased]
- Deprecate Langchain::LLM::GooglePalm
- Allow setting response_object: {} parameter when initializing supported Langchain::LLM::* classes
- Simplify and consolidate logging for some of the LLM providers (namely OpenAI and Google). Most HTTP requests are now logged at the DEBUG level
- Improve docs on how to set up a custom logger with a custom destination

## [0.16.0] - 2024-09-19
- Remove `Langchain::Thread` class as it was not needed.
1 change: 0 additions & 1 deletion Gemfile.lock
@@ -6,7 +6,6 @@ PATH
json-schema (~> 4)
matrix
pragmatic_segmenter (~> 0.3.0)
rainbow (~> 3.1.0)
zeitwerk (~> 2.5)

GEM
11 changes: 9 additions & 2 deletions README.md
@@ -626,11 +626,18 @@ Additional examples available: [/examples](https://github.com/andreibondarev/lan

## Logging

Langchain.rb uses standard logging mechanisms and defaults to `:warn` level. Most messages are at info level, but we will add debug or warn statements as needed.
Langchain.rb uses the standard Ruby [Logger](https://ruby-doc.org/stdlib-2.4.0/libdoc/logger/rdoc/Logger.html) mechanism and defaults to the same `level` value (currently `Logger::DEBUG`).

To show all log messages:

```ruby
Langchain.logger.level = :debug
Langchain.logger.level = Logger::DEBUG
```

The logger logs to `STDOUT` by default. To configure a different log destination (e.g., a file), do:

```ruby
Langchain.logger = Logger.new("path/to/file", **Langchain::LOGGER_OPTIONS)
```

## Problems
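The two README settings compose. A minimal sketch (assuming a writable `log/langchain.log` path) that reuses the gem's formatter while raising the level threshold:

```ruby
require "langchain"

# Log to a file, keeping the gem's progname and colorizing formatter.
Langchain.logger = Logger.new("log/langchain.log", **Langchain::LOGGER_OPTIONS)

# Raise the threshold: DEBUG-level HTTP traces are suppressed.
Langchain.logger.level = Logger::WARN
```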
1 change: 0 additions & 1 deletion langchain.gemspec
@@ -28,7 +28,6 @@ Gem::Specification.new do |spec|
# dependencies
# Not sure if we should require this as it only applies to the OpenAI use case.
spec.add_dependency "baran", "~> 0.1.9"
spec.add_dependency "rainbow", "~> 3.1.0"
spec.add_dependency "json-schema", "~> 4"
spec.add_dependency "zeitwerk", "~> 2.5"
spec.add_dependency "pragmatic_segmenter", "~> 0.3.0"
61 changes: 47 additions & 14 deletions lib/langchain.rb
@@ -2,7 +2,6 @@

require "logger"
require "pathname"
require "rainbow"
require "zeitwerk"
require "uri"
require "json"
@@ -92,24 +91,58 @@
# Langchain.logger.level = :info
module Langchain
class << self
# @return [ContextualLogger]
attr_reader :logger

# @param logger [Logger]
# @return [ContextualLogger]
def logger=(logger)
@logger = ContextualLogger.new(logger)
end

# @return [Logger]
attr_accessor :logger
# @return [Pathname]
attr_reader :root
end

self.logger ||= ::Logger.new($stdout, level: :debug)

@root = Pathname.new(__dir__)

module Errors
class BaseError < StandardError; end
end

module Colorizer
class << self
def red(str)
"\e[31m#{str}\e[0m"
end

def green(str)
"\e[32m#{str}\e[0m"
end

def yellow(str)
"\e[33m#{str}\e[0m"
end

def blue(str)
"\e[34m#{str}\e[0m"
end

def colorize_logger_msg(msg, severity)
return msg unless msg.is_a?(String)

return red(msg) if severity.to_sym == :ERROR
return yellow(msg) if severity.to_sym == :WARN
msg
end
end
end

LOGGER_OPTIONS = {
progname: "Langchain.rb",

formatter: ->(severity, time, progname, msg) do
Logger::Formatter.new.call(
severity,
time,
"[#{progname}]",
Colorizer.colorize_logger_msg(msg, severity)
)
end
}.freeze

self.logger ||= ::Logger.new($stdout, **LOGGER_OPTIONS)

@root = Pathname.new(__dir__)
end
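To make the new `LOGGER_OPTIONS` concrete, here is a small sketch of what the formatter emits; the timestamp and pid are illustrative:

```ruby
require "langchain"

logger = Logger.new($stdout, **Langchain::LOGGER_OPTIONS)

logger.debug("Sending a call to Langchain::LLM::OpenAI")
# D, [2024-09-27T12:00:00.000000 #42] DEBUG -- [Langchain.rb]: Sending a call to Langchain::LLM::OpenAI

logger.warn("No messages to process")  # message wrapped in ANSI yellow by Colorizer
logger.error("boom")                   # message wrapped in ANSI red by Colorizer
```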
12 changes: 6 additions & 6 deletions lib/langchain/assistants/assistant.rb
@@ -122,7 +122,7 @@ def add_messages(messages:)
# @return [Array<Langchain::Message>] The messages
def run(auto_tool_execution: false)
if messages.empty?
Langchain.logger.warn("No messages to process")
Langchain.logger.warn("#{self.class} - No messages to process")
@state = :completed
return
end
@@ -272,7 +272,7 @@ def process_latest_message
#
# @return [Symbol] The completed state
def handle_system_message
Langchain.logger.warn("At least one user message is required after a system message")
Langchain.logger.warn("#{self.class} - At least one user message is required after a system message")
:completed
end

@@ -287,7 +287,7 @@ def handle_llm_message
#
# @return [Symbol] The failed state
def handle_unexpected_message
Langchain.logger.error("Unexpected message role encountered: #{messages.last.standard_role}")
Langchain.logger.error("#{self.class} - Unexpected message role encountered: #{messages.last.standard_role}")
:failed
end

@@ -311,7 +311,7 @@ def set_state_for(response:)
elsif response.completion # Currently only used by Ollama
:completed
else
Langchain.logger.error("LLM response does not contain tool calls, chat or completion response")
Langchain.logger.error("#{self.class} - LLM response does not contain tool calls, chat or completion response")
:failed
end
end
@@ -323,7 +323,7 @@ def execute_tools
run_tools(messages.last.tool_calls)
:in_progress
rescue => e
Langchain.logger.error("Error running tools: #{e.message}; #{e.backtrace.join('\n')}")
Langchain.logger.error("#{self.class} - Error running tools: #{e.message}; #{e.backtrace.join('\n')}")
:failed
end

@@ -355,7 +355,7 @@ def initialize_instructions
#
# @return [Langchain::LLM::BaseResponse] The LLM response object
def chat_with_llm
Langchain.logger.info("Sending a call to #{llm.class}", for: self.class)
Langchain.logger.debug("#{self.class} - Sending a call to #{llm.class}")

params = @llm_adapter.build_chat_params(
instructions: @instructions,
68 changes: 0 additions & 68 deletions lib/langchain/contextual_logger.rb

This file was deleted.

32 changes: 16 additions & 16 deletions lib/langchain/llm/google_gemini.rb
@@ -59,15 +59,7 @@ def chat(params = {})

uri = URI("https://generativelanguage.googleapis.com/v1beta/models/#{parameters[:model]}:generateContent?key=#{api_key}")

request = Net::HTTP::Post.new(uri)
request.content_type = "application/json"
request.body = parameters.to_json

response = Net::HTTP.start(uri.hostname, uri.port, use_ssl: uri.scheme == "https") do |http|
http.request(request)
end

parsed_response = JSON.parse(response.body)
parsed_response = http_post(uri, parameters)

wrapped_response = Langchain::LLM::GoogleGeminiResponse.new(parsed_response, model: parameters[:model])

@@ -95,17 +87,25 @@ def embed(

uri = URI("https://generativelanguage.googleapis.com/v1beta/models/#{model}:embedContent?key=#{api_key}")

request = Net::HTTP::Post.new(uri)
parsed_response = http_post(uri, params)

Langchain::LLM::GoogleGeminiResponse.new(parsed_response, model: model)
end

private

def http_post(url, params)
http = Net::HTTP.new(url.hostname, url.port)
http.use_ssl = url.scheme == "https"
http.set_debug_output(Langchain.logger) if Langchain.logger.debug?

request = Net::HTTP::Post.new(url)
request.content_type = "application/json"
request.body = params.to_json

response = Net::HTTP.start(uri.hostname, uri.port, use_ssl: uri.scheme == "https") do |http|
http.request(request)
end

parsed_response = JSON.parse(response.body)
response = http.request(request)

Langchain::LLM::GoogleGeminiResponse.new(parsed_response, model: model)
JSON.parse(response.body)
end
end
end
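The extracted `http_post` helper hinges on `Net::HTTP#set_debug_output`, which streams the raw wire traffic to any object responding to `<<`, and a `Logger` qualifies. A standalone sketch of the same pattern, against a hypothetical endpoint:

```ruby
require "net/http"
require "json"
require "logger"

logger = Logger.new($stdout, level: Logger::DEBUG)

uri = URI("https://api.example.com/v1/generate")  # hypothetical endpoint

http = Net::HTTP.new(uri.hostname, uri.port)
http.use_ssl = uri.scheme == "https"
# Dump the raw request/response, but only when the logger is at DEBUG.
http.set_debug_output(logger) if logger.debug?

request = Net::HTTP::Post.new(uri)
request.content_type = "application/json"
request.body = {prompt: "Hello"}.to_json

response = http.request(request)
parsed = JSON.parse(response.body)
```

Note that the debug dump includes request headers and the full URL, so credentials passed as query parameters (as with the Gemini `key=` above) will appear in the logs, which is why it is gated behind `debug?`.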
39 changes: 19 additions & 20 deletions lib/langchain/llm/google_vertex_ai.rb
@@ -63,16 +63,7 @@ def embed(

uri = URI("#{url}#{model}:predict")

request = Net::HTTP::Post.new(uri)
request.content_type = "application/json"
request["Authorization"] = "Bearer #{@authorizer.fetch_access_token!["access_token"]}"
request.body = params.to_json

response = Net::HTTP.start(uri.hostname, uri.port, use_ssl: uri.scheme == "https") do |http|
http.request(request)
end

parsed_response = JSON.parse(response.body)
parsed_response = http_post(uri, params)

Langchain::LLM::GoogleGeminiResponse.new(parsed_response, model: model)
end
@@ -96,16 +87,7 @@ def chat(params = {})

uri = URI("#{url}#{parameters[:model]}:generateContent")

request = Net::HTTP::Post.new(uri)
request.content_type = "application/json"
request["Authorization"] = "Bearer #{@authorizer.fetch_access_token!["access_token"]}"
request.body = parameters.to_json

response = Net::HTTP.start(uri.hostname, uri.port, use_ssl: uri.scheme == "https") do |http|
http.request(request)
end

parsed_response = JSON.parse(response.body)
parsed_response = http_post(uri, parameters)

wrapped_response = Langchain::LLM::GoogleGeminiResponse.new(parsed_response, model: parameters[:model])

@@ -115,5 +97,22 @@ def chat(params = {})
raise StandardError.new(parsed_response)
end
end

private

def http_post(url, params)
http = Net::HTTP.new(url.hostname, url.port)
http.use_ssl = url.scheme == "https"
http.set_debug_output(Langchain.logger) if Langchain.logger.debug?

request = Net::HTTP::Post.new(url)
request.content_type = "application/json"
request["Authorization"] = "Bearer #{@authorizer.fetch_access_token!["access_token"]}"
request.body = params.to_json

response = http.request(request)

JSON.parse(response.body)
end
end
end
6 changes: 5 additions & 1 deletion lib/langchain/llm/openai.rb
@@ -33,7 +33,11 @@ class OpenAI < Base
def initialize(api_key:, llm_options: {}, default_options: {})
depends_on "ruby-openai", req: "openai"

@client = ::OpenAI::Client.new(access_token: api_key, **llm_options, log_errors: true)
llm_options[:log_errors] = Langchain.logger.debug? unless llm_options.key?(:log_errors)

@client = ::OpenAI::Client.new(access_token: api_key, **llm_options) do |f|
f.response :logger, Langchain.logger, {headers: true, bodies: true, errors: true}
end

@defaults = DEFAULTS.merge(default_options)
chat_parameters.update(
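With this change, OpenAI request/response logging flows through Faraday's `:logger` middleware, and `log_errors` follows the Langchain logger level unless explicitly overridden. A usage sketch, assuming the API key lives in an environment variable:

```ruby
require "langchain"

# At DEBUG, Faraday logs OpenAI request/response headers and bodies via Langchain.logger.
Langchain.logger.level = Logger::DEBUG

llm = Langchain::LLM::OpenAI.new(api_key: ENV["OPENAI_API_KEY"])
llm.chat(messages: [{role: "user", content: "Hello!"}])

# Per-client opt-out of error logging is still possible:
quiet = Langchain::LLM::OpenAI.new(
  api_key: ENV["OPENAI_API_KEY"],
  llm_options: {log_errors: false}
)
```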
2 changes: 1 addition & 1 deletion lib/langchain/prompt/loading.rb
@@ -79,7 +79,7 @@ def load_few_shot_prompt(config)
def load_from_config(config)
# If `_type` key is not present in the configuration hash, add it with a default value of `prompt`
unless config.key?("_type")
Langchain.logger.warn "No `_type` key found, defaulting to `prompt`"
Langchain.logger.warn("#{self.class} - No `_type` key found, defaulting to `prompt`")
config["_type"] = "prompt"
end

2 changes: 1 addition & 1 deletion lib/langchain/tool/calculator.rb
@@ -28,7 +28,7 @@ def initialize
# @param input [String] math expression
# @return [String] Answer
def execute(input:)
Langchain.logger.info("Executing \"#{input}\"", for: self.class)
Langchain.logger.debug("#{self.class} - Executing \"#{input}\"")

Eqn::Calculator.calc(input)
rescue Eqn::ParseError, Eqn::NoVariableValueError