Changes langfuse LLM API call tracing to generation #397
Conversation
Walkthrough
The changes modify `cognee/infrastructure/llm/openai/adapter.py`: the Langfuse `@observe` decorator on the async structured-output method is changed to `@observe(as_type='generation')`, so the LLM API call is traced as a generation rather than a default trace, and the Langfuse import is now gated on the configured monitoring tool.
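In concrete terms, the core change looks roughly like this (a sketch inferred from the hunks quoted in the review below; the class and async method names, and the `MonitoringTool` import path, are assumptions, since only the synchronous `create_structured_output` signature appears in this review):

```python
from typing import Type
from pydantic import BaseModel
from cognee.base_config import get_base_config
from cognee.shared.data_models import MonitoringTool  # import path assumed

# Langfuse is imported only when it is the configured monitoring tool.
monitoring = get_base_config().monitoring_tool
if monitoring == MonitoringTool.LANGFUSE:
    from langfuse.decorators import observe

class OpenAIAdapter:  # class name assumed from the file path
    @observe(as_type='generation')  # previously a plain @observe
    async def acreate_structured_output(  # async method name assumed
        self, text_input: str, system_prompt: str, response_model: Type[BaseModel]
    ) -> BaseModel:
        # Traced by Langfuse as a generation rather than a default trace.
        ...
```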
Actionable comments posted: 1
🧹 Nitpick comments (1)
cognee/infrastructure/llm/openai/adapter.py (1)
Line range hint 71-71: Consider updating synchronous method decorator for consistency.
The synchronous `create_structured_output` method has similar functionality but still uses the default `@observe` decorator. Consider updating it to `@observe(as_type='generation')` for consistency with its async counterpart.

```diff
- @observe
+ @observe(as_type='generation')
  def create_structured_output(self, text_input: str, system_prompt: str, response_model: Type[BaseModel]) -> BaseModel:
```
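For context on why the generation type matters: in the Langfuse decorator API, an observation marked as a generation can carry LLM-specific fields such as model name and input/output. A minimal sketch, assuming the Langfuse v2 Python SDK; the model name and function are illustrative, not from this PR:

```python
from langfuse.decorators import observe, langfuse_context

@observe(as_type='generation')
def call_llm(prompt: str) -> str:
    completion = "..."  # placeholder for the actual LLM response
    # Generations accept LLM-specific metadata that plain spans do not.
    langfuse_context.update_current_observation(
        model="gpt-4o",  # illustrative model name
        input=prompt,
        output=completion,
    )
    return completion
```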
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
cognee/infrastructure/llm/openai/adapter.py (2 hunks)
🔇 Additional comments (1)
cognee/infrastructure/llm/openai/adapter.py (1)
47-47: LGTM! Verify consistent usage of generation type.
The change from default tracing to explicit generation type aligns with the PR objective.
Let's verify if this change should be applied consistently:
The comment below is anchored on this hunk from the adapter:

```python
monitoring = get_base_config().monitoring_tool
if monitoring == MonitoringTool.LANGFUSE:
    from langfuse.decorators import observe
```
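One detail worth noting about this pattern: with the import gated at module level, `observe` is undefined whenever Langfuse is not the configured tool, so any `@observe`-decorated definition would fail at import time unless a fallback exists elsewhere. A minimal safeguard, sketched as a suggestion rather than as what the adapter actually does (the `MonitoringTool` import path is assumed):

```python
from cognee.base_config import get_base_config
from cognee.shared.data_models import MonitoringTool  # import path assumed

monitoring = get_base_config().monitoring_tool
if monitoring == MonitoringTool.LANGFUSE:
    from langfuse.decorators import observe
else:
    def observe(*args, **kwargs):
        # No-op stand-in supporting both bare @observe and @observe(as_type=...).
        if len(args) == 1 and callable(args[0]) and not kwargs:
            return args[0]  # used as a bare decorator
        def decorator(func):
            return func
        return decorator
```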
💡 Codebase verification
Inconsistent monitoring configuration pattern needs attention
The configuration-based approach for monitoring tool selection is not consistently implemented across the codebase:
- cognee/infrastructure/llm/openai/adapter.py (reviewed file): uses a configuration check with a conditional import at module level
- cognee/infrastructure/llm/generic_llm_api/adapter.py: uses a configuration check inside a method with an inline import
- cognee/api/v1/cognify/code_graph_pipeline.py: uses a direct configuration check at module level, similar to the reviewed file
- cognee/base_config.py: sets LANGFUSE as the default monitoring tool
The codebase shows three different patterns for handling the monitoring tool configuration and imports. These should be unified for better maintainability.
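One way to unify the three patterns, sketched as a suggestion (the module path, helper name, and `MonitoringTool` import path are invented for illustration):

```python
# Hypothetical shared helper, e.g. cognee/infrastructure/monitoring.py.
from cognee.base_config import get_base_config
from cognee.shared.data_models import MonitoringTool  # import path assumed

def get_observe():
    """Return Langfuse's observe decorator when Langfuse is the configured
    monitoring tool, otherwise a no-op decorator with the same call shapes."""
    if get_base_config().monitoring_tool == MonitoringTool.LANGFUSE:
        from langfuse.decorators import observe
        return observe

    def _observe(*args, **kwargs):
        # Handle both bare @observe and parameterized @observe(as_type=...).
        if len(args) == 1 and callable(args[0]) and not kwargs:
            return args[0]
        def decorator(func):
            return func
        return decorator
    return _observe

# Every adapter would then do:
#   observe = get_observe()
#
#   @observe(as_type='generation')
#   async def acreate_structured_output(...): ...
```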
🔗 Analysis chain
LGTM! Verify consistent usage of monitoring configuration.
Good improvement moving from hardcoded check to configuration-based approach for monitoring tool selection.
Let's verify consistent usage of monitoring configuration across the codebase:
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:

```bash
#!/bin/bash
# Search for other direct MonitoringTool.LANGFUSE checks that might need similar updates
rg -l "MonitoringTool\.LANGFUSE"

# Search for other langfuse imports that might need to be conditional
rg "from langfuse"
```

Length of output: 505
Script:

```bash
#!/bin/bash
# Check how MonitoringTool.LANGFUSE is used in other files
rg -A 5 "MonitoringTool\.LANGFUSE" cognee/infrastructure/llm/generic_llm_api/adapter.py cognee/base_config.py cognee/api/v1/cognify/code_graph_pipeline.py

# Check the context of langfuse imports in other files
rg -B 2 -A 2 "from langfuse" cognee/infrastructure/llm/generic_llm_api/adapter.py cognee/api/v1/cognify/code_graph_pipeline.py
```

Length of output: 2938
Summary by CodeRabbit
New Features
Bug Fixes