Validate llm_config passed to ConversableAgent (issue #1522) #1654
Merged
Conversation
sonichi reviewed on Feb 13, 2024
Codecov Report

Additional details and impacted files:

@@            Coverage Diff             @@
##             main    #1654      +/-   ##
===========================================
+ Coverage   37.85%   49.68%   +11.83%
===========================================
  Files          50       50
  Lines        5733     5742        +9
  Branches     1297     1410      +113
===========================================
+ Hits         2170     2853      +683
+ Misses       3391     2659      -732
- Partials      172      230       +58

Flags with carried forward coverage won't be shown. View full report in Codecov by Sentry.
ekzhu approved these changes on Feb 14, 2024
afourney added a commit that referenced this pull request on Feb 15, 2024:
* Logging (#1146): WIP logging; serialize request, response and client; update to use a global package and add test cases; add cost logging; log new agents; update the log_completion test in test_agent_telemetry; add telemetry for wrappers and clients; add tests for the oai client and oai wrapper tables; update test_telemetry; move session id handling into start_logging and have it return a str; add a notebook demonstrating use of telemetry and the ability to get a log dataframe; remove the pandas dependency from telemetry and use it only in the notebook; log query exceptions; add the notebook link to the doc and fix groupchat serialization; do not serialize Agent; fix a serialization bug for the soc moderator; add a version table; make the logging interface more general and fix client model logging; rename telemetry to logging; update the notebook, docs, and migration guide; assorted formatting, CI, and test fixes.

  Co-authored-by: Adam Fourney <[email protected]>
  Co-authored-by: Victor Dibia <[email protected]>
  Co-authored-by: Chi Wang <[email protected]>

* Validate llm_config passed to ConversableAgent (issue #1522) (#1654)

  Based on #1522, this commit implements the additional validation checks in `ConversableAgent`. Add the following validation and `raise ValueError` if:
  - The `llm_config` is `None` (validated in `ConversableAgent`).
  - The `llm_config` has no `model` specified and `config_list` is empty (validated in `OpenAIWrapper`).
  - The `config_list` has at least one entry, but not all entries have a `model` specified (validated in `OpenAIWrapper`).

  The rest of the changes are code churn to adjust or add the test cases.

* Fix the test_web_surfer issue. For anyone reading this: you need to `pip install markdownify` for the `import WebSurferAgent` to succeed; that is needed to run `test_web_surfer.py` locally. Test logic needs an `llm_config` that is not `None` and not `False`. Let us pray that this works as part of GitHub Actions ...

* One more fix for the llm_config validation contract

* update test_utils

* remove stale files

Co-authored-by: Adam Fourney <[email protected]>
Co-authored-by: Victor Dibia <[email protected]>
Co-authored-by: Chi Wang <[email protected]>
Co-authored-by: Gunnar Kudrjavets <[email protected]>
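The test-related notes above boil down to a simple contract: agents constructed in tests need an `llm_config` that is neither `None` nor `False`, and an invalid configuration should fail fast. Below is a minimal sketch of what a test for that contract could look like, assuming the behavior described in this PR; the test names and placeholder config values are illustrative, not the repository's actual tests.

```python
import pytest

from autogen import ConversableAgent


def test_llm_config_none_is_rejected():
    # Per the validation described in this PR, llm_config=None should raise ValueError.
    with pytest.raises(ValueError):
        ConversableAgent(name="agent", llm_config=None)


def test_llm_config_with_model_is_accepted():
    # A minimal valid configuration: a config_list with one entry that names a model.
    # The api_key is a placeholder; constructing the agent alone does not call the API.
    agent = ConversableAgent(
        name="agent",
        llm_config={"config_list": [{"model": "gpt-4", "api_key": "sk-placeholder"}]},
    )
    assert agent.name == "agent"
```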
whiskyboy pushed a commit to whiskyboy/autogen that referenced this pull request on Apr 17, 2024:
…microsoft#1654)

* Validate llm_config passed to ConversableAgent

  Based on microsoft#1522, this commit implements the additional validation checks in `ConversableAgent`. Add the following validation and `raise ValueError` if:
  - The `llm_config` is `None` (validated in `ConversableAgent`).
  - The `llm_config` has no `model` specified and `config_list` is empty (validated in `OpenAIWrapper`).
  - The `config_list` has at least one entry, but not all entries have a `model` specified (validated in `OpenAIWrapper`).

  The rest of the changes are code churn to adjust or add the test cases.

* Fix the test_web_surfer issue. For anyone reading this: you need to `pip install markdownify` for the `import WebSurferAgent` to succeed; that is needed to run `test_web_surfer.py` locally. Test logic needs an `llm_config` that is not `None` and not `False`. Let us pray that this works as part of GitHub Actions ...

* One more fix for the llm_config validation contract
Based on #1522 (and a follow-up discussion in this PR), this commit implements the additional validation checks in `ConversableAgent`.

Add the following validation and `raise ValueError` if:
- The `llm_config` is `None` (validated in `ConversableAgent`).
- The `llm_config` has no `model` specified and `config_list` is empty (validated in `OpenAIWrapper`).
- The `config_list` has at least one entry, but not all entries have a `model` specified (validated in `OpenAIWrapper`).

The rest of the changes are code churn to adjust or add the test cases.
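Taken together, the three rules form a small validation contract. The sketch below is an illustrative, standalone restatement of that contract, not the actual AutoGen implementation; the function name `validate_llm_config` and its exact error messages are assumptions for demonstration only.

```python
from typing import Any, Dict, List, Optional, Union


def validate_llm_config(llm_config: Optional[Union[Dict[str, Any], bool]]) -> None:
    """Illustrative restatement of the contract described in this PR (not the real implementation)."""
    # Rule 1: llm_config must not be None (the PR validates this in ConversableAgent).
    if llm_config is None:
        raise ValueError("llm_config must not be None; pass a dict, or False to disable LLM use.")

    # AutoGen uses llm_config=False to explicitly disable LLM calls, so there is nothing to validate.
    if llm_config is False:
        return

    config_list: List[Dict[str, Any]] = llm_config.get("config_list", [])

    # Rule 2: without a top-level "model", config_list must be non-empty
    # (the PR validates this in OpenAIWrapper).
    if not llm_config.get("model") and not config_list:
        raise ValueError("llm_config must specify a 'model' or a non-empty 'config_list'.")

    # Rule 3: every config_list entry must specify a "model"
    # (the PR validates this in OpenAIWrapper).
    for index, entry in enumerate(config_list):
        if not entry.get("model"):
            raise ValueError(f"config_list entry {index} is missing the required 'model' key.")
```

For example, `validate_llm_config({"config_list": [{"model": "gpt-4"}]})` passes silently, while `validate_llm_config({"config_list": [{}]})` raises because the single entry lacks a `model`.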
Why are these changes needed?

The root cause is #1522. The intent here is to enforce the contract for the input parameters passed to `ConversableAgent`. With these changes, an invalid `llm_config` raises a `ValueError` when the agent is constructed instead of failing later at call time.

Related issue number
Hopefully fixes #1522.
Checks