[pf-evals] Fix the user agent not populated (#3309)
# Description

This PR fixes the user agent not being populated on evaluation result requests.
`get_details` must be called within the `BatchRunContext` scope, so per-evaluator
results are now fetched inside the context manager and stored, then reused when
concatenating results. The dict `evaluator_info` is also renamed to
`evaluators_info` so it no longer collides with the per-evaluator loop variable.

# All Promptflow Contribution checklist:
- [ ] **The pull request does not introduce [breaking changes].**
- [ ] **CHANGELOG is updated for new features, bug fixes or other
significant changes.**
- [ ] **I have read the [contribution guidelines](../CONTRIBUTING.md).**
- [ ] **Create an issue and link to the pull request to get dedicated
review from promptflow team. Learn more: [suggested
workflow](../CONTRIBUTING.md#suggested-workflow).**

## General Guidelines and Best Practices
- [ ] Title of the pull request is clear and informative.
- [ ] There are a small number of commits, each of which have an
informative message. This means that previously merged commits do not
appear in the history of the PR. For more information on cleaning up the
commits in your PR, [see this
page](https://github.com/Azure/azure-powershell/blob/master/documentation/development-docs/cleaning-up-commits.md).

### Testing Guidelines
- [ ] Pull request includes test coverage for the included changes.
ninghu authored May 17, 2024
1 parent 5cc3847 commit 94c85e3
Showing 1 changed file with 9 additions and 5 deletions.
14 changes: 9 additions & 5 deletions src/promptflow-evals/promptflow/evals/evaluate/_evaluate.py

```diff
@@ -372,14 +372,14 @@ def evaluate(
     _validate_columns(input_data_df, evaluators, target=None, evaluator_config=evaluator_config)
 
     # Batch Run
-    evaluator_info = {}
+    evaluators_info = {}
     use_thread_pool = kwargs.get("_use_thread_pool", True)
     batch_run_client = CodeClient() if use_thread_pool else pf_client
 
     with BatchRunContext(batch_run_client):
         for evaluator_name, evaluator in evaluators.items():
-            evaluator_info[evaluator_name] = {}
-            evaluator_info[evaluator_name]["run"] = batch_run_client.run(
+            evaluators_info[evaluator_name] = {}
+            evaluators_info[evaluator_name]["run"] = batch_run_client.run(
                 flow=evaluator,
                 run=target_run,
                 evaluator_name=evaluator_name,
@@ -388,10 +388,14 @@ def evaluate(
                 stream=True,
             )
 
+        # get_details needs to be called within BatchRunContext scope in order to have user agent populated
+        for evaluator_name, evaluator_info in evaluators_info.items():
+            evaluator_info["result"] = batch_run_client.get_details(evaluator_info["run"], all_results=True)
+
     # Concatenate all results
     evaluators_result_df = None
-    for evaluator_name, evaluator_info in evaluator_info.items():
-        evaluator_result_df = batch_run_client.get_details(evaluator_info["run"], all_results=True)
+    for evaluator_name, evaluator_info in evaluators_info.items():
+        evaluator_result_df = evaluator_info["result"]
 
         # drop input columns
         evaluator_result_df = evaluator_result_df.drop(
```
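The fix hinges on a common context-manager pitfall: state installed in `__enter__` (here, the user agent) is torn down in `__exit__`, so any call that reads that state must happen inside the `with` block. A minimal, self-contained sketch of the pattern follows; all names (`batch_run_context`, `get_details`, the agent string) are illustrative stand-ins, not promptflow's actual API.

```python
# Sketch: a context manager that populates shared state (a user agent) only
# while the block is active, mirroring how BatchRunContext behaves.
import contextlib

USER_AGENT = None  # module-level state the context manager controls


@contextlib.contextmanager
def batch_run_context(agent="promptflow-evals/0.1"):
    """Set the user agent on enter, clear it on exit."""
    global USER_AGENT
    USER_AGENT = agent
    try:
        yield
    finally:
        USER_AGENT = None  # torn down as soon as the with-block exits


def get_details():
    """Stand-in for batch_run_client.get_details: reads the shared state."""
    return USER_AGENT


with batch_run_context():
    inside = get_details()   # user agent is populated here

outside = get_details()      # context has exited; user agent is gone

print(inside, outside)  # promptflow-evals/0.1 None
```

This is why the PR moves the `get_details` loop up into the `with BatchRunContext(...)` block: calling it after the block, as the old code did, reads the state after teardown.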
