Fixing prompty threadpool implementation (#3307)
# Description

Please add an informative description that covers the changes made by
the pull request and link all relevant issues.

# All Promptflow Contribution checklist:
- [ ] **The pull request does not introduce [breaking changes].**
- [ ] **CHANGELOG is updated for new features, bug fixes or other
significant changes.**
- [ ] **I have read the [contribution guidelines](../CONTRIBUTING.md).**
- [ ] **Create an issue and link to the pull request to get dedicated
review from promptflow team. Learn more: [suggested
workflow](../CONTRIBUTING.md#suggested-workflow).**

## General Guidelines and Best Practices
- [ ] Title of the pull request is clear and informative.
- [ ] There are a small number of commits, each of which have an
informative message. This means that previously merged commits do not
appear in the history of the PR. For more information on cleaning up the
commits in your PR, [see this
page](https://github.com/Azure/azure-powershell/blob/master/documentation/development-docs/cleaning-up-commits.md).

### Testing Guidelines
- [ ] Pull request includes test coverage for the included changes.
singankit authored May 17, 2024
1 parent 21bc040 commit 5cc3847
Showing 2 changed files with 10 additions and 2 deletions.
```diff
@@ -52,9 +52,13 @@ def _calculate_metric(self, evaluator, input_df, column_mapping, evaluator_name)
     row_metric_futures = []
     row_metric_results = []
     input_df = _apply_column_mapping(input_df, column_mapping)
-    parameters = {param.name for param in inspect.signature(evaluator).parameters.values()}
+    # Ignoring args and kwargs from the signature since they are usually catching extra arguments
+    parameters = {param.name for param in inspect.signature(evaluator).parameters.values()
+                  if param.name not in ['args', 'kwargs']}
     for value in input_df.to_dict("records"):
-        filtered_values = {k: v for k, v in value.items() if k in parameters}
+        # Filter out only the parameters that are present in the input data;
+        # if no parameters then pass data as is
+        filtered_values = {k: v for k, v in value.items() if k in parameters} if len(parameters) > 0 else value
         row_metric_futures.append(self._thread_pool.submit(evaluator, **filtered_values))

     for row_number, row_metric_future in enumerate(row_metric_futures):
```
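The fix above can be sketched in isolation. The following is a minimal, self-contained illustration of the technique — signature-based filtering of row fields before submitting work to a thread pool — using hypothetical evaluator names (`exact_match`, `passthrough`) that are not part of the PR:

```python
import inspect
from concurrent.futures import ThreadPoolExecutor


def filter_row_for_evaluator(evaluator, row):
    # Collect the evaluator's named parameters, skipping *args/**kwargs,
    # which usually just catch extra arguments.
    parameters = {param.name for param in inspect.signature(evaluator).parameters.values()
                  if param.name not in ['args', 'kwargs']}
    # If the evaluator declares no named parameters (e.g. it is kwargs-only,
    # as some prompty-based evaluators are), pass the whole row unchanged.
    return {k: v for k, v in row.items() if k in parameters} if parameters else row


# Hypothetical evaluators for illustration:
def exact_match(response, ground_truth):
    # Named parameters only: receives just the matching row fields.
    return {"exact_match": int(response == ground_truth)}


def passthrough(**kwargs):
    # kwargs-only: receives the full row, since it declares no named parameters.
    return {"keys_seen": sorted(kwargs)}


row = {"response": "42", "ground_truth": "42", "extra": "ignored"}

with ThreadPoolExecutor() as pool:
    f1 = pool.submit(exact_match, **filter_row_for_evaluator(exact_match, row))
    f2 = pool.submit(passthrough, **filter_row_for_evaluator(passthrough, row))
    print(f1.result())  # {'exact_match': 1}
    print(f2.result())  # {'keys_seen': ['extra', 'ground_truth', 'response']}
```

Without the kwargs-only fallback, a prompty evaluator declaring only `**kwargs` would have received an empty argument set; falling back to the full row is what the PR's change restores.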
src/promptflow-evals/tests/evals/e2etests/test_evaluate.py — 4 additions, 0 deletions
```diff
@@ -360,3 +360,7 @@ def test_evaluate_track_in_cloud_no_target(
         assert remote_run is not None
         assert remote_run["runMetadata"]["properties"]["_azureml.evaluation_run"] == "azure-ai-generative-parent"
         assert remote_run["runMetadata"]["displayName"] == evaluation_name
+
+    @pytest.mark.skip(reason="TODO: Add test back")
+    def test_prompty_with_threadpool_implementation(self):
+        pass
```
