Users/singankit/fix remote extra requirement (Azure#38550)
* Update _evaluate.py

* Update setup.py

* Update dev_requirements.txt

* Update CHANGELOG.md
singankit authored Nov 15, 2024
1 parent 54d65c1 commit b63200a
Showing 5 changed files with 15 additions and 17 deletions.
12 changes: 4 additions & 8 deletions sdk/evaluation/azure-ai-evaluation/CHANGELOG.md
@@ -1,14 +1,10 @@
 # Release History
 
-## 1.1.0 (Unreleased)
-
-### Features Added
-
-### Breaking Changes
+## 1.0.1 (Unreleased)
 
 ### Bugs Fixed
-
-### Other Changes
+- Fixed `[remote]` extra to be needed only when tracking results in Azure AI Studio.
+- Removing `azure-ai-inference` as dependency.
 
 ## 1.0.0 (2024-11-13)

@@ -222,4 +218,4 @@ If `api_key` is not included in the `model_config`, the prompty runtime in `prom

- First preview
- This package is port of `promptflow-evals`. New features will be added only to this package moving forward.
- Added a `TypedDict` for `AzureAIProject` that allows for better intellisense and type checking when passing in project information
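The changelog entry above means a plain local install no longer pulls in the remote-tracking packages. A common way to surface a missing optional extra is a lazy import with an actionable error message. The sketch below illustrates that pattern only; the helper name `require_extra` and the message text are assumptions, not the SDK's actual code:

```python
import importlib
from types import ModuleType


def require_extra(module: str, extra: str, package: str = "azure-ai-evaluation") -> ModuleType:
    """Import a module that ships behind an optional extra, failing with an
    actionable message instead of a bare ImportError.

    Illustrative sketch only: this helper is hypothetical, not SDK code.
    """
    try:
        return importlib.import_module(module)
    except ImportError as exc:
        raise ImportError(
            f"This feature requires the [{extra}] extra. "
            f"Install it with: pip install {package}[{extra}]"
        ) from exc
```

A remote-tracking code path could then call something like `require_extra("promptflow.azure", "remote")` just before uploading results, so local-only users never hit the import at all.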
_evaluate.py
@@ -810,13 +810,15 @@ def eval_batch_run(
         # Since tracing is disabled, pass None for target_run so a dummy evaluation run will be created each time.
         target_run = None
     trace_destination = _trace_destination_from_project_scope(azure_ai_project) if azure_ai_project else None
-    studio_url = _log_metrics_and_instance_results(
-        metrics,
-        result_df,
-        trace_destination,
-        target_run,
-        evaluation_name,
-    )
+    studio_url = None
+    if trace_destination:
+        studio_url = _log_metrics_and_instance_results(
+            metrics,
+            result_df,
+            trace_destination,
+            target_run,
+            evaluation_name,
+        )
 
     result_df_dict = result_df.to_dict("records")
     result: EvaluationResult = {"rows": result_df_dict, "metrics": metrics, "studio_url": studio_url}  # type: ignore
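Stripped of SDK internals, the control flow of this fix reduces to a guard around the remote logging call. In the sketch below, `log_to_studio` is a hypothetical stand-in for `_log_metrics_and_instance_results`, and the URL it returns is made up for illustration:

```python
from typing import Any, Dict, Optional


def log_to_studio(metrics: Dict[str, Any], trace_destination: str) -> str:
    # Hypothetical stand-in for _log_metrics_and_instance_results:
    # uploads results and returns the Azure AI Studio URL for the run.
    return f"https://example.invalid/studio?dest={trace_destination}"


def finalize_results(metrics: Dict[str, Any], trace_destination: Optional[str]) -> Optional[str]:
    # Mirrors the patched logic: the remote logging path (which needs
    # the [remote] extra) is only entered when a trace destination is
    # configured; local-only runs leave studio_url as None.
    studio_url = None
    if trace_destination:
        studio_url = log_to_studio(metrics, trace_destination)
    return studio_url
```

Because `trace_destination` is `None` whenever no `azure_ai_project` is supplied, purely local evaluations never reach the code that depends on the `[remote]` extra.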
_version.py
@@ -2,4 +2,4 @@
 # Copyright (c) Microsoft Corporation. All rights reserved.
 # ---------------------------------------------------------
 
-VERSION = "1.1.0"
+VERSION = "1.0.1"
1 change: 1 addition & 0 deletions sdk/evaluation/azure-ai-evaluation/dev_requirements.txt
@@ -6,4 +6,5 @@ pytest-asyncio
 pytest-cov
 pytest-mock
 pytest-xdist
+azure-ai-inference>=1.0.0b4
 -e ../azure-ai-evaluation[remote]
1 change: 0 additions & 1 deletion sdk/evaluation/azure-ai-evaluation/setup.py
@@ -76,7 +76,6 @@
     extras_require={
         "remote": [
             "promptflow-azure<2.0.0,>=1.15.0",
-            "azure-ai-inference>=1.0.0b4",
         ],
     },
     project_urls={
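After this change the `[remote]` extra carries only `promptflow-azure`. A minimal sketch of the resulting dependency split (abbreviated; the real `setup.py` declares many more fields):

```python
# Sketch of the dependency layout after this commit: the core install
# stays lean, while Azure AI Studio tracking support is opt-in via
# `pip install azure-ai-evaluation[remote]`.
EXTRAS_REQUIRE = {
    "remote": [
        "promptflow-azure<2.0.0,>=1.15.0",
        # azure-ai-inference>=1.0.0b4 was removed from this extra; the
        # test suite now gets it explicitly from dev_requirements.txt.
    ],
}
```

Keeping remote-only packages behind an extra means users who never log to Azure AI Studio avoid installing (and resolving versions for) dependencies they will not use.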
