Change min/max to overall_min/overall_max + update comparison results publisher #692
Conversation
… publisher Signed-off-by: Michael Oviedo <[email protected]>
Signed-off-by: Michael Oviedo <[email protected]>
@@ -193,24 +193,27 @@ def build_aggregated_results(self):

    def calculate_weighted_average(self, task_metrics: Dict[str, List[Any]], iterations: int) -> Dict[str, Any]:
        weighted_metrics = {}
        num_executions = len(next(iter(task_metrics.values())))
        total_iterations = iterations * num_executions
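The iteration-weighting setup above can be sketched in isolation. This is a minimal, hypothetical reconstruction (not the exact aggregator code): `task_metrics` is assumed to map each metric name to a list holding one value per test execution.

```python
from typing import Any, Dict, List

def total_iterations(task_metrics: Dict[str, List[Any]], iterations: int) -> int:
    # Each metric list holds one entry per test execution, so the list
    # length gives the number of executions being aggregated.
    num_executions = len(next(iter(task_metrics.values())))
    # Weight by the per-execution iteration count to get the grand total.
    return iterations * num_executions

# Example: 2 executions, each run for 3 iterations -> 6 total iterations
metrics = {"throughput": [100, 120], "latency": [5, 6]}
print(total_iterations(metrics, 3))  # 6
```

This grand total is what makes a weighted average meaningful when executions contribute different numbers of samples.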
To clarify, iterations is the number of iterations that the user inputted in workload params / the default number of times a task is run in a workload? Not the number of times we executed the same test?
That's correct!
osbenchmark/aggregator.py
Outdated
        for metric, values in task_metrics.items():
            if isinstance(values[0], dict):
                weighted_metrics[metric] = {}
                for item_key in values[0].keys():
                    if item_key == 'unit':
                        weighted_metrics[metric][item_key] = values[0][item_key]
                    elif item_key == 'min':
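The branch above is where the PR's core change lands: per-execution min/max fields are no longer averaged but replaced with the true extremes across all executions, renamed overall_min/overall_max. A hedged sketch of that idea (the helper name and input shape are assumptions, not the aggregator's actual code):

```python
from typing import Dict, List

def aggregate_extremes(values: List[Dict[str, float]]) -> Dict[str, float]:
    # values holds one {'min': ..., 'max': ...} dict per test execution.
    # Rather than averaging the per-execution extremes, take the overall
    # extremes across all executions and rename the fields to match.
    return {
        "overall_min": min(v["min"] for v in values),
        "overall_max": max(v["max"] for v in values),
    }

per_execution = [{"min": 4.0, "max": 9.0}, {"min": 3.5, "max": 11.2}]
print(aggregate_extremes(per_execution))
# {'overall_min': 3.5, 'overall_max': 11.2}
```

Averaging extremes would understate the worst case (e.g. mean of the maxima is 10.1, below the observed 11.2), which is why the overall values are more useful for comparisons.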
Nit: We can improve the readability of this section by renaming item_key to something like metric_field.
osbenchmark/aggregator.py
Outdated
                    else:
                        # for items like median or percentile values
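For the remaining numeric fields (median, percentile values), averaging across executions still applies. A simple sketch of such a weighted mean, assuming each execution's value is weighted by its iteration count (helper name and signature are illustrative only):

```python
from typing import List

def weighted_average(values: List[float], weights: List[int]) -> float:
    # Weight each execution's value (e.g. a p50 or p99 latency) by the
    # number of iterations that produced it, then normalize by the total.
    total_weight = sum(weights)
    return sum(v * w for v, w in zip(values, weights)) / total_weight

# Two executions with p99 values 10.0 and 14.0, weighted 3 and 1 iterations
print(weighted_average([10.0, 14.0], [3, 1]))  # 11.0
```

Unlike min/max, an average is a sensible summary for central-tendency fields, so these keep their original names.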
For fields like median or those containing percentile values
LGTM but suggested some quick fixes
Signed-off-by: Michael Oviedo <[email protected]>
Description
This change updates min/max values in aggregated test results to reflect the overall min/max values across all test executions, rather than just the average of each. The names of these values have also been prefixed with overall_ to reflect this change. Changes were also made to the ComparisonResultsPublisher class so these values can still be read when used in a compare command. Tests were also updated to reflect the recent changes to the aggregator class, and also to test the ComparisonResultsPublisher class.
Issues Resolved
#684
Testing
make test
By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.
For more information on following Developer Certificate of Origin and signing off your commits, please check here.