
Update performance test structure #257

Closed
katxiao opened this issue Sep 28, 2021 · 0 comments
Labels: internal (The issue doesn't change the API or functionality)

Comments

katxiao (Contributor) commented Sep 28, 2021

Problem Description

Performance tests can be made more general if we specify the expected runtime per dataset instead of per individual test case.

Expected behavior

The current performance tests are based on test cases that each specify the transformer, the dataset generator, and the data sizes.

This should change to a more generic approach: each dataset generator declares its expected output values for different data sizes, and the performance test simply takes a transformer, runs all compatible datasets through it, and validates against those expected values.
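The proposed structure could look something like the following minimal sketch. All class and function names here (`RandomIntegerGenerator`, `run_performance_test`, `accepted_sdtypes`, the `EXPECTED` mapping) are hypothetical illustrations, not the actual RDT test API.

```python
import random


class RandomIntegerGenerator:
    """Hypothetical dataset generator that declares its own expected
    performance values per data size, instead of leaving that to
    individual test cases."""

    SDTYPE = 'numerical'
    EXPECTED = {
        1_000: {'max_fit_time': 0.1},
        10_000: {'max_fit_time': 0.5},
    }

    @staticmethod
    def generate(size):
        return [random.randint(0, 100) for _ in range(size)]


def get_compatible_generators(transformer):
    """Return the generators whose sdtype the transformer accepts."""
    generators = [RandomIntegerGenerator]
    return [gen for gen in generators if gen.SDTYPE in transformer.accepted_sdtypes]


def run_performance_test(transformer, timer):
    """Run every compatible dataset through the transformer and check
    the measured time against the generator's declared expectations."""
    results = []
    for generator in get_compatible_generators(transformer):
        for size, expected in generator.EXPECTED.items():
            data = generator.generate(size)
            elapsed = timer(transformer, data)
            passed = elapsed <= expected['max_fit_time']
            results.append((generator.__name__, size, passed))
    return results
```

With this shape, adding a new transformer requires no new test cases: the harness discovers the compatible generators and validates against the expectations they already declare.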

@katxiao katxiao added the internal label Sep 28, 2021
@katxiao katxiao self-assigned this Sep 28, 2021
@amontanez24 amontanez24 added this to the 0.6.0 milestone Oct 26, 2021