We should add a way to test RDT transformers for performance and resource usage by running the recently added profiling function and comparing the results against an expected maximum time and memory usage.
The tests should be implemented as a single test module with a single test file that is parameterized with a collection of test cases found in configuration files written in JSON format.
These are the requirements:
- The test file should be in the module `tests/performance/test_performance.py`.
- The test cases should live inside a `tests/performance/test_cases` folder.
- Test cases for each transformer can be put inside a folder dedicated to that transformer (e.g. `tests/performance/test_cases/NumericalTransformer/<some_test_case>.json`).
- The test function should be parametrized with the config files found inside the `test_cases` folder.
- It should use the `profile_transformer` function on all the transformers and datasets defined by the config files.
This sounds good @amontanez24
I updated the description a bit to add some more details and also to match what you have already proposed in the corresponding PR.
@amontanez24 Maybe we should also add a few more things to this:
- Add a Makefile target (and a `tasks.py` task) to run the performance tests alone.
- Modify the `invoke pytest` task to run unit and integration tests only.
- Add a specific check in the GitHub workflow that runs the performance tests on only one OS and Python version, to avoid having to maintain multiple sets of expected performance values.
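A possible shape for the Makefile side of this (the `test-performance` target name is a suggestion, not settled; the `tasks.py` task would mirror it):

```make
# Makefile (sketch) -- 'test-performance' is a suggested target name
.PHONY: test-performance
test-performance: ## Run the performance tests only
	python -m pytest ./tests/performance
```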