
ci: add basic testing #15

Open · wants to merge 4 commits into master
Conversation

@Borda (Contributor) commented Jun 29, 2023

To ensure reproducibility and enable any future contributions.

@Borda (Contributor, Author) commented Jun 29, 2023

@jfsantos could you please help me with the tests? I have some problems making them pass by running `python setup.py test` 😖 How do you test the code/package locally?

FYI: running the CI on the fork, ref: Lightning-Sandbox#1

@quancs (Contributor) commented Jun 30, 2023

Here are the results of running `python setup.py test`; all four tests failed.

running test
WARNING: Testing via this command is deprecated and will be removed in a future version. Users looking for a generic test entry point independent of test runner are encouraged to use tox.
/mnt/home/quancs/miniconda3/envs/torch2/lib/python3.9/site-packages/setuptools/command/test.py:194: _DeprecatedInstaller: setuptools.installer and fetch_build_eggs are deprecated.
!!

        ********************************************************************************
        Requirements should be satisfied by a PEP 517 installer.
        If you are using pip, you can try `pip install --use-pep517`.
        ********************************************************************************

!!
  ir_d = dist.fetch_build_eggs(dist.install_requires)
/mnt/home/quancs/miniconda3/envs/torch2/lib/python3.9/site-packages/setuptools/command/test.py:195: _DeprecatedInstaller: setuptools.installer and fetch_build_eggs are deprecated.
!!

        ********************************************************************************
        Requirements should be satisfied by a PEP 517 installer.
        If you are using pip, you can try `pip install --use-pep517`.
        ********************************************************************************

!!
  tr_d = dist.fetch_build_eggs(dist.tests_require or [])
/mnt/home/quancs/miniconda3/envs/torch2/lib/python3.9/site-packages/setuptools/command/test.py:196: _DeprecatedInstaller: setuptools.installer and fetch_build_eggs are deprecated.
!!

        ********************************************************************************
        Requirements should be satisfied by a PEP 517 installer.
        If you are using pip, you can try `pip install --use-pep517`.
        ********************************************************************************

!!
  er_d = dist.fetch_build_eggs(
running egg_info
writing SRMRpy.egg-info/PKG-INFO
writing dependency_links to SRMRpy.egg-info/dependency_links.txt
writing entry points to SRMRpy.egg-info/entry_points.txt
writing requirements to SRMRpy.egg-info/requires.txt
writing top-level names to SRMRpy.egg-info/top_level.txt
reading manifest file 'SRMRpy.egg-info/SOURCES.txt'
adding license file 'LICENSE.md'
writing manifest file 'SRMRpy.egg-info/SOURCES.txt'
running build_ext
test_srmr.test_srmr ... FAIL
test_srmr.test_srmr_fast ... FAIL
test_srmr.test_srmr_slow ... FAIL
test_srmr.test_srmr_slow_norm ... FAIL

======================================================================
FAIL: test_srmr.test_srmr
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/mnt/home/quancs/miniconda3/envs/torch2/lib/python3.9/site-packages/nose-1.3.7-py3.9.egg/nose/case.py", line 198, in runTest
    self.test(*self.arg)
  File "/mnt/home/quancs/projects/SRMRpy/test/test_srmr.py", line 20, in test_srmr
    assert np.allclose(ratio, correct_ratios[1], rtol=1e-6, atol=1e-12), np.max(np.abs(ratio - correct_ratios[1]))
AssertionError: 0.6745801230710526

======================================================================
FAIL: test_srmr.test_srmr_fast
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/mnt/home/quancs/miniconda3/envs/torch2/lib/python3.9/site-packages/nose-1.3.7-py3.9.egg/nose/case.py", line 198, in runTest
    self.test(*self.arg)
  File "/mnt/home/quancs/projects/SRMRpy/test/test_srmr.py", line 24, in test_srmr_fast
    assert np.allclose(ratio_norm_fast, correct_ratios[2], rtol=1e-6, atol=1e-12), np.max(np.abs(ratio_norm_fast - correct_ratios[2]))
AssertionError: 0.26627361490459345

======================================================================
FAIL: test_srmr.test_srmr_slow
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/mnt/home/quancs/miniconda3/envs/torch2/lib/python3.9/site-packages/nose-1.3.7-py3.9.egg/nose/case.py", line 198, in runTest
    self.test(*self.arg)
  File "/mnt/home/quancs/projects/SRMRpy/test/test_srmr.py", line 28, in test_srmr_slow
    assert np.allclose(ratio_slow, correct_ratios[0], rtol=1e-6, atol=1e-12), np.max(np.abs(ratio_slow - correct_ratios[0]))
AssertionError: 0.06157055712526649

======================================================================
FAIL: test_srmr.test_srmr_slow_norm
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/mnt/home/quancs/miniconda3/envs/torch2/lib/python3.9/site-packages/nose-1.3.7-py3.9.egg/nose/case.py", line 198, in runTest
    self.test(*self.arg)
  File "/mnt/home/quancs/projects/SRMRpy/test/test_srmr.py", line 32, in test_srmr_slow_norm
    assert np.allclose(ratio_norm, correct_ratios[3], rtol=1e-6, atol=1e-12), np.max(np.abs(ratio_norm - correct_ratios[3]))
AssertionError: 0.018230563202959793

----------------------------------------------------------------------
Ran 4 tests in 0.482s

FAILED (failures=4)
Test failed: <unittest.runner.TextTestResult run=4 errors=0 failures=4>
error: Test failed: <unittest.runner.TextTestResult run=4 errors=0 failures=4>
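As the deprecation warning in the log suggests, moving these checks off `python setup.py test` (e.g. to pytest) would also help. A minimal sketch of a pytest-compatible port is below: the tolerances (`rtol=1e-6, atol=1e-12`) and the `correct_ratios` indexing mirror the tracebacks above, while the fixture path, the pre-dumped reference file, and the `srmr()` keyword arguments are assumptions, not the repository's actual layout.

```python
# Hypothetical pytest port of test/test_srmr.py. Tolerances and the
# correct_ratios indexing come from the tracebacks above; the fixture path,
# reference file, and srmr() keyword arguments are assumptions.
import numpy as np
from scipy.io import wavfile
from srmrpy import srmr

# Assumed pre-dumped reference values; the real ones live in the test file.
correct_ratios = np.load("test/correct_ratios.npy")

def _assert_close(value, expected):
    # Report the max absolute difference on failure, as the original tests do.
    diff = np.max(np.abs(value - expected))
    assert np.allclose(value, expected, rtol=1e-6, atol=1e-12), diff

def test_srmr():
    fs, x = wavfile.read("test/test.wav")  # assumed mono fixture
    ratio, _ = srmr(x, fs)
    _assert_close(ratio, correct_ratios[1])

def test_srmr_fast():
    fs, x = wavfile.read("test/test.wav")
    ratio_norm_fast, _ = srmr(x, fs, fast=True, norm=True)
    _assert_close(ratio_norm_fast, correct_ratios[2])

# test_srmr_slow and test_srmr_slow_norm would follow the same pattern
# against correct_ratios[0] and correct_ratios[3].
```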

@Borda (Contributor, Author) commented Jul 3, 2023

Could be related to #7.

@Borda (Contributor, Author) commented Sep 19, 2023

@jfsantos could you please accept this CI addition so we can validate the current state as well as any future contributions? 🐰

@Borda (Contributor, Author) commented Jun 5, 2024

OK, it seems we will need to move to our own fork... :(

@jfsantos (Owner) commented
I apologize for the delay in replying. SRMRpy is a package I wrote almost 10 years ago and do not really actively maintain, but I am happy to include patches! Can you summarize in which state the current contribution is?

To reply to one of your questions in the thread, I test SRMRpy locally by comparing its results to the original MATLAB implementation when applicable (SRMRpy does have features the original SRMRtoolbox does not, like using the fast gammatone filterbank implementation). The results do not need to be exactly the same but close enough for me to consider the implementation valid. I also usually do a sanity check by running it on the same file while increasing the amount of added noise and making sure the metric is going down accordingly.
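That sanity check could itself be automated. A minimal sketch, assuming the `srmrpy.srmr` entry point, a mono fixture file at a hypothetical path, and a `(ratio, energy)` return value:

```python
# Sketch of the noise-robustness sanity check described above: the SRMR
# score should drop as more white noise is mixed in. The fixture path and
# srmr() return shape are assumptions.
import numpy as np
from scipy.io import wavfile
from srmrpy import srmr

def test_srmr_decreases_with_noise():
    fs, x = wavfile.read("test/test.wav")  # hypothetical clean fixture
    x = x.astype(np.float64)
    rng = np.random.default_rng(0)
    noise = rng.standard_normal(x.shape)
    scores = []
    for snr_db in (30, 20, 10, 0):  # progressively noisier mixtures
        # Scale the noise so the mixture hits the target SNR.
        gain = np.linalg.norm(x) / (np.linalg.norm(noise) * 10 ** (snr_db / 20))
        ratio, _ = srmr(x + gain * noise, fs)
        scores.append(ratio)
    # The metric should go down monotonically as the SNR drops.
    assert all(a > b for a, b in zip(scores, scores[1:]))
```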

@Borda (Contributor, Author) commented Jul 14, 2024

> I apologize for the delay in replying. SRMRpy is a package I wrote almost 10 years ago and do not really actively maintain, but I am happy to include patches! Can you summarize in which state the current contribution is?

It's about adding automated testing, so you do not need to manually run all the tests locally; you can just see the results in CI, which simplifies approving any future fix/update.

> To reply to one of your questions in the thread, I test SRMRpy locally by comparing its results to the original MATLAB implementation when applicable (SRMRpy does have features the original SRMRtoolbox does not, like using the fast gammatone filterbank implementation). The results do not need to be exactly the same but close enough for me to consider the implementation valid. I also usually do a sanity check by running it on the same file while increasing the amount of added noise and making sure the metric is going down accordingly.

So is it also safe to assume that all the MATLAB test results are static, so they can just be dumped to a file and used for testing/validation?
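For illustration, a minimal sketch of such a file-based check, assuming a hypothetical `test/matlab_reference.txt` dump with one `name value` pair per line (the file name, format, and key are invented for this example):

```python
# Sketch of validating against static MATLAB reference values dumped to a
# text file. The file name and its "name value" per-line format are a
# hypothetical convention, not an existing SRMRpy artifact.
import numpy as np
from scipy.io import wavfile
from srmrpy import srmr

def load_matlab_reference(path="test/matlab_reference.txt"):
    """Parse one 'name value' pair per line into a dict of floats."""
    refs = {}
    with open(path) as fh:
        for line in fh:
            name, value = line.split()
            refs[name] = float(value)
    return refs

def test_against_matlab_reference():
    refs = load_matlab_reference()
    fs, x = wavfile.read("test/test.wav")  # assumed shared fixture
    ratio, _ = srmr(x, fs)
    # "Close enough" to MATLAB, per the maintainer, rather than bit-exact.
    assert np.isclose(ratio, refs["srmr_default"], rtol=1e-2)
```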
