Fix failures in CI ahead of first release, drop Python 3.5 #11
Conversation
/azp run
Azure Pipelines successfully started running 1 pipeline(s).
/azp run
Azure Pipelines successfully started running 1 pipeline(s).
/azp run
Azure Pipelines successfully started running 1 pipeline(s).
/azp run
Azure Pipelines successfully started running 1 pipeline(s).
/azp run
Azure Pipelines successfully started running 1 pipeline(s).
I'm debugging this locally to see what I'm missing.
Are we sure that when the wheels are being tested, the flagser directory is never available?
Yes, this is what happens in local tests too. The point is that we are testing the shipped wheels, and they do not come with the `flagser` directory.
/azp run
Azure Pipelines successfully started running 1 pipeline(s).
/azp run
Azure Pipelines successfully started running 1 pipeline(s).
/azp run
Azure Pipelines successfully started running 1 pipeline(s).
/azp run
Azure Pipelines successfully started running 1 pipeline(s).
/azp run
Azure Pipelines successfully started running 1 pipeline(s).
/azp run
Azure Pipelines successfully started running 1 pipeline(s).
/azp run
Azure Pipelines successfully started running 1 pipeline(s).
/azp run
Azure Pipelines successfully started running 1 pipeline(s).
/azp run
Azure Pipelines successfully started running 1 pipeline(s).
@gtauzin I believe this is ready for review.
I have a remaining question @gtauzin. What is the meaning of the one-line description of `saveflag`?

> Construct a sparse matrix from diagonals.

It seems that a) this function's main purpose is actually to save a `.flag` file; b) non-diagonal elements are also dealt with (?)
What does this implement/fix? Explain your changes.
- `setup.py`, `azure-pipelines.yml` and `README.rst`.
- `RELEASE.rst`, `GOVERNANCE.rst`, `setup.py`, `CODE_AUTHORS.rst`, and `LICENSE`.
- `setup.cfg`, to avoid a `pytest` warning.
- `doc/conf.py`, to avoid a warning.
- `pyflagser/flagser.py` and `pyflagser/flagio.py`.
- `azure-pipelines.yml`, as we are unlikely to do nightlies for this project.
- `azure-pipelines.yml`.
- A `conftest.py` file in `pyflagser/tests` where a command-line option for `pytest` named `--webdl` is defined. When the user passes it, flag files are downloaded and stored in a temporary folder from a remote (public) bucket. This is useful if one wishes to run `pytest` on installed wheels, as the `.flag` files needed would not typically be found in `../../flagser/test` relative to the location of the test files (e.g. `pyflagser/tests/test_flagio.py`). The use of `pytest_generate_tests` replaces the use of `pytest.mark.parametrize` in the test files: it is a well-known issue in pytest that fixtures cannot be passed to `pytest.mark.parametrize`, so the original idea of defining the locations of the `.flag` files (whether they are downloaded or not) via a fixture and then passing it to `pytest.mark.parametrize` could not be implemented. (A sketch of this mechanism is given after this list.)
- A `__main__.py` and a `conftest.py` file in `pyflagser/tests`, to be able to run tests on installed wheels using the command `python -m pyflagser.tests --webdl --no-cov --no-coverage-upload`. The reason `pytest --pyargs pyflagser --webdl` could not be made to work is that it is a well-known issue (see also here) that command-line options defined in a conftest cannot generically be found when `pytest --pyargs` is used. (A sketch of the entry point is also given after this list.)
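
As a rough sketch of the `--webdl` / `pytest_generate_tests` mechanism described above (the bucket URL, the flag-file names, the `flag_file` fixture name and the `fetch_flag_files` helper are illustrative assumptions, not the actual pyflagser code), the `conftest.py` could look roughly like this:

```python
# conftest.py -- illustrative sketch only; the URL, file names and
# fixture name are placeholders, not the actual pyflagser test setup.
import functools
import os
import tempfile
import urllib.request

# Hypothetical list of .flag files the tests need.
FLAG_FILES = ["a.flag", "b.flag"]

# Hypothetical public bucket holding the test data.
BUCKET_URL = "https://storage.googleapis.com/example-bucket/flag_files/"


def pytest_addoption(parser):
    # Register the custom --webdl command-line option with pytest.
    parser.addoption(
        "--webdl",
        action="store_true",
        default=False,
        help="Download .flag test files from a remote bucket.",
    )


@functools.lru_cache(maxsize=None)
def fetch_flag_files(webdl):
    """Return paths to the .flag files, downloading them if --webdl is set."""
    if not webdl:
        # Default: use the files shipped with a local flagser checkout.
        dirname = os.path.join(os.path.dirname(__file__), "..", "..",
                               "flagser", "test")
        return tuple(os.path.join(dirname, fname) for fname in FLAG_FILES)
    # Otherwise, download each file into a temporary folder.
    tmpdir = tempfile.mkdtemp()
    paths = []
    for fname in FLAG_FILES:
        dest = os.path.join(tmpdir, fname)
        urllib.request.urlretrieve(BUCKET_URL + fname, dest)
        paths.append(dest)
    return tuple(paths)


def pytest_generate_tests(metafunc):
    # Fixtures cannot be passed to pytest.mark.parametrize, so the file
    # locations are injected here instead, at collection time.
    if "flag_file" in metafunc.fixturenames:
        webdl = metafunc.config.getoption("--webdl")
        metafunc.parametrize("flag_file", fetch_flag_files(webdl))
```

A test in, say, `pyflagser/tests/test_flagio.py` would then simply take a `flag_file` argument and be collected once per file, regardless of whether the files come from the local checkout or from the bucket.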
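
The `__main__.py` entry point, in turn, only needs to forward its arguments to `pytest.main` so that `python -m pyflagser.tests` works on an installed wheel. This is a minimal sketch of that idea, not necessarily the exact file added in this PR:

```python
# pyflagser/tests/__main__.py -- minimal sketch of the entry-point idea.
import os
import sys

import pytest

if __name__ == "__main__":
    # Run pytest on the installed tests directory, forwarding any extra
    # command-line options (e.g. --webdl, --no-cov, --no-coverage-upload).
    sys.exit(pytest.main([os.path.dirname(__file__)] + sys.argv[1:]))
```

With such an entry point, `python -m pyflagser.tests --webdl --no-cov --no-coverage-upload` runs the shipped tests against the installed wheel, downloading the `.flag` data instead of looking for a local flagser checkout.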