
Require numpy<1.19.0 to satisfy requirements of tensorflow #620

Merged (1 commit, Oct 7, 2020)

Conversation

stweil (Contributor) commented Oct 1, 2020

Signed-off-by: Stefan Weil [email protected]

codecov-commenter commented Oct 1, 2020

Codecov Report

Merging #620 into master will decrease coverage by 0.10%.
The diff coverage is n/a.

@@            Coverage Diff             @@
##           master     #620      +/-   ##
==========================================
- Coverage   84.96%   84.86%   -0.11%     
==========================================
  Files          52       52              
  Lines        2940     2940              
  Branches      571      571              
==========================================
- Hits         2498     2495       -3     
- Misses        332      335       +3     
  Partials      110      110              
Impacted Files Coverage Δ
ocrd/ocrd/resolver.py 93.40% <0.00%> (-3.30%) ⬇️

Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Last update cb0f94b...ac86b8e.

stweil (Contributor, Author) commented Oct 1, 2020

It works, but only for a moment: since we force a reinstall for all pip installations, numpy is soon replaced by a newer version again. So this is not a working fix, sorry.

stweil (Contributor, Author) commented Oct 1, 2020

@bertsky, what was the reason why the Makefile currently uses --force-reinstall? The comment for that code is rather short: "install again forcefully without depds (to ensure the binary itself updates)".

bertsky (Collaborator) commented Oct 1, 2020

what was the reason why the Makefile currently uses --force-reinstall? The comment for that code is rather short: "install again forcefully without depds (to ensure the binary itself updates)".

It is a way of letting make see what pip is doing. Even when pip install succeeds (after make deemed it necessary), there is no guarantee that the actual entry point (the script file we model as the target in make) gets updated. The target's timestamp matters both for avoiding needless re-making (e.g. when multiple scripts belong to one module) and for forcing propagation (e.g. when another module depends on the script). So by doing a normal installation with all dependencies, followed by a forced re-installation without dependencies, we ensure all scripts are fresh.

I know the pip install && pip install --no-deps --force-reinstall recipe is clunky, but I have not conceived of a better solution yet. There is no unique file that we can count on always being installed by pip (there are too many different packaging metadata conventions/cases). We could touch -c $@ instead, but that just has different problems: what if the module changed the name of a script? What if pip failed but did update the script?
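As a sketch, the double-install recipe described above could look roughly like this in a Makefile (the rule and package names here are illustrative, not the exact lines from core's Makefile):

```make
# Illustrative sketch of the recipe discussed above; $@ would be the
# entry-point script that make tracks as the target of this rule.
install-module:
	# 1. Normal install: resolves and installs all dependencies.
	pip install .
	# 2. Forced reinstall WITHOUT dependencies: guarantees the
	#    entry-point scripts are rewritten, so their timestamps are
	#    newer than the prerequisites make compared them against.
	pip install --no-deps --force-reinstall .
```

The second step exists purely so make's timestamp comparison stays truthful; it does not change which dependency versions end up installed.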

bertsky (Collaborator) commented Oct 1, 2020

I think the elephant in the room here is still pip being unable to solve conflicting requirements. If core now wants >=1.17, <1.19 but TF wants <2.0,>=1.16.0, why don't we end up with the former? Instead, the core request is processed first, giving us some 1.18, but then the TF request overrides it, giving us 1.19. (Correct?)

Maybe we can (temporarily) solve this by making numpy >=1.17, <1.19 a requirement of all our modules themselves, which should then be the last request pip encounters?
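For what it's worth, the two constraint sets quoted in this thread are not actually in conflict: their intersection is non-empty, and a resolver that intersected specifiers would settle on a 1.18.x release. A minimal stdlib sketch (simplified tuple comparison, which ignores PEP 440 subtleties like pre-releases but is fine for these plain versions):

```python
# Check whether a numpy version satisfies both constraint sets quoted
# above: core's >=1.17,<1.19 and TF 1.x's >=1.16.0,<1.19.0.

def parse(version):
    """Turn '1.18.5' into a comparable tuple (1, 18, 5)."""
    return tuple(int(part) for part in version.split("."))

def in_range(version, lower, upper):
    """Lower-inclusive, upper-exclusive, i.e. >=lower,<upper."""
    return parse(lower) <= parse(version) < parse(upper)

def both_accept(version):
    core_ok = in_range(version, "1.17", "1.19")    # core's pin
    tf_ok = in_range(version, "1.16.0", "1.19.0")  # TF's pin
    return core_ok and tf_ok

print(both_accept("1.18.5"))  # True: inside the intersection
print(both_accept("1.19.0"))  # False: excluded by both upper bounds
```

So the trouble is not an unsatisfiable requirement set, but that pip's legacy resolver installs whatever the last-processed requirement allows instead of computing this intersection.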

stweil (Contributor, Author) commented Oct 4, 2020

TF wants <2.0

TF wants <1.19.0. That's why I added that for core here.

bertsky (Collaborator) commented Oct 7, 2020

TF wants <1.19.0. That's why I added that for core here.

Right, my hypothesis was wrong. It does not happen that way; it happens because of the other dependencies that pip happens to process after it sees our >=1.17, <1.19 from ocrd_utils. So the elephant is still the same (pip's non-reproducible, order-dependent conflict resolution).

For example, I saw after ocrd_utils:

  • numpy>=1.13.3 from PyYAML
  • numpy>=1.17.0 from pyrsistent

But more pressingly, our core install recipe seems to be wrong, because it also brings in numpy>=1.17.0 for the older ocrd_models and ocrd_modelfactory via PyPI. Frankly, I have no idea why in #351 we used a different recipe (i.e. pip install --force-reinstall followed by pip install --no-deps) than in the other rules (i.e. pip install followed by pip install --no-deps --force-reinstall). If I exchange that, the problem goes away, even without the need for PIP_OPTIONS=--use-feature=2020-resolver.
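For reference, the two recipe variants being compared (paraphrased as plain commands; the actual Makefile lines may differ):

```shell
# Variant from #351 for ocrd_models/ocrd_modelfactory (the problematic one):
# the FIRST step force-reinstalls WITH dependencies, so pip pulls fresh,
# possibly newer, versions of numpy & co. from PyPI on every run.
pip install --force-reinstall .
pip install --no-deps .

# Variant used in the other rules (the one that works):
# dependencies are resolved only once, normally; the forced reinstall is
# dependency-free and merely rewrites the entry-point scripts for make.
pip install .
pip install --no-deps --force-reinstall .
```
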

See OCR-D/ocrd_all#204

@kba this is urgent AFAICT.

@kba kba merged commit 0ae4bb7 into OCR-D:master Oct 7, 2020