
Improving failures at ResolutionTooDeep to include more context #11480

Open
1 task done
vjmp opened this issue Oct 3, 2022 · 49 comments
Labels

  • C: dependency resolution (About choosing which dependencies to install)
  • C: error messages (Improving error messages)
  • resolution: known limitation (Behaviour is not ideal, but it's a well-known issue that we cannot resolve)
  • type: performance (Commands take too long to run)

Comments

@vjmp

vjmp commented Oct 3, 2022

Description

Example of pip backtracking failure

Repo for this example

... can be found here: https://github.com/vjmp/pipbacktracking
and issues can be reported here: https://github.com/pypa/pip/issues

Failing case description

Installing an (internally conflicting) "requirements.txt" which has lots of
transitive dependencies, with a simple command like pip install -r requirements.txt,
where the "very simple" looking content of "requirements.txt" is:

pywebview[qt]==3.6.2
rpaframework==15.6.0

This will take a long time to fail, like 4+ hours.

And note that this specific example applies only to Linux environments.
But I think the problem is general: "old, previously working" requirement sets
can get "rotten" over time, as a dependency's "future" takes a "wrong" turn. This
is because the resolver works from newest to oldest, and even a few new versions of
some required dependency can derail the resolver into backtracking "mode".

Context of our problem space

Here are some things that make this a problem for Robocorp customers:

  • machines executing "pip install" can be fast or slow (or very slow)
  • the pip version can be anything, old or new (backward-compatible generic usage)
  • pip environment setup time is "billable" time, so "fail fast" is cheaper
    in monetary terms than "fail 4+ hours later on a total environment build
    failure"
  • automation is setting up the environment, not humans
  • our automation tooling is our rcc, which is also used here to make this
    failure a repeatable process
  • the general context for the automation is RPA (robotic process automation), so
    processes should be repeatable and reliable and not break, even as time passes

Problem with backtracking

It is very slow to fail.

Currently the happy path works (fast enough), but if you derail the resolver onto an
unbeaten path, resolution takes a long time, because in the pip source
https://github.com/pypa/pip/blob/main/src/pip/_internal/resolution/resolvelib/resolver.py#L91
there is a magical internal variable try_to_avoid_resolution_too_deep = 2000000
which allows a very long search before it fails.
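
For context, the mechanics are roughly as follows. This is a paraphrased sketch of the
pip 22.x / vendored resolvelib code paths, not the verbatim source, and names can
differ between versions:

# Paraphrased from pip/_internal/resolution/resolvelib/resolver.py:
try_to_avoid_resolution_too_deep = 2000000
result = resolver.resolve(
    collected.requirements, max_rounds=try_to_avoid_resolution_too_deep
)

# Paraphrased from the vendored resolvelib: each round pins or backtracks
# one candidate, and only after max_rounds rounds does it give up:
for round_index in range(max_rounds):
    ...  # try to satisfy one more requirement, backtracking on conflicts
raise ResolutionTooDeep(max_rounds)

So ResolutionTooDeep can only surface after two million rounds, which in practice
means hours of work.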

Brute-force search over a possibly huge search space.

When a package, like rpaframework below, has something around 100 dependencies
in its dependency tree, even happy-path resolution takes 100+ rounds of pip
dependency resolution to find a solution. When backtracking, a (single) processor
core becomes 100% busy with backtracking work.

In automation, there is no "human" to press "Control-C".

INFO: pip is looking at multiple versions of selenium to determine which
version is compatible with other requirements. This could take a while.

and ...

INFO: This is taking longer than usual. You might need to provide the
dependency resolver with stricter constraints to reduce runtime.
See https://pip.pypa.io/warnings/backtracking for guidance.
If you want to abort this run, press Ctrl + C.

... are a nice way for pip to inform the user that it is taking longer than usual, but
in our customers' automation cases, there is nobody who could see those messages, or
press that "Ctrl + C".

This could be improved if there were an environment variable like
MAX_PIP_RESOLUTION_ROUNDS instead of the hard-coded 2000000 internal limit.
Adding this as an environment variable (instead of a command-line option) is also
better for backwards compatibility, since an "extra" environment variable does not
break commands on old pip versions, but an unknown CLI option will.
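
For illustration, a hypothetical wrapper along these lines could read the limit
from the environment and hand it to pip's internal resolver (MAX_PIP_RESOLUTION_ROUNDS
is only the name proposed here, not an existing pip feature):

# Hypothetical sketch: MAX_PIP_RESOLUTION_ROUNDS is the name proposed in
# this issue, not an existing pip option.
import os

import pip._internal.resolution.resolvelib.resolver as resolver_mod

orig_resolve = resolver_mod.RLResolver.resolve

def env_limited_resolve(self, requirements, max_rounds):
    # Fall back to pip's built-in limit when the variable is unset
    limit = int(os.environ.get("MAX_PIP_RESOLUTION_ROUNDS", max_rounds))
    return orig_resolve(self, requirements, limit)

resolver_mod.RLResolver.resolve = env_limited_resolve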

Basic setup

What is needed:

  • a Linux machine
  • the content of the repo containing this README.md file
  • the rcc executable (optional, but useful; if you have it, you don't have
    to manually install the following two things ...)
  • python3; in our case we have tested 3.9.13
  • pip; in our case we have tested 22.1.2 (but mostly anything after 20.3 has
    this feature; this current example uses pip v22.2.2)

Example code

You need rcc to run these examples, or do a manual environment setup if you will.

You can download rcc binaries from https://downloads.robocorp.com/rcc/releases/index.html
or, if you want more information, see https://github.com/robocorp/rcc

Success case (just for reference)

To run the success case as a normal user sees it, use this:

rcc run --task pass

And to see debugging output, use this:

rcc run --dev --task pass

Actual failure case (point of this demo)

To run the failing case as a normal user sees it, use this ... and have patience to wait:

rcc run --task fail

And to see debugging output, use this ... and have patience to wait:

rcc run --dev --task fail

Expected behavior

Faster (and configurable) failure on pip install on complex/big dependency tree.

pip version

22.2.2

Python version

3.9.13

OS

Linux

How to Reproduce

  1. Clone repository from: https://github.com/vjmp/pipbacktracking
  2. Download rcc from: https://downloads.robocorp.com/rcc/releases/index.html
  3. Make it executable and note its location
  4. Change to "pipbacktracking" repo directory, and run command /path/to/rcc run --task fail

Note: no need to install specific python or pip versions if you use these instructions.

Output

$ git clone https://github.com/vjmp/pipbacktracking
... your normal "git" output (not interesting)
$ cd pipbacktracking
$ curl -o bin/rcc https://downloads.robocorp.com/rcc/releases/v11.27.3/linux64/rcc
... your normal  curl output (not interesting)
$ chmod 755 bin/rcc
$ bin/rcc run --task fail
"/home/user/tmp/redo/pipbacktracking/robot.yaml" as robot.yaml is:
tasks:
  pass:
    shell: user_flow.sh pass
  fail:
    shell: user_flow.sh fail

devTasks:
  pass:
    shell: debug_flow.sh pass
  fail:
    shell: debug_flow.sh fail

condaConfigFile: conda.yaml
artifactsDir: temp
PATH:
  - bin
PYTHONPATH:
ignoreFiles:
  - .gitignore

####  Progress: 01/13  v11.27.3     0.009s  Fresh [private mode] holotree environment 66eec3ee-3650-220e-41fa-1b2f1f5e07c0.
####  Progress: 02/13  v11.27.3     0.001s  Holotree blueprint is "48b3e3ef4c244a3e" [linux_amd64].
####  Progress: 12/13  v11.27.3     0.294s  Restore space from library [with 7 workers].
Installation plan is: /home/user/.robocorp/holotree/59acff1_5a1fac3_9fcd2534/rcc_plan.log
Environment configuration descriptor is: /home/user/.robocorp/holotree/59acff1_5a1fac3_9fcd2534/identity.yaml
####  Progress: 13/13  v11.27.3     0.206s  Fresh holotree done [with 7 workers].
Wanted  Version  Origin  |  No.  |  Available         Version    Origin       |  Status
------  -------  ------  +  ---  +  ---------         -------    ------       +  ------
-       -        -       |    1  |  _libgcc_mutex     0.1        conda-forge  |  N/A
-       -        -       |    2  |  _openmp_mutex     4.5        conda-forge  |  N/A
-       -        -       |    3  |  bzip2             1.0.8      conda-forge  |  N/A
-       -        -       |    4  |  ca-certificates   2022.9.24  conda-forge  |  N/A
-       -        -       |    5  |  ld_impl_linux-64  2.36.1     conda-forge  |  N/A
-       -        -       |    6  |  libffi            3.4.2      conda-forge  |  N/A
-       -        -       |    7  |  libgcc-ng         12.1.0     conda-forge  |  N/A
-       -        -       |    8  |  libgomp           12.1.0     conda-forge  |  N/A
-       -        -       |    9  |  libnsl            2.0.0      conda-forge  |  N/A
-       -        -       |   10  |  libsqlite         3.39.3     conda-forge  |  N/A
-       -        -       |   11  |  libuuid           2.32.1     conda-forge  |  N/A
-       -        -       |   12  |  libzlib           1.2.12     conda-forge  |  N/A
-       -        -       |   13  |  ncurses           6.3        conda-forge  |  N/A
-       -        -       |   14  |  openssl           3.0.5      conda-forge  |  N/A
-       -        -       |   15  |  pip               22.2.2     conda-forge  |  N/A
-       -        -       |   16  |  python            3.9.13     conda-forge  |  N/A
-       -        -       |   17  |  readline          8.1.2      conda-forge  |  N/A
-       -        -       |   18  |  setuptools        65.4.0     conda-forge  |  N/A
-       -        -       |   19  |  sqlite            3.39.3     conda-forge  |  N/A
-       -        -       |   20  |  tk                8.6.12     conda-forge  |  N/A
-       -        -       |   21  |  tzdata            2022d      conda-forge  |  N/A
-       -        -       |   22  |  wheel             0.37.1     conda-forge  |  N/A
-       -        -       |   23  |  xz                5.2.6      conda-forge  |  N/A
------  -------  ------  +  ---  +  ---------         -------    ------       +  ------
Wanted  Version  Origin  |  No.  |  Available         Version    Origin       |  Status

--
+ pip install -r requirements_fail.txt
Collecting pywebview[qt]==3.6.2
  Using cached pywebview-3.6.2-py3-none-any.whl (351 kB)
Collecting rpaframework==15.6.0
  Using cached rpaframework-15.6.0-py3-none-any.whl (534 kB)
Collecting proxy-tools
  Using cached proxy_tools-0.1.0-py3-none-any.whl
Collecting PyQt5
  Using cached PyQt5-5.15.7-cp37-abi3-manylinux1_x86_64.whl (8.4 MB)
Collecting pyqtwebengine
  Using cached PyQtWebEngine-5.15.6-cp37-abi3-manylinux1_x86_64.whl (230 kB)
Collecting QtPy
  Using cached QtPy-2.2.0-py3-none-any.whl (82 kB)
Collecting PySocks!=1.5.7,<2.0.0,>=1.5.6
  Using cached PySocks-1.7.1-py3-none-any.whl (16 kB)
Collecting netsuitesdk<2.0.0,>=1.1.0
  Using cached netsuitesdk-1.24.0-py3-none-any.whl (31 kB)
Collecting python-xlib>=0.17
  Using cached python_xlib-0.31-py2.py3-none-any.whl (179 kB)
Collecting xlwt<2.0.0,>=1.3.0
  Using cached xlwt-1.3.0-py2.py3-none-any.whl (99 kB)
Collecting docutils
  Using cached docutils-0.19-py3-none-any.whl (570 kB)
Collecting pillow<10.0.0,>=9.1.1
  Using cached Pillow-9.2.0-cp39-cp39-manylinux_2_28_x86_64.whl (3.2 MB)
Collecting openpyxl<4.0.0,>=3.0.9
  Using cached openpyxl-3.0.10-py2.py3-none-any.whl (242 kB)
Collecting htmldocx<0.0.7,>=0.0.6
  Using cached htmldocx-0.0.6-py3-none-any.whl (9.5 kB)
Collecting graphviz<0.14.0,>=0.13.2
  Using cached graphviz-0.13.2-py2.py3-none-any.whl (17 kB)
Collecting exchangelib<5.0.0,>=4.5.1
  Using cached exchangelib-4.7.6-py2.py3-none-any.whl (236 kB)
Collecting xlutils<3.0.0,>=2.0.0
  Using cached xlutils-2.0.0-py2.py3-none-any.whl (55 kB)
Collecting click<9.0.0,>=8.1.2
  Using cached click-8.1.3-py3-none-any.whl (96 kB)
Collecting chardet<4.0.0,>=3.0.0
  Using cached chardet-3.0.4-py2.py3-none-any.whl (133 kB)
Collecting java-access-bridge-wrapper<0.10.0,>=0.9.4
  Using cached java_access_bridge_wrapper-0.9.5-py3-none-any.whl (28 kB)
Collecting pyperclip<2.0.0,>=1.8.0
  Using cached pyperclip-1.8.2-py3-none-any.whl
Collecting tzlocal<3.0,>=2.1
  Using cached tzlocal-2.1-py2.py3-none-any.whl (16 kB)
Collecting xlrd<3.0.0,>=2.0.1
  Using cached xlrd-2.0.1-py2.py3-none-any.whl (96 kB)
Collecting rpaframework-pdf<5.0.0,>=4.1.0
  Using cached rpaframework_pdf-4.1.0-py3-none-any.whl (609 kB)
Collecting PyYAML<6.0.0,>=5.4.1
  Using cached PyYAML-5.4.1-cp39-cp39-manylinux1_x86_64.whl (630 kB)
Collecting hubspot-api-client<5.0.0,>=4.0.6
  Using cached hubspot_api_client-4.0.6-py3-none-any.whl (1.9 MB)
Collecting robotframework-requests<0.10.0,>=0.9.1
  Using cached robotframework_requests-0.9.3-py3-none-any.whl (21 kB)
Collecting robotframework-seleniumtestability<2.0.0,>=1.1.0
  Using cached robotframework_seleniumtestability-1.1.0-py2.py3-none-any.whl
Collecting robotframework!=4.0.1,<6.0.0,>=4.0.0
  Using cached robotframework-5.0.1-py3-none-any.whl (639 kB)
Collecting notifiers<2.0.0,>=1.2.1
  Using cached notifiers-1.3.3-py3-none-any.whl (43 kB)
Collecting jsonpath-ng<2.0.0,>=1.5.2
  Using cached jsonpath_ng-1.5.3-py3-none-any.whl (29 kB)
Collecting rpaframework-core<10.0.0,>=9.1.0
  Using cached rpaframework_core-9.1.0-py3-none-any.whl (38 kB)
Collecting pynput-robocorp-fork<5.0.0,>=4.0.0
  Using cached pynput_robocorp_fork-4.0.0-py2.py3-none-any.whl (94 kB)
Collecting tweepy<4.0.0,>=3.8.0
  Using cached tweepy-3.10.0-py2.py3-none-any.whl (30 kB)
Collecting simple_salesforce<2.0.0,>=1.0.0
  Using cached simple_salesforce-1.12.2-py2.py3-none-any.whl (120 kB)
Collecting mss<7.0.0,>=6.0.0
  Using cached mss-6.1.0-py3-none-any.whl (76 kB)
Collecting robotframework-pythonlibcore<4.0.0,>=3.0.0
  Using cached robotframework_pythonlibcore-3.0.0-py2.py3-none-any.whl (9.9 kB)
Collecting rpaframework-dialogs<4.0.0,>=3.0.0
  Using cached rpaframework_dialogs-3.0.1-py3-none-any.whl (18 kB)
Collecting cryptography<4.0.0,>=3.3.1
  Using cached cryptography-3.4.8-cp36-abi3-manylinux_2_24_x86_64.whl (3.0 MB)
Collecting tenacity<9.0.0,>=8.0.1
  Using cached tenacity-8.1.0-py3-none-any.whl (23 kB)
Collecting robotframework-seleniumlibrary<6.0.0,>=5.1.0
  Using cached robotframework_seleniumlibrary-5.1.3-py2.py3-none-any.whl (94 kB)
Collecting selenium<4.0.0,>=3.141.0
  Using cached selenium-3.141.0-py2.py3-none-any.whl (904 kB)
Collecting cffi>=1.12
  Using cached cffi-1.15.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (441 kB)
Collecting requests-ntlm>=0.2.0
  Using cached requests_ntlm-1.1.0-py2.py3-none-any.whl (5.7 kB)
Collecting cached-property
  Using cached cached_property-1.5.2-py2.py3-none-any.whl (7.6 kB)
Collecting pygments
  Using cached Pygments-2.13.0-py3-none-any.whl (1.1 MB)
Collecting defusedxml>=0.6.0
  Using cached defusedxml-0.7.1-py2.py3-none-any.whl (25 kB)
Collecting requests>=2.7
  Using cached requests-2.28.1-py3-none-any.whl (62 kB)
Collecting oauthlib
  Using cached oauthlib-3.2.1-py3-none-any.whl (151 kB)
Collecting lxml>3.0
  Using cached lxml-4.9.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl (7.0 MB)
Collecting requests-oauthlib
  Using cached requests_oauthlib-1.3.1-py2.py3-none-any.whl (23 kB)
Collecting tzdata
  Using cached tzdata-2022.4-py2.py3-none-any.whl (336 kB)
Collecting dnspython>=2.0.0
  Using cached dnspython-2.2.1-py3-none-any.whl (269 kB)
Collecting isodate
  Using cached isodate-0.6.1-py2.py3-none-any.whl (41 kB)
Collecting python-docx>=0.8.10
  Using cached python_docx-0.8.11-py3-none-any.whl
Collecting beautifulsoup4>=4.7.0
  Using cached beautifulsoup4-4.11.1-py3-none-any.whl (128 kB)
Collecting certifi
  Using cached certifi-2022.9.24-py3-none-any.whl (161 kB)
Collecting python-dateutil
  Using cached python_dateutil-2.8.2-py2.py3-none-any.whl (247 kB)
Collecting urllib3>=1.15
  Using cached urllib3-1.26.12-py2.py3-none-any.whl (140 kB)
Collecting six>=1.10
  Using cached six-1.16.0-py2.py3-none-any.whl (11 kB)
Collecting decorator
  Using cached decorator-5.1.1-py3-none-any.whl (9.1 kB)
Collecting ply
  Using cached ply-3.11-py2.py3-none-any.whl (49 kB)
Collecting zeep
  Using cached zeep-4.1.0-py2.py3-none-any.whl (100 kB)
Collecting jsonschema<5.0.0,>=4.4.0
  Using cached jsonschema-4.16.0-py3-none-any.whl (83 kB)
Collecting et-xmlfile
  Using cached et_xmlfile-1.1.0-py3-none-any.whl (4.7 kB)
Collecting furl
  Using cached furl-2.1.3-py2.py3-none-any.whl (20 kB)
Collecting wrapt
  Using cached wrapt-1.14.1-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl (77 kB)
Collecting webdrivermanager<0.11.0,>=0.10.0
  Using cached webdrivermanager-0.10.0-py2.py3-none-any.whl
Collecting robocorp-dialog<0.6.0,>=0.5.3
  Using cached robocorp_dialog-0.5.3-py3-none-any.whl (22.2 MB)
Collecting fpdf2<3.0.0,>=2.5.2
  Using cached fpdf2-2.5.7-py2.py3-none-any.whl (237 kB)
Collecting pypdf2!=1.27.10,!=1.27.11,<2.0.0,>=1.27.4
  Using cached PyPDF2-1.28.6-py3-none-any.whl (87 kB)
Collecting pdfminer.six==20201018
  Using cached pdfminer.six-20201018-py3-none-any.whl (5.6 MB)
Collecting sortedcontainers
  Using cached sortedcontainers-2.4.0-py2.py3-none-any.whl (29 kB)
Collecting authlib
  Using cached Authlib-1.1.0-py2.py3-none-any.whl (214 kB)
Collecting pytz
  Using cached pytz-2022.4-py2.py3-none-any.whl (500 kB)
Collecting PyQt5-sip<13,>=12.11
  Using cached PyQt5_sip-12.11.0-cp39-cp39-manylinux1_x86_64.whl (357 kB)
Collecting PyQt5-Qt5>=5.15.0
  Using cached PyQt5_Qt5-5.15.2-py3-none-manylinux2014_x86_64.whl (59.9 MB)
Collecting PyQtWebEngine-Qt5>=5.15.0
  Using cached PyQtWebEngine_Qt5-5.15.2-py3-none-manylinux2014_x86_64.whl (67.5 MB)
Collecting packaging
  Using cached packaging-21.3-py3-none-any.whl (40 kB)
Collecting soupsieve>1.2
  Using cached soupsieve-2.3.2.post1-py3-none-any.whl (37 kB)
Collecting pycparser
  Using cached pycparser-2.21-py2.py3-none-any.whl (118 kB)
Collecting fonttools
  Downloading fonttools-4.37.4-py3-none-any.whl (960 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 960.8/960.8 kB 1.3 MB/s eta 0:00:00
Collecting svg.path
  Using cached svg.path-6.2-py2.py3-none-any.whl (40 kB)
Collecting attrs>=17.4.0
  Using cached attrs-22.1.0-py2.py3-none-any.whl (58 kB)
Collecting pyrsistent!=0.17.0,!=0.17.1,!=0.17.2,>=0.14.0
  Using cached pyrsistent-0.18.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (115 kB)
Collecting charset-normalizer<3,>=2
  Using cached charset_normalizer-2.1.1-py3-none-any.whl (39 kB)
Collecting idna<4,>=2.5
  Using cached idna-3.4-py3-none-any.whl (61 kB)
Collecting ntlm-auth>=1.0.2
  Using cached ntlm_auth-1.5.0-py2.py3-none-any.whl (29 kB)
INFO: pip is looking at multiple versions of requests[socks] to determine which version is compatible with other requirements. This could take a while.
Collecting requests[socks]>=2.11.1
  Using cached requests-2.28.0-py3-none-any.whl (62 kB)
INFO: pip is looking at multiple versions of oauthlib to determine which version is compatible with other requirements. This could take a while.
Collecting oauthlib
  Using cached oauthlib-3.2.0-py3-none-any.whl (151 kB)
INFO: pip is looking at multiple versions of requests-oauthlib to determine which version is compatible with other requirements. This could take a while.
Collecting requests-oauthlib
  Using cached requests_oauthlib-1.3.0-py2.py3-none-any.whl (23 kB)
INFO: pip is looking at multiple versions of requests-ntlm to determine which version is compatible with other requirements. This could take a while.
Collecting requests-ntlm>=0.2.0
  Using cached requests_ntlm-1.0.0-py2.py3-none-any.whl (5.2 kB)
INFO: pip is looking at multiple versions of certifi to determine which version is compatible with other requirements. This could take a while.
Collecting certifi
  Using cached certifi-2022.9.14-py3-none-any.whl (162 kB)
INFO: pip is looking at multiple versions of requests to determine which version is compatible with other requirements. This could take a while.
INFO: pip is looking at multiple versions of python-docx to determine which version is compatible with other requirements. This could take a while.
Collecting python-docx>=0.8.10
  Using cached python-docx-0.8.10.tar.gz (5.5 MB)
  Preparing metadata (setup.py): started
  Preparing metadata (setup.py): finished with status 'done'
INFO: pip is looking at multiple versions of pyqtwebengine-qt5 to determine which version is compatible with other requirements. This could take a while.
INFO: pip is looking at multiple versions of pyqt5-sip to determine which version is compatible with other requirements. This could take a while.
INFO: pip is looking at multiple versions of pyqt5-qt5 to determine which version is compatible with other requirements. This could take a while.
INFO: pip is looking at multiple versions of pypdf2 to determine which version is compatible with other requirements. This could take a while.
Collecting pypdf2!=1.27.10,!=1.27.11,<2.0.0,>=1.27.4
  Using cached PyPDF2-1.28.5-py3-none-any.whl (87 kB)
INFO: pip is looking at multiple versions of lxml to determine which version is compatible with other requirements. This could take a while.
Collecting lxml>3.0
  Using cached lxml-4.9.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl (7.0 MB)
INFO: pip is looking at multiple versions of jsonschema to determine which version is compatible with other requirements. This could take a while.
Collecting jsonschema<5.0.0,>=4.4.0
  Using cached jsonschema-4.15.0-py3-none-any.whl (82 kB)
INFO: pip is looking at multiple versions of fpdf2 to determine which version is compatible with other requirements. This could take a while.
Collecting fpdf2<3.0.0,>=2.5.2
  Using cached fpdf2-2.5.6-py2.py3-none-any.whl (233 kB)
INFO: pip is looking at multiple versions of dnspython to determine which version is compatible with other requirements. This could take a while.
Collecting dnspython>=2.0.0
  Using cached dnspython-2.2.0-py3-none-any.whl (266 kB)
INFO: pip is looking at multiple versions of defusedxml to determine which version is compatible with other requirements. This could take a while.
Collecting defusedxml>=0.6.0
  Using cached defusedxml-0.7.0-py2.py3-none-any.whl (25 kB)
INFO: pip is looking at multiple versions of cffi to determine which version is compatible with other requirements. This could take a while.
Collecting cffi>=1.12
  Using cached cffi-1.15.0-cp39-cp39-manylinux_2_12_x86_64.manylinux2010_x86_64.whl (444 kB)
INFO: pip is looking at multiple versions of beautifulsoup4 to determine which version is compatible with other requirements. This could take a while.
Collecting beautifulsoup4>=4.7.0
  Using cached beautifulsoup4-4.11.0-py3-none-any.whl (71 kB)
INFO: pip is looking at multiple versions of qtpy to determine which version is compatible with other requirements. This could take a while.
Collecting QtPy
  Using cached QtPy-2.1.0-py3-none-any.whl (68 kB)
INFO: pip is looking at multiple versions of pywebview to determine which version is compatible with other requirements. This could take a while.
INFO: pip is looking at multiple versions of pyqtwebengine to determine which version is compatible with other requirements. This could take a while.
Collecting pyqtwebengine
  Using cached PyQtWebEngine-5.15.5-cp36-abi3-manylinux1_x86_64.whl (228 kB)
INFO: pip is looking at multiple versions of pyqt5 to determine which version is compatible with other requirements. This could take a while.
Collecting PyQt5
  Using cached PyQt5-5.15.6-cp36-abi3-manylinux1_x86_64.whl (8.3 MB)
INFO: pip is looking at multiple versions of proxy-tools to determine which version is compatible with other requirements. This could take a while.
INFO: pip is looking at multiple versions of docutils to determine which version is compatible with other requirements. This could take a while.
Collecting docutils
  Using cached docutils-0.18.1-py2.py3-none-any.whl (570 kB)
INFO: pip is looking at multiple versions of xlwt to determine which version is compatible with other requirements. This could take a while.
INFO: pip is looking at multiple versions of xlutils to determine which version is compatible with other requirements. This could take a while.
INFO: pip is looking at multiple versions of xlrd to determine which version is compatible with other requirements. This could take a while.
INFO: pip is looking at multiple versions of tzlocal to determine which version is compatible with other requirements. This could take a while.
INFO: pip is looking at multiple versions of tweepy to determine which version is compatible with other requirements. This could take a while.
Collecting tweepy<4.0.0,>=3.8.0
  Using cached tweepy-3.9.0-py2.py3-none-any.whl (30 kB)
INFO: pip is looking at multiple versions of tenacity to determine which version is compatible with other requirements. This could take a while.
Collecting tenacity<9.0.0,>=8.0.1
  Using cached tenacity-8.0.1-py3-none-any.whl (24 kB)
INFO: pip is looking at multiple versions of simple-salesforce to determine which version is compatible with other requirements. This could take a while.
Collecting simple_salesforce<2.0.0,>=1.0.0
  Using cached simple_salesforce-1.12.1-py2.py3-none-any.whl (119 kB)
INFO: pip is looking at multiple versions of selenium to determine which version is compatible with other requirements. This could take a while.
INFO: pip is looking at multiple versions of pdfminer-six to determine which version is compatible with other requirements. This could take a while.
INFO: pip is looking at multiple versions of rpaframework-pdf to determine which version is compatible with other requirements. This could take a while.
INFO: pip is looking at multiple versions of rpaframework-dialogs to determine which version is compatible with other requirements. This could take a while.
Collecting rpaframework-dialogs<4.0.0,>=3.0.0
  Using cached rpaframework_dialogs-3.0.0-py3-none-any.whl (18 kB)
INFO: pip is looking at multiple versions of pywebview to determine which version is compatible with other requirements. This could take a while.
INFO: pip is looking at multiple versions of rpaframework-core to determine which version is compatible with other requirements. This could take a while.
INFO: pip is looking at multiple versions of robotframework-seleniumtestability to determine which version is compatible with other requirements. This could take a while.
INFO: pip is looking at multiple versions of robotframework-seleniumlibrary to determine which version is compatible with other requirements. This could take a while.
Collecting robotframework-seleniumlibrary<6.0.0,>=5.1.0
  Using cached robotframework_seleniumlibrary-5.1.2-py2.py3-none-any.whl (94 kB)
  Using cached robotframework_seleniumlibrary-5.1.1-py2.py3-none-any.whl (94 kB)
  Using cached robotframework_seleniumlibrary-5.1.0-py2.py3-none-any.whl (94 kB)
INFO: pip is looking at multiple versions of rpaframework-dialogs to determine which version is compatible with other requirements. This could take a while.
INFO: pip is looking at multiple versions of robotframework-requests to determine which version is compatible with other requirements. This could take a while.
Collecting robotframework-requests<0.10.0,>=0.9.1
  Using cached robotframework_requests-0.9.2-py3-none-any.whl (20 kB)
INFO: This is taking longer than usual. You might need to provide the dependency resolver with stricter constraints to reduce runtime. See https://pip.pypa.io/warnings/backtracking for guidance. If you want to abort this run, press Ctrl + C.
INFO: pip is looking at multiple versions of rpaframework-core to determine which version is compatible with other requirements. This could take a while.
INFO: pip is looking at multiple versions of robotframework-seleniumtestability to determine which version is compatible with other requirements. This could take a while.
INFO: pip is looking at multiple versions of robotframework-seleniumlibrary to determine which version is compatible with other requirements. This could take a while.
  Using cached robotframework_requests-0.9.1-py3-none-any.whl (20 kB)
INFO: pip is looking at multiple versions of robotframework-pythonlibcore to determine which version is compatible with other requirements. This could take a while.
INFO: pip is looking at multiple versions of robotframework to determine which version is compatible with other requirements. This could take a while.
Collecting robotframework!=4.0.1,<6.0.0,>=4.0.0
  Using cached robotframework-5.0-py3-none-any.whl (638 kB)
INFO: This is taking longer than usual. You might need to provide the dependency resolver with stricter constraints to reduce runtime. See https://pip.pypa.io/warnings/backtracking for guidance. If you want to abort this run, press Ctrl + C.
INFO: This is taking longer than usual. You might need to provide the dependency resolver with stricter constraints to reduce runtime. See https://pip.pypa.io/warnings/backtracking for guidance. If you want to abort this run, press Ctrl + C.
INFO: This is taking longer than usual. You might need to provide the dependency resolver with stricter constraints to reduce runtime. See https://pip.pypa.io/warnings/backtracking for guidance. If you want to abort this run, press Ctrl + C.
  Using cached robotframework-4.1.3-py2.py3-none-any.whl (659 kB)
INFO: pip is looking at multiple versions of robotframework-requests to determine which version is compatible with other requirements. This could take a while.
  Using cached robotframework-4.1.2-py2.py3-none-any.whl (659 kB)
  Using cached robotframework-4.1.1-py2.py3-none-any.whl (658 kB)
INFO: This is taking longer than usual. You might need to provide the dependency resolver with stricter constraints to reduce runtime. See https://pip.pypa.io/warnings/backtracking for guidance. If you want to abort this run, press Ctrl + C.
  Using cached robotframework-4.1-py2.py3-none-any.whl (657 kB)
  Using cached robotframework-4.0.3-py2.py3-none-any.whl (655 kB)
  Using cached robotframework-4.0.2-py2.py3-none-any.whl (655 kB)
INFO: pip is looking at multiple versions of robotframework-pythonlibcore to determine which version is compatible with other requirements. This could take a while.
INFO: pip is looking at multiple versions of robotframework to determine which version is compatible with other requirements. This could take a while.
  Using cached robotframework-4.0-py2.py3-none-any.whl (653 kB)
INFO: pip is looking at multiple versions of pyyaml to determine which version is compatible with other requirements. This could take a while.
INFO: pip is looking at multiple versions of python-xlib to determine which version is compatible with other requirements. This could take a while.
Collecting python-xlib>=0.17
  Using cached python_xlib-0.30-py2.py3-none-any.whl (178 kB)
INFO: This is taking longer than usual. You might need to provide the dependency resolver with stricter constraints to reduce runtime. See https://pip.pypa.io/warnings/backtracking for guidance. If you want to abort this run, press Ctrl + C.
INFO: This is taking longer than usual. You might need to provide the dependency resolver with stricter constraints to reduce runtime. See https://pip.pypa.io/warnings/backtracking for guidance. If you want to abort this run, press Ctrl + C.
  Using cached python_xlib-0.29-py2.py3-none-any.whl (176 kB)
  Using cached python_xlib-0.28-py2.py3-none-any.whl (176 kB)

....

.... and the output will continue for the next 4+ hours (of course depending on your network speed, machine performance, etc.)

@vjmp vjmp added the "S: needs triage" and "type: bug" labels on Oct 3, 2022
@pfmoore
Member

pfmoore commented Oct 3, 2022

Thanks for the detailed report. Unfortunately, some sets of requirements trigger exponential behaviour, and while we might be able to alter the heuristics we use to "fix" those cases, we'll inevitably break a different case in the process.

There has been a lot of work done by volunteers (either pip maintainers or pip users) to examine possible improvements - examples are #10884, sarugaku/resolvelib#64, #10479, #10201. I suggest you read these for background. A search on the issue tracker for "is:issue resolver performance" should find plenty more.

In particular, #10479 (comment) might be relevant here.

Basically, though, the answer is that it's not even theoretically possible to avoid all pathological behaviour. We've done a lot of work algorithmically, and we have implemented heuristics to address the common cases of slowdown. It's entirely possible that there are still improvements to be made, but we have to be extremely careful that we don't simply adjust the trade-offs in a way that hurts other users when we change things.

If you're interested, we'd welcome suggestions on changes that you think might help. Or if you find a specific feature of your example that might generalise, that would also be useful (as would suggesting approaches to address that feature).

In the meantime, however, the following approaches have been useful for others trying to work around long resolve times:

  1. Don't let this happen in CI, if you can avoid it. Test the install process locally, where you can Ctrl-C a long-running resolve, and identify the problem and any workaround before deploying it to CI.
  2. If you're building an application, pin your dependencies to exact versions, then the resolve doesn't have to do anything complicated anyway.
  3. If you have to depend on version ranges, use constraints files to eliminate parts of the solution space that you know won't lead to valid solves (for example, add constraints to stop pip considering very old versions of your dependencies).
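
For example (with illustrative package names taken from the report above), a constraints file that forbids ancient releases might contain just:

selenium>=3.141.0
robotframework>=4.0

and is applied with pip install -r requirements.txt -c constraints.txt. Unlike a requirement, a constraint never causes anything to be installed; it only limits which versions the resolver may consider.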

@pfmoore pfmoore added the "C: dependency resolution" label and removed the "type: bug" and "S: needs triage" labels on Oct 3, 2022
@pfmoore
Member

pfmoore commented Oct 3, 2022

@pypa/pip-committers I'm surprised we don't have a "type: performance" label. Also, there's nothing for "known limitation" - I was tempted to mark this as "not a bug" but that seems a bit strong.

I've added those two labels for this issue (and other resolver performance issues) - if anyone would prefer to handle this another way, please say so.

@pfmoore pfmoore added the "type: performance" and "resolution: known limitation" labels on Oct 3, 2022
@vjmp
Author

vjmp commented Oct 3, 2022

The bug part is that it takes too long, without control. So the bug is that missing control on the user's side.

If we could just export MAX_PIP_RESOLUTION_ROUNDS=50000, for example (and that would override the now hard-coded try_to_avoid_resolution_too_deep = 2000000 definition), that would give us control over it.

We could then use this in our automation, limit the impact of time taken, and fail faster.

And yes, I understand the problem space, and that backtracking is a "feature". It is fine when you have a small dependency tree. We have that big tree, and just want to make it fail faster.

@pfmoore
Member

pfmoore commented Oct 3, 2022

If we could just export MAX_PIP_RESOLUTION_ROUNDS=50000, for example (and that would override the now hard-coded try_to_avoid_resolution_too_deep = 2000000 definition), that would give us control over it.

Please read some of the other issues that I referenced. This was discussed extensively some time ago.

Or if you want to run pip with a maximum duration, there's the Unix timeout command (which I know next to nothing about; I found it via a Google search for "unix command to run subprocess with timeout"). The Python subprocess module lets you do the same thing, if you prefer to use Python for portability.
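
A minimal sketch of the subprocess approach, assuming a 10-minute budget and a requirements file (both just example values):

import subprocess
import sys

try:
    # Kill the pip run if resolution exceeds the time budget
    subprocess.run(
        [sys.executable, "-m", "pip", "install", "-r", "requirements.txt"],
        timeout=600,
        check=True,  # a non-zero pip exit still raises CalledProcessError as usual
    )
except subprocess.TimeoutExpired:
    print("pip did not finish within 10 minutes; treating as failure")
    sys.exit(1)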

@kariharju

Hi,

I think the only thing needed would be the ability to control that try_to_avoid_resolution_too_deep = 2000000 externally.

If that value could be controlled, we could handle the different cases we are seeing. In some cases, we do not want any backtracking; in others, 200 steps would make sense; and only in the most complex cases would we even try with the full 2000000, just because that can mean 4+ hours of resolution time.
So instead of the hard-coded value, opening that up to an environment variable or a run flag would solve this.

The backtracking problem is really hard, and there are problems that I don't think it can (or even should) solve.
For example: "The resolution completed, but did I get what I asked for?".
As an example, if backtracking changes the major version of some dependency, that can break the actual code we want to run. So in some cases the resolution can succeed and fail at the same time 😉. Also, a resolution that takes over 30 minutes is, in most customer cases, a failure no matter what. The "correct" fix for that in every case should be to guide the developer to fix the dependencies, lock down versions, etc. In short, not everything should be left to pip to solve.

So just having the ability to control how much backtracking is done would solve this issue.

@pfmoore
Member

pfmoore commented Oct 11, 2022

I think the only thing needed would be the ability to control that try_to_avoid_resolution_too_deep = 2000000 externally.

Please read the other issues. This has been extensively discussed, and the issue is that there's no way to meaningfully know what a "reasonable" value is, without trying it. By the time you know you need to set the value smaller, it's too late - you've already taken 4 hours. And setting the value lower will never get a better result, it'll simply fail sooner (which is of no help if you've already determined that it's going to fail). In particular, the value has no useful link to how long pip is going to take to fail.

If you're convinced that setting this value is useful, I suggest you write a wrapper script that monkeypatches pip to let you do so, and try that out. If you find good evidence that setting the value is of benefit, feel free to present an argument based on actual real-world use cases.

Also note that if you want to set a time limit on how long pip will run when trying to resolve an install, you're better off using something like the Unix timeout command, which will let you do exactly that: timeout 10m python -m pip install .... (I'm not aware of a similar Windows command, but it should be possible to write something in Python that works the same way.)

@pradyunsg
Member

pradyunsg commented Oct 11, 2022

The "correct" fix for that in every case should be to guide the developer to fix the dependencies, lock-down version, etc. In short not everything should be left to pip to solve.

That's literally the documentation that https://pip.pypa.io/warnings/backtracking points to -- the link printed in the error messages.

This value is set so high to ensure that people don't hit it in anything other than the most pathological cases.

@pradyunsg
Member

If you'd like for this value to be set to something lower, that's... a tractable request -- I'm not opposed to the idea that we should reduce that number. :)

@pfmoore
Member

pfmoore commented Oct 11, 2022

I'm not opposed to the idea that we should reduce that number. :)

To be clear, nor am I. I am uncomfortable about making it user-settable, though, as we'll just end up with people wanting advice on what to set it to. And if we do reduce the number, I'd like to see some evidence that any proposed new value is (in some sense) better than the current value.

@notatallshaw
Member

notatallshaw commented Oct 16, 2022

As well as changing arbitrary variables like try_to_avoid_resolution_too_deep, I think it's important to also give the user more information about why resolution failed (e.g. it would be great if this could finally land: #11169); then users can improve their requirements/constraints in a more targeted way when it does fail. Also, of course, we should still be open to improved heuristics and techniques for backtracking.

As an aside, I spent a few hours today reproducing the test case and looking at whether it could be improved with backjumping. A year ago I made a hacky branch to test backjumping (which I talked a bit about here).

Whether backjumping would ultimately have helped the OP in general I am not sure. It certainly fails quicker on this specific set of requirements, because it discards non-viable paths and fails by trying very old versions of pyparsing, which fail to build metadata because of syntax errors (and which, if I rebase my branch, will cause pip to error out). However, after that the user will be a little stuck, as the only indications of what the problematic packages are are the ones pip had to download a lot and the one it errored out on; but these packages aren't the "root" of the backtracking problem, and pip doesn't give enough useful information in the logs to figure out which packages are causing the pathological backtracking.

That said, I think I will have time to work on open source again later this year, and I will take another crack at seeing whether I can definitively improve pip's backtracking with backjumping or similar; I will use this as one of the test cases.

@pfmoore
Member

pfmoore commented Oct 16, 2022

pip doesn't give enough useful information in the logs to figure out which packages are causing the pathological backtracking.

I think there's a related issue here as well. Understanding the cause of a pathological backtracking issue is a hard problem, and it's entirely possible that even with more information in the logs, many users simply won't have the knowledge or understanding to diagnose the issue and pinpoint the cause of the problem. So while there's unlikely to be any downside to improving the instrumentation of the resolution process, I fear that it won't actually improve things in a practical sense for affected end users.

@notatallshaw
Member

So while there's unlikely to be any downside to improving the instrumentation of the resolution process, I fear that it won't actually improve things in a practical sense for affected end users.

While I agree with your general sentiment, I would argue that if pip gave enough information that some users could solve their issue for some cases of pathological backtracking, that would be a big win over the current situation.

@uranusjr
Member

uranusjr commented Oct 17, 2022

I wonder if it'd make sense to offer optional telemetry only to people who set a custom resolution round limit. This may help collect real-world numbers while minimising the creepy aspect (it's clearly opt-in, and you are always free to just use the default).

@aigarius

We are seeing a related issue - in some environments we do not want the backtracking behaviour at all, and being able to set try_to_avoid_resolution_too_deep=1 for those test jobs, via an explicit environment variable or pip runtime option, would be essential. It would avoid pip (incorrectly) selecting an old and completely obsolete version of a dependency and running tests against it (which would be useless anyway), instead of failing with a clear and explicit error message like: "hey, your requirements cannot be satisfied because that one thing you depend on needs a newer Python version".

A user-configurable setting is important because pip is used in very different environments: in some, the goal is to get a working environment at whatever cost; in others, quickly failing to set up the environment (and producing an error message that explains why!) is actually more important. And only the user/admin/dev setting that environment up knows what they expect/want from pip.

@pfmoore
Member

pfmoore commented Oct 17, 2022

I wonder if it’d make sense to do optional telemetry only to people who set a custom resolution round.

Do you mean actual telemetry here (as in, pip uploads usage data to a central server somewhere)? If so, then I didn't think that was a route we wanted to take. If we do want to consider adding telemetry to pip, then I think there's a lot of places I'd rather see us have usage reporting than the resolver. For example, logging how many people only install from wheels, how many people use extra indexes, how many people install to locations other than virtual environments, ...

If you just mean "opt in uploading of data", then why not just add a --resolver-details-log option that replaces the existing reporter with one that writes out every reporting event to a file. The user can opt in by using that flag to create a log, and then upload it manually to somewhere we specify. That would be just as useful in practice as "opt in telemetry" and avoids the whole debate about pip "phoning home".

We are seeing a related issue - in some environments we do not want the backtracking behaviour at all

What exactly do you mean by this? You only ever want to consider the latest version on PyPI? That's easy using a simple filtering proxy for PyPI. But I suspect it would very often just fail. Or you can fully pin requirements, which avoids all backtracking. Or something else? Most approaches I can think of would fail on pretty much anything that had any form of dependency overlap.

If you can describe the algorithm you'd expect pip to use when given a set of requirements and constraint files, which determines what to install without using backtracking at all, then we can consider that as an option. But remember that the reason we removed the old resolver (which worked without backtracking) was because it gave incorrect results - any proposed approach must give correct results or fail¹.

Having said all of this, if you really want to try experimenting with try_to_avoid_resolution_too_deep=1, here's a totally unsupported, likely to break at a moment's notice, hack that you can run to see how well it works.

import pip._internal.resolution.resolvelib.resolver

# Keep a reference to the original resolve() so the wrapper can delegate to it
orig_resolve = pip._internal.resolution.resolvelib.resolver.RLResolver.resolve

def hacked_resolve(self, requirements, max_rounds):
    print("In hacked resolve!!!")
    # Ignore the max_rounds pip asked for and force a single resolution round
    return orig_resolve(self, requirements, 1)

# Monkeypatch the class so every resolve() call goes through the hack
pip._internal.resolution.resolvelib.resolver.RLResolver.resolve = hacked_resolve

# Copied from pip/__main__.py

import sys, os, warnings
if sys.path[0] in ("", os.getcwd()):
    sys.path.pop(0)

warnings.filterwarnings(
    "ignore", category=DeprecationWarning, module=".*packaging\\.version"
)
from pip._internal.cli.main import main as _main

sys.exit(_main())

I just tried it, and python hack.py requests failed with "pip._vendor.resolvelib.resolvers.ResolutionTooDeep: 1". So it seems like it's pretty useless in practice...

Footnotes

  1. At least, when installing to an empty environment. Installing into an environment with packages already installed is a lot more complex, and even the new resolver doesn't take all cases into account in that situation.

@aigarius

We are seeing a related issue - in some environments we do not want the backtracking behaviour at all

What exactly do you mean by this? You only ever want to consider the latest version on PyPI?

We have an internal package repository and during testing of secondary packages, it is pretty natural to have them depend on some of our primary internal packages during their verification jobs.

Say we have a package "basepackage" that has had a couple hundred releases over the years, and we have an "otherpackage" that is also being developed here. "otherpackage" just depends on "basepackage" without pinning (as it is generally expected to work on all versions). Recently we hit a bug where pip for some reason decided that it did not like to install the latest version of our "basepackage", so each time the verification run started, pip would spend lots of time making a couple hundred requests to our internal package repository until it backtracked to a nearly 3-year-old version of our basepackage that it liked, and installed that as a dependency. Right after that it failed, because the other dependencies of "otherpackage" were pinned to versions more recent than the pinned versions of that ancient basepackage version.

So we get a failure with a pretty pointless error message about a version conflict between an ancient and a current version of some deeper dependency.

Instead we would like to know why pip was not installing the latest version of the requested package, as we would have expected it to. And at that point we would also like that job to fail, because there is no point in testing against an old base version. You will just be seeing already-fixed bugs and be really confused.

@pfmoore
Member

pfmoore commented Oct 17, 2022

Instead we would like to know why pip was not installing the latest version of the requested package, as we would have expected it to. And at that point we would also like that job to fail, because there is no point in testing against an old base version. You will just be seeing already-fixed bugs and be really confused.

So set your repository index (or layer a proxy in front of it) to only publish the latest versions. You don't need (or want) to stop pip backtracking for this use case, as far as I can see.

@pradyunsg
Member

@aigarius Can you confirm that you are using the latest version of pip?

Having said all of this, if you really want to try experimenting with try_to_avoid_resolution_too_deep=1

That would mean that you can't even have dependencies -- a round is the resolver evaluating a single candidate (or set of them). You won't be able to install more packages than set in that variable.

I'm personally fine if we implemented a --no-backtracking-in-resolve flag or something.

@pfmoore
Member

pfmoore commented Oct 17, 2022

I'm personally fine if we implemented a --no-backtracking-in-resolve flag or something.

Agreed, that seems like a relatively easy option to describe/understand. I have my reservations about whether in practice, it would solve the issues people are having, but that's more a question for the people likely to use such an option. Would any of the people in this thread hitting problems be interested in creating a PR for this?

@pradyunsg
Member

And if we do reduce the number, I'd like to see some evidence that any proposed new value is (in some sense) better than the current value.

Dropping a zero brings us down from 4 hours to ~30 minutes of trying to resolve (assuming similar per-round average timing).
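
For concreteness: 2,000,000 rounds in roughly 4 hours works out to about 7 ms per round, so 200,000 rounds would take about 24 minutes, i.e. roughly half an hour (again assuming similar per-round timing).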

I can see the argument that it is a more reasonable amount of time for failure in automation. :)

@aigarius

--no-backtracking-in-resolve would likely solve our issues (or more specifically, expose where we have wrong assumptions in our processes/code)

We are seeing the unexpected behavior when using pip 22.2. Not publishing old versions is sadly not a solution for us, as we also need to support re-execution of tests with old releases of our final software/hardware package, which then require (pinned) versions of our tools that were released/versioned together with that package at the time.

Today I also observed pip happily downgrading an installed package to resolve a dependency conflict; being able to disable that with a dedicated flag would likely also be a good idea.

@kariharju

Personally, I do not see how opening up that hard-coded value to be configurable would cause problems or require added guidance.

Fine-tuning and tweaking the hard-coded default value of try_to_avoid_resolution_too_deep is always somewhat of a breaking change, so I would not touch the default value (although dropping a zero might make sense).

Using time alone to determine whether resolution worked or not is not a viable option, because we are running these things on all sorts of machines, cross-platform, in VMs and containers, so relying on time is just asking for trouble; this is why the backtracking amount is a much better solution.

--no-backtracking-in-resolve would resolve one use case but not all.. still, that gets my vote 😉.

Having the ability to set try_to_avoid_resolution_too_deep is all that would be needed.. plus finding the reason why it seems to crash when hitting the limit instead of exiting gracefully.

We for example know that one of our locked-down / frozen dependency sets consists of 170 sub-dependencies, so we could easily set the amount of allowed backtracking to some percentage of the value we know. That would enable us to get some minor resolution fixes but also avoid the 4-hour worst-case scenarios.. this has huge effects in CI/CD and end-user cases where the user is definitely NOT the one who knows what to do.

What problems would come from reading the value for try_to_avoid_resolution_too_deep from an environment variable (using it if it exists and the default if not)? What am I missing?

@RonnyPfannschmidt
Contributor

If the backtracking takes longer than a few minutes, typically something is wrong.

I'd consider it a good idea to cap resolution time/depth by default, so people can opt in to more when the edge cases hit.

For most usages I believe resolution time is less than a minute, and people with tricky sets should be guided into giving pip hints instead of hoping for the best.

@notatallshaw
Member

notatallshaw commented Oct 21, 2022

@vjmp I am not part of pip, but I would say that if you are distributing your installation via conda dependency files, it would be significantly easier to keep everything inside conda. Either fix the rpaframework feedstock on conda-forge, or create your own feedstock and publish it to your own conda channel; then there is no dependency on pip at all. This is a double win, because there is no guarantee that conda and pip will install mutually compatible packages.

However, if you are only using conda to bootstrap a Python environment, I would suggest you also distribute a constraints file to your customers that represents a pinned application install (if you need an example, look at Apache Airflow: they install a full environment using every possible requirement, then freeze the environment and use that as their constraints file). Then tell your customers to use conda to bootstrap the Python environment, activate the environment, and then pip install the Python requirements with the constraints file. With a well-specified constraints file, pip will not engage in any backtracking.
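
A minimal sketch of that workflow (file names are illustrative): in a known-good environment, run

pip install -r requirements.txt
pip freeze > constraints.txt

and have customers install with

pip install -r requirements.txt -c constraints.txt

Since every transitive dependency is then pinned by the constraints file, the resolver's first choice always succeeds and there is nothing to backtrack over.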

@pfmoore
Member

pfmoore commented Oct 21, 2022

No, that is not the "official pypa/pip recommendation", and frankly you're coming across as a bit passive-aggressive here. I was responding to @kariharju, and in particular to their "what is the blocker / problem" question. I never once suggested that the solution to your problem was to fork pip and distribute that fork.

In my very first response here, I offered some suggestions on how you can address this issue for your system, without any changes to pip. In contrast, reducing the number of resolution rounds wouldn't fix anything; it would just make pip fail faster, with less useful diagnostic output.

There have been positive comments from both @pradyunsg and myself on this issue, just not to the simplistic "let people tweak the value" idea. But someone still needs to take one of the options we've said is potentially acceptable, create a PR for it, and handle any follow-up discussions and/or concerns. Which will involve reviewing the issues I linked, and explaining how the proposed solution relates to them. You should not assume that one of the pip maintainers will simply implement this for you - we're all volunteers and we have many other commitments, so waiting for one of us to have enough spare time to pick this up (as well as having the interest to work on this rather than another issue) is likely to be a rather long wait.

I'm going to avoid commenting further here. I think I've said all I can usefully add, and the responses are getting too heated so I think it's worth everyone cooling off a bit.

@vjmp
Author

vjmp commented Oct 21, 2022

@pfmoore I'm sorry if I sound passive-aggressive; English is not my native language. I'm a Finn.

It was very unclear to me what "your copy of pip" means, because we are not looking for "works on my machine" solutions, and I took it as a recommendation to "fork pip", the open-source way. Thank you for clearing that up and stating that this was not the intention.

I also took the statement "Nobody willing to do the work to get consensus on a solution and then implement it." literally, meaning that there will be no solution for this from the pypa/pip side, and that we are on our own.

@pfmoore Thank you and others for your time and patience. And my deepest apologies if I have offended you or others.

@vjmp
Author

vjmp commented Oct 21, 2022

@notatallshaw It is the other way round: our customers write RPA robots, and provide whatever mix of dependencies to our orchestration, which organizes an environment where those robots can run. We provide a "templated" start for customers to build on, but they decide what versions they want to use. And this all works fine in "happy path" cases. The example (repo) at the top of this issue is actually "a robot", and there are examples showing how to see success and failures.

The problem is that when there is a "not so happy path" and pip goes into backtracking mode, 100+ dependencies cause environment resolution to take a long time to fail. And "time" here is a relative concept, since machine and network performance varies from customer to customer and from location to location. That is why fixed wall-clock timeouts are not good solutions, and people are bad at understanding what time means for a particular computer's performance. Some backtracking might be OK, but if it were configurable (as a number of cycles), it would be consistent regardless of machine or network speed (when the set of dependencies remains the same). Worst of all would be the case where, on a fast machine with fast internet, the resolver finishes successfully, but on a slower machine it "times out"; that makes things non-repeatable, non-consistent, and non-debuggable. That would make for a "bad customer experience".

Yes, we recommend customers use conda-forge dependencies where available, but getting all RPA framework dependencies into "conda-forge" is quite a "Mission Impossible".

And yes, I know, there is never a right solution, only solutions that get selected and implemented.

@pfmoore
Copy link
Member

pfmoore commented Oct 21, 2022

And my deepest apologies if I have offended you or others.

No problem. I probably over-reacted. Apart from anything else, it didn't occur to me that you might not be a native speaker - that's entirely on me and I apologise.

I also took the statement "Nobody willing to do the work to get consensus on a solution and then implement it." literally, meaning that there will be no solution for this from the pypa/pip side.

Unfortunately, that probably is accurate. The pip maintainers team consists of maybe 4 or 5 people¹, all of whom are unpaid volunteers. And most of our time is spent doing support, project management, and trying to pay down technical debt 🙁 So realistically, a lot of the work on implementing new features falls on community members who have an interest, and/or need those features. And when I said "nobody is willing", I specifically meant that no such community member has yet stepped up to work on this issue (and as you can see from the links I quoted, it's far from being a new problem).

There's no easy solution here, and believe me, I'm probably as frustrated as you are with the current situation (there are a bunch of features I'd like to work on, but I simply don't have the time). I hope this makes the situation clearer, though.

Footnotes

  1. It's a bit variable, we're not all active all the time.

@pradyunsg
Copy link
Member

I'm writing this on a car ride to the airport, so apologies for the blunt responses and terseness.

No one is asking anyone to fork pip. Paul was referring to his script from #11480 (comment), which I pointed out as having significant caveats in #11480 (comment).

There are at least three comments with concrete suggestions that don't involve exposing a confusing option to users that's difficult to reason about:

Anyone is welcome to drive one of these forward. They're all reasonable improvements that multiple maintainers have spoken in favour of.

there will be no solution for this from the pypa/pip side

Not right now; this is a gap (that's why we've got a known limitation label on this).


I'd like us to push this discussion in a direction where we actually make improvements, instead of talking past each other.

So, here's a concrete and somewhat aggressive position that I'm gonna take: controlling resolver "rounds" is not something we're going to expose to users. If there's going to be any more discussion about exposing that to end users, I'm going to close this issue out and open separate issues for the specific suggestions I've linked to above.

@notatallshaw
Copy link
Member

notatallshaw commented Oct 24, 2022

@notatallshaw It is the other way round: our customers write RPA robots

If I was in your situation I would still try providing a constraints file to your customers to say "this is what we tested our application against" and "when you are doing pip install please use these constraints", much in the same way Apache Airflow recommends: https://airflow.apache.org/docs/apache-airflow/stable/installation/installing-from-pypi.html

From my point of view this provides a few benefits:

  • It means your customers are using the same transitive dependencies as you tested with
  • If customers add mutually incompatible requirements it is going to appear much quicker and they can bring it up with you much faster
  • When a customer requires a dependency that explicitly breaks your constraints, they are going to understand much faster that they are walking into the problems of complex requirement dependencies, because they will have to take an action: either manually editing the constraints file or not using it at all

I may be over simplifying your problem, but I've been very happy using well specified constraints files as a tool to make multiple environments with slightly different requirements work well with each other.
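For illustration, a minimal version of that workflow, with hypothetical file names and purely illustrative pins:

    # in the vendor's tested environment: capture every resolved version
    pip freeze > constraints.txt

    # constraints.txt then pins the transitive dependencies, e.g.
    #   selenium==4.4.3
    #   robotframework==5.0.1

    # customers install their own requirements against those pins
    pip install -r requirements.txt -c constraints.txt

Constraints only restrict the versions of packages that something else actually pulls in; they never add packages to the install, which is what makes a large constraints file safe to hand out broadly.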

@kariharju
Copy link

Yeah apologies on my behalf as well, Friday evening is not the best time to comment on these kinds of things ✋
The last answers got me the answer I was looking for: this is about support and maintenance, where every extra thing is a big thing, so I totally understand the point now.

Against this backdrop, @pradyunsg's three-option scoping makes a lot more sense, and I think the third option, getting predictable outputs that applications can parse from the pip output, would probably solve the situation and enable us to determine when enough backtracking has happened.

I'll open our case a bit more to answer @notatallshaw's question:
In our case we are providing tools for users who are creating RPA robots (with Python and Robot Framework) that are not just about Python. We need more, and we have already solved some big puzzles to enable getting other apps like browsers, AWS tools, Terraform, ... from conda-forge, and to enable pip loading into isolated environments that can also be moved from one machine to another. Here, relying on things that are installed on the end user's machine is a big no-no, as it just leads to huge IT costs. Our tooling can make a freeze file that covers conda-forge, pip, and some extra post-install scripts to achieve a fully locked environment, even for environments that include things like nodejs.

The problem is that we need to be able to build that environment somewhere and also provide tooling for the devs / IT people who are managing the dependencies. There, a CI could be burning a lot of time and energy running backtracking without any human interaction. So predictable behavior of the base tools is everything to us.

I'll check if we could pitch in and try to help out with the notifications.
(..and again sorry for the flaming)

@pradyunsg pradyunsg changed the title After 2000000 unconfigurable resolution rounds later, pip fails (correctly) to come up with resolution. This takes 4+ hours, which is too long. Improving failures after 2000000 unconfigurable resolution rounds Oct 24, 2022
@potiuk
Copy link
Contributor

potiuk commented Oct 24, 2022

I may be over simplifying your problem, but I've been very happy using well specified constraints files as a tool to make multiple environments with slightly different requirements work well with each other.

Very much this. As the one who designed and implemented the way we use the constraints mechanism in Airflow, I can confirm your description is spot-on.

Also @kariharju - what you seem to need is just to build a reference Docker image automatically in your CI. Some parts of this have been discussed in #11527.

We are doing very much this in Apache Airflow: we have a CI that builds the right Docker container with all the tools and requirements needed to run any tests (and a separate one for production), we publish it in the GitHub Registry, and we have built tooling around it so that developers can run any tests and users can use the images (and the Dockerfiles to build their own customized images) in production. And constraints are a crucial part of our pipelines for building the images. You can also see more about our solution in my talk https://www.youtube.com/watch?v=_SjMdQLP30s&t=2549s

See https://github.com/apache/airflow/blob/main/CI.rst for CI details and https://airflow.apache.org/docs/docker-stack/build.html for production image.

@kariharju
Copy link

Fully off-topic this one:
Docker does not solve our cases 😉
We cannot assume an office worker can (or is allowed to) set up Docker on their machine (+ it is too slow), the IT departments really do not want to touch Windows + Docker, and the RPA bots need to interact with the user's system, so Docker-style isolation is just out of the picture.

@aigarius
Copy link

We do have Docker for the (pinned) dependencies of our internal projects, but for the actual internal projects we are now at the point where we distribute them as tarballs and then install those tarballs into the Docker images with a fake PyPI index URL set, to prevent pip from even trying to install anything other than what is given to it. Oh, and "--no-build-isolation" too. If there is a dependency conflict, it needs to fail and get developers to fix the dependencies, rather than try downloading semi-random versions of packages from the network to resolve them.

Like the last issue was really fun:

  • we have a Docker container prepared with all needed dependencies, latest viable versions and all
  • in the entrypoint the container installs internal project-specific packages (one by one!)
  • pip happily installs correct provided package A
  • the provided package B is of a slightly outdated version (by mistake) and its dependencies conflict with the dependencies of package A, so pip goes off to our internal package repo and finds a version of package A that does not conflict ... one from two years ago, when those version dependencies simply were not strictly defined in the package yet; pip happily installs that one
  • developers spend days trying to figure out why the system is working, but very incorrectly

There is a space for "try very hard to resolve all issues and get to something that works" and there is a space for "if the resolution is not trivial, then something has gone very, very wrong and any further guesses will only make it harder to debug". And I do not envy pip developers trying to thread that needle. :D
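For reference, the fail-fast, nothing-unexpected-from-the-network behaviour described above can also be achieved with stock pip flags (paths and package names here are hypothetical):

    pip install --no-index --find-links ./vendored-wheels --no-build-isolation package-a package-b

With --no-index, pip cannot reach any index at all, so a conflict surfaces immediately instead of being papered over by an ancient release pulled from the network.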

@notatallshaw
Copy link
Member

notatallshaw commented Oct 24, 2022

I'll open our case a bit more to answer @notatallshaw's question: In our case we are providing tools for users who are creating RPA robots (with Python and Robot Framework) that are not just about Python. We need more, and we have already solved some big puzzles to enable getting other apps like browsers, AWS tools, Terraform, ... from conda-forge, and to enable pip loading into isolated environments that can also be moved from one machine to another. Here, relying on things that are installed on the end user's machine is a big no-no, as it just leads to huge IT costs. Our tooling can make a freeze file that covers conda-forge, pip, and some extra post-install scripts to achieve a fully locked environment, even for environments that include things like nodejs.

The problem is that we need to be able to build that environment somewhere and also provide tooling for the devs / IT people who are managing the dependencies. There, a CI could be burning a lot of time and energy running backtracking without any human interaction. So predictable behavior of the base tools is everything to us.

This kind of setup is not unfamiliar to me, and in my experience you have to make a hard choice about which tool creates your Python package environment and what you're willing to support from your customers.

You can use conda to bootstrap your Python and its binary dependencies, but IMO you need to decide whether you are using conda or pip (or poetry, etc.) to specify your Python packages. Otherwise you will forever run into dependency problems like this between yourself and your customers.

If you are using pip, then a method available to help this situation is constraints. If you are using other tools, they offer their own methods. If you are mixing tools, then it's up to you to figure out how they behave together and probably to develop your own specific processes to reduce collisions and other problems (I have had to do that several times, and that's what led me to this conclusion).

But this is just from my experience, maybe someone else has a better solution.

@potiuk
Copy link
Contributor

potiuk commented Oct 27, 2022

If you are using pip, then a method available to help this situation is constraints.

Agree. I find constraints working really, really well and super powerful the way they are implemented. None of the other package managers have anything equivalent (I was even one of the few people who tried to convince Poetry to implement them, python-poetry/poetry#3225, but there seems to be no real interest), which led us to officially discourage people from using Poetry for Airflow and to reject their dependency issues, redirecting them to "here is the only way you can install Airflow and be sure it works, and it's pip + constraints".

The only drawback is that you somehow have to manage and publish the constraints at a public URL with the version of your software as part of the URL (another great feature of constraints is that you can use an http:// URL for them).
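Concretely, the Airflow pattern referenced above looks like this (version numbers purely illustrative):

    pip install "apache-airflow==2.7.3" \
      --constraint "https://raw.githubusercontent.com/apache/airflow/constraints-2.7.3/constraints-3.9.txt"

The version appears in both the package pin and the constraints URL, which is exactly the publishing overhead mentioned.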

@potiuk
Copy link
Contributor

potiuk commented Oct 27, 2022

BTW @pfmoore @pradyunsg -> I thought about maybe proposing to the PyPA that PyPI host constraints as optional metadata (linked to a package but updateable, unlike the package itself). That is a way I could see this becoming a "standard". For example, constraints could be applied automatically when they are present and you specify a --with-default-constraints option of pip. That would make it super easy for people to build a solution based on constraints, and it could be turned into a PEP / standard.

WDYT? Would that fly? (Of course this is just a very, very, very rough idea; it would require a LOT more discussion and consensus. I just wanted to hear your comments on whether it's one of: nonsense, maybe, plausible, great idea to explore ... :)

@notatallshaw
Copy link
Member

notatallshaw commented Mar 28, 2023

This issue is effectively resolved on Pip main (28239f9) likely thanks to sarugaku/resolvelib#111 and sarugaku/resolvelib#113.

The issue is still reproducible on Python 3.9 Pip 23.0.1 with the following command (takes a very long time with no apparent resolution):

pip download -d downloads -r https://raw.githubusercontent.com/vjmp/pipbacktracking/trunk/requirements_fail.txt

However on Pip main (28239f9) the following output is quickly given:

ERROR: Cannot install pywebview[qt]==3.6.2 and rpaframework-dialogs because these package versions have conflicting dependencies.

The conflict is caused by:
    pywebview[qt] 3.6.2 depends on pywebview 3.6.2 (from https://files.pythonhosted.org/packages/23/b6/e1da5ff929ea0eedae11b53f4273583dccb7a6de37b68388201db43eeafc/pywebview-3.6.2-py3-none-any.whl (from https://pypi.org/simple/pywebview/))
    robocorp-dialog 0.5.3 depends on pywebview==3.6.3; sys_platform == "linux"

To fix this you could try to:
1. loosen the range of package versions you've specified
2. remove package versions to allow pip attempt to solve the dependency conflict

ERROR: ResolutionImpossible: for help visit https://pip.pypa.io/en/latest/topics/dependency-resolution/#dealing-with-dependency-conflicts

I suggest this issue be closed and if there is some different set of requirements that causes long backtracking a new issue should be opened (I would actually be surprised given how powerful both these improvements are).

@pradyunsg
Copy link
Member

Actually, I want to bring down the number for max-rounds, since we’re going to have a more efficient resolve and failing quicker is a good idea IMO.

@uranusjr
Copy link
Member

+1 to looking into how low we can get the max round number down to.

@pradyunsg
Copy link
Member

pradyunsg commented Mar 28, 2023

Here's some numbers for us to consider based on back-of-the-napkin math...

  • 2_000_000 (current) -> ~4 hrs (250 mins)
  • 200_000 -> ~30 minutes (25 minutes)
  • 100_000 -> ~10 minutes (12 minutes)

I think there's a significant experience difference in 100_000 vs 2_000_000. The former is clearly better, but it won't be able to handle really complex graphs. OTOH, we're gonna have fewer rounds thanks to backjumping. I'm comfortable with the middle ground, but I'm happy to be aggressive and go down to 100_000 here too.
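A quick sanity check on those napkin numbers (using the minute figures in parentheses) shows the implied per-round cost staying roughly constant, so the cap scales the waiting time close to linearly:

    for rounds, minutes in [(2_000_000, 250), (200_000, 25), (100_000, 12)]:
        print(f"{rounds:>9} rounds ~ {minutes * 60 / rounds * 1000:.1f} ms/round")
    # prints roughly 7.5, 7.5 and 7.2 ms/round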


FWIW, this issue is about improving the messaging so that it's more actionable rather than a blunt "IDK what to do" message. That's a change that'll need to happen on both resolvelib's end and our end IMO -- and backjumping makes it harder to hit this case; it doesn't eliminate it.

@pradyunsg pradyunsg added the C: error messages Improving error messages label Jan 27, 2024
@pradyunsg pradyunsg changed the title Improving failures after 2000000 unconfigurable resolution rounds Improving failures at ResolutionTooDeep to include more context Jan 27, 2024
@notatallshaw
Copy link
Member

notatallshaw commented Jan 27, 2024

This issue is effectively resolved on Pip main (28239f9) likely thanks to sarugaku/resolvelib#111 and sarugaku/resolvelib#113.

When I wrote this, all known instances of ResolutionTooDeep were solved. Since then a few more have popped up, e.g. #12489, #12430, and #12395.

I have been working on a new PR, #12459 (I describe my reasoning in #12318). It has successfully resolved every new ResolutionTooDeep issue that I have been able to reproduce, and in general I see at least some speed-up in any complex backtracking situation.

@pradyunsg
Copy link
Member

Those are excellent improvements, but I think they're logically separate from what this issue is about.

At this point, the thing that this issue is tracking IMO is improving the information presented to the user when ResolutionTooDeep is raised to help them act on it. Currently, we're throwing away all the state and metadata available to the resolver without presenting any information to the user. It would be useful to present such information to the user and enable them to do something to trim the large space that the dependency resolution has tried to explore.

@kailando
Copy link

kailando commented Jan 27, 2024 via email

@notatallshaw
Copy link
Member

notatallshaw commented Jan 27, 2024

At this point, the thing that this issue is tracking IMO is improving the information presented to the user when ResolutionTooDeep is raised to help them act on it. Currently, we're throwing away all the state and metadata available to the resolver without presenting any information to the user. It would be useful to present such information to the user and enable them to do something to trim the large space that the dependency resolution has tried to explore.

I agree that it would be nice if it were useful; I'm just skeptical about how likely it is to be useful.

At ResolutionTooDeep that metadata is going to contain tens of thousands of packages, and the DFS-style algorithm that resolvelib implements doesn't elucidate in the metadata which of those packages were problematic (in fact, that information is mostly lost by the time you get to ResolutionTooDeep).

IMO it would be more useful for resolvelib to have at least one more reporter event at the time of discarding candidates because of backtracking; pip could log that metadata, and a diligent user could inspect the logs and find the common pattern of problematic packages and/or requirements. Or, just improve backtracking performance enough that there is never a real-world case of ResolutionTooDeep, and this becomes a non-issue.
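As a rough sketch of that idea, assuming resolvelib's rejecting_candidate reporter hook (fired in recent releases when a candidate is discarded during backtracking) and assuming candidates expose a name attribute, as pip's do:

    from collections import Counter
    from resolvelib import BaseReporter

    class BacktrackLogger(BaseReporter):
        """Tally discarded candidates so the worst offenders can be reported."""
        def __init__(self):
            self.rejected = Counter()

        def rejecting_candidate(self, criterion, candidate):
            # Called each time the resolver throws a candidate away while
            # backtracking; count it per project name.
            self.rejected[candidate.name] += 1

        def ending(self, state):
            for name, count in self.rejected.most_common(10):
                print(f"{name}: {count} candidates rejected")

Even a top-ten list like this would point a user at the packages worth pinning or constraining.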

All that said, I would very enthusiastically help test any PR that someone wants to submit that provides a useful message out of ResolutionTooDeep.
