# Create mousebender.resolve #105
A couple of relevant sections from existing standards:

---
On `identify()`: the important thing here seems to be having requirements that are for the same thing line up for merging. Apparently everyone makes extras part of the key: pip, PDM, and the extras example from resolvelib. What's interesting is that PDM and pip return a string while the resolvelib example returns either a string or a tuple of name and extras. Since the typing of `identify()`'s return value is generic, either works. Now we just have to figure out the representation of a requirement in order to generate that tuple. 😅
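For illustration, a hedged sketch of an extras-aware `identify()`, assuming that requirements and candidates both expose `name` and `extras` the way `packaging.requirements.Requirement` does (this is not mousebender's actual API):

```python
from packaging.utils import canonicalize_name

def identify(requirement_or_candidate):
    """Return the key under which requirements and candidates merge."""
    name = canonicalize_name(requirement_or_candidate.name)
    if requirement_or_candidate.extras:
        # Extras are part of the key, so home[cat] and home[dog] merge
        # separately from plain home (matching pip, PDM, and the
        # resolvelib extras example).
        return name, frozenset(requirement_or_candidate.extras)
    return name
```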
---

On `get_dependencies()`: depending on how candidate objects look, this could be very easy, or it could require going back to the wheel to get the dependencies. Since the resolver is the one having to figure out priorities, the dependencies probably don't have a priority order, so returning something like a set should be okay.
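A sketch of that, with `requires_dist` standing in for however the wheel's METADATA ends up being parsed:

```python
from packaging.requirements import Requirement

def get_dependencies(candidate):
    # No priority order here; the resolver decides what to try first,
    # so an unordered set is fine.
    return {Requirement(dep) for dep in candidate.requires_dist}
```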
---

On `is_satisfied_by()`: the interesting thing is that "the candidate is guaranteed to have been generated from the requirement", and yet we still have to check whether the candidate satisfies the requirement. 🤔 The best explanation I can think of is that multiple requirements got merged into a single requirement and this check verifies the candidate is still viable.
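Under that reading, the check could be as small as this (assuming `packaging` version and specifier objects on both sides):

```python
def is_satisfied_by(requirement, candidate):
    # The candidate was generated for this requirement's identifier, but
    # requirements may have been merged (and narrowed) since then, so
    # the specifier still has to be re-checked.
    return candidate.version in requirement.specifier
```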
---

On `get_preference()`: the PyPI wheel example from resolvelib and an open issue about a default implementation suggest just going with the number of candidates, as that prioritizes the most restricted distributions first. Pip's implementation is way more involved, per its docstring.
PDM's implementation is also more involved. The return value gives a hint as to what it prioritises:

```python
return (
    not is_python,
    not is_top,
    not is_file_or_url,
    not is_pinned,
    not is_backtrack_cause,
    dep_depth,
    -constraints,
    identifier,
)
```

It might be best to go with the simple solution as suggested in the open issue and example, and increase complexity as necessary.
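For reference, the simple approach is tiny (a sketch against the signature resolvelib's `AbstractProvider.get_preference()` currently takes):

```python
def get_preference(self, identifier, resolutions, candidates, information, backtrack_causes):
    # Fewer matching candidates == more constrained == resolve it first.
    return sum(1 for _ in candidates[identifier])
```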
---

On `find_matches()`: this seems to be where communicating with e.g. PyPI would come in (as the PyPI wheel example suggests). This does require returning things in a sorted order of preference.
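A sketch of that shape, assuming `packaging` specifier/version objects and a hypothetical `fetch_candidates()` that does the index communication:

```python
from packaging.specifiers import SpecifierSet

def find_matches(self, identifier, requirements, incompatibilities):
    # Intersect every specifier that applies to this identifier.
    merged = SpecifierSet()
    for requirement in requirements[identifier]:
        merged &= requirement.specifier
    skip = {candidate.version for candidate in incompatibilities[identifier]}
    # Results must be in preference order; newest version first is typical.
    return sorted(
        (candidate
         for candidate in fetch_candidates(identifier)  # hypothetical helper
         if candidate.version in merged and candidate.version not in skip),
        key=lambda candidate: candidate.version,
        reverse=True,
    )
```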
---

After all of those notes, it seems like the methods do the following (with a wheel-specific view): all the magic in terms of what to customize happens in `find_matches()`, and requirements are pretty much what `packaging.requirements.Requirement` already provides.
---

Same for the corresponding candidates -- every candidate generated by a `find_matches(requirement)` needs to have the same `identify()` result as the requirement it was generated for. I don't remember if we added an explicit error check for this in resolvelib, but flagging this since I don't see this nuance mentioned here.

PS: We need to improve that docstring for identify. 😅
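Spelled out as a check (the names here are stand-ins, not a real API):

```python
for candidate in provider.find_matches(identifier, requirements, incompatibilities):
    # Every match must map back to the identifier it was requested under.
    assert provider.identify(candidate) == identifier
```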
---

How does that play into extras? Based on what the extras example says, does that mean you have the same candidate wheel under multiple identifiers to satisfy any and all extras that are possible for a distribution (and thus cache a lot to avoid the cost of looking up the same thing for a distribution just because you now come across a new set of extras)? Or do you add a dependency from the distribution+extras identifier to the distribution itself, and have a virtual candidate that always satisfies the distribution+extras, so the resolver effectively ignores the distribution+extras node of the dependency graph?
---
For pip, yes-ish (-ish because I don't recall the caching details). The extra candidate is based on another candidate, and the way this is handled is by having the extra candidate depend on a requirement that only that exact base candidate can satisfy. For something like the following...

```toml
[project]
name = "home"
version = "1.0.0"

[project.optional-dependencies]
base = ["base-dependency"]
dog = ["dog-food", "dog-bed"]
cat = ["cat-food", "cat-bed"]
```

...a resolution that pulls in both the cat and dog extras ends up with a graph like this:

```mermaid
graph LR
subgraph legend
requirement(["requirement"])
candidate[["candidate"]]
end
subgraph cat-extra["identity = home[base,cat]"]
home-cat-requirement(["home[base,cat]"])
home-cat-version-one[["ExtrasCandidate(\n LinkCandidate(home-1.0.0-*.whl),\n [base,cat]\n)"]]
style home-cat-version-one text-align:left
end
subgraph dog-extra["identity = home[base,dog]"]
home-dog-requirement(["home[base,dog]"])
home-dog-version-one[["ExtrasCandidate(\n LinkCandidate(home-1.0.0-*.whl),\n [base,dog]\n)"]]
style home-dog-version-one text-align:left
end
subgraph cat-requirements ["layout grouping\ncat extra dependencies"]
cat-requirement-food(["cat-food"])
cat-requirement-bed(["cat-bed"])
end
subgraph base-requirements ["layout grouping\nbase extra dependencies"]
base-requirement(["base-dependency"])
end
subgraph home-one ["identity = home"]
home-requirement-one(["ExplicitRequirement(LinkCandidate(home-1.0.0-*.whl))"])
home-version-one[["LinkCandidate(home-1.0.0-*.whl)"]]
end
subgraph dog-requirements ["layout grouping\ndog extra dependencies"]
dog-requirement-food(["dog-food"])
dog-requirement-bed(["dog-bed"])
end
home-dog-requirement --> home-dog-version-one;
home-cat-requirement --> home-cat-version-one;
home-dog-version-one --> home-requirement-one;
home-dog-version-one --> base-requirement;
home-cat-version-one --> home-requirement-one;
home-cat-version-one --> base-requirement;
home-requirement-one --> home-version-one;
home-cat-version-one --> cat-requirement-food;
home-cat-version-one --> cat-requirement-bed;
home-dog-version-one --> dog-requirement-food;
home-dog-version-one --> dog-requirement-bed;
```

And, yes, I've not expanded the requirements that came from the extras into their corresponding candidates, because that isn't relevant to the extra-handling question. (A home 2.0.0 would add new nodes corresponding to those for 1.0.0, with basically the same connections if the dependencies are unchanged. Adding that in made mermaid render the graph in an unreadable format, so I've skipped it.)

Edit: Updated to explicitly reference the pip-side class object names.

---
@pradyunsg thanks for this! So if I understand this correctly, there are pseudo-candidates per extras set and distro version, which end up with an implicit requirement on the distribution itself -- i.e. no extras -- with a `==` pin on that exact version?
---

Yes, except that it's not done via a version specifier, since:

```pycon
>>> SpecifierSet("==1.0").contains("1.0+local_version_label")
True
```

Where the `ExplicitRequirement` comes in is that it is satisfied only by the exact candidate it was created from, with no version matching involved. I'm realising that this can (and does) lead to the resolver backtracking on the wrong requirement sometimes, since backtracking when an `ExplicitRequirement` is involved is going to result in suboptimal backtracking. The fix there is to prefer to backtrack on the requirements that involve an `ExplicitRequirement`.
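A toy sketch of that behaviour, with names modeled on pip's internal classes rather than taken from its actual code:

```python
class ExplicitRequirement:
    """Satisfied only by the single candidate it wraps."""

    def __init__(self, candidate):
        self.candidate = candidate

    def is_satisfied_by(self, candidate):
        # Object identity, not version matching, so a candidate for
        # "1.0+local_version_label" can't sneak past the way it would
        # with SpecifierSet("==1.0").
        return candidate is self.candidate
```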
---

That sounds contradictory. If backtracking when an explicit requirement is involved is suboptimal, why is the fix to prefer backtracking in that case? Are you saying explicit requirements should come earlier or later in the ordering that `get_preference()` provides?
---

And now I'm seeing why sarugaku/resolvelib#14 exists. 😅
---

Yea -- the preference for backtracking on the explicit requirements would be expressed through the ordering that `get_preference()` returns.
---

Started outlining the code in the 'resolve' branch.
---

The resolver now works! I haven't done constraints, but it works well enough that I have a PoC resolver driven by PyPI metadata that can be downloaded directly. If anyone has feedback on the high-level API, I would appreciate it, since that's the bit that can't easily be changed later.
---

Do you mind briefly explaining how the PoC differs from the resolver in pip?
---

At minimum, the blatant differences are:

There are probably other subtleties that I'm not aware of.

---
- Use resolvelib to keep compatibility with other projects like pip and PDM.
- Provide flexibility in how the resolution is handled:
  - Provider protocol:
    - `identify()`
    - `get_preference()`
    - `find_matches()`
    - `is_satisfied_by()`
    - `get_dependencies()`