Predictive interlock resolution #122
Comments
Now I understand that, though it took me longer than it should have to realize where it had come from. I wasn't carefully following discourse posts and the like, so I didn't even realize initially that "aggressive" bounding had happened. But even if there aren't any plans for such automated bounding in the future, I think this issue still deserves attention. In theory, upper bounds make sense, and I'd like to use them. I've experimented with that in the JuliaInterpreter->LoweredCodeUtils->Revise->Rebugger chain; it's a sufficiently simple and short dependency chain that I think the small annoyances are worth the peace of mind one gets from reduced likelihood of unforeseen incompatibilities. Conversely, with something like JuliaImages, the complexity of the dependencies means that in practice I'll avoid upper bounds for anything other than Julia itself, unless we develop tools that help sort out the problems such bounds create. |
@fredrikekre, I also noticed that you comment on Registry PRs with suggestions for bounds, and in some cases block registration over this. While I understand that version bounding can be very useful in a mature ecosystem, not all packages are at the level of maturity where this should matter, and I am not sure this strikes the best balance at this point. Could we wait for the state of the registry, and for the version resolution algorithm and interface (i.e. getting an explanation of why something is happening), to mature a bit? |
I only skip merging in the case of 0 constraints, though, and only "require" that users add constraints for julia. If we want to turn this ship around we need to start somewhere. |
I agree we want bounds in the long run. |
Following discussion in https://github.com/JuliaComputing/Registrator.jl/issues/122
@fredrikekre: I understand the goal; I just probably missed when that was announced as a requirement. In any case, I added a constraint for Julia to the skeleton.jl template so that users now have something to start with. |
CI tests for METADATA had a requirement for bounds on Julia. |
There's also a resolver bug that seems to trigger only when there are no constraints at all, so this is less of a hard requirement and more of a friendly "you may want to do this or there will be issues" suggestion.
If there are issues like this, please bring them to my attention—I can't know that there's something to fix or that someone is struggling unless I'm informed about it. There were some bugs in the initial week of the new registrator that caused a lot of problems. They have hopefully mostly been fixed now, by manually making fix commits like this one. |
People keep talking about upper bounds being imposed, but that's not what happens. Whatever bounds a package claims are respected and taken at face value when it comes to existing versions of dependencies. However, when a package claims to be compatible with a version of another package that does not yet exist, that claim is not taken at face value. Instead, it is "watered down" to the most plausible claim that is compatible with the way semantic versioning works. In its first week, Registrator used a different algorithm to determine these version ranges, one which did not respect semantic versioning; that was a problem, but it has been fixed for about a week. |
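To make the "watering down" concrete, here is a minimal sketch of the rule as described above. This is not Registrator's actual code, and the function names are made up for illustration: a claim about dependency versions that do not exist yet is narrowed to the newest existing version plus whatever semver guarantees on top of it.

```julia
# Illustrative sketch only (not Registrator's implementation); names are made up.
# A compat claim that reaches past existing releases is narrowed ("watered down")
# to what semver can justify from the versions that actually exist today.

# Semver-compatible ceiling for an existing version:
# 1.4.3 -> 2.0.0 (same major), 0.4.7 -> 0.5.0 (0.x minor bumps are breaking).
semver_ceiling(v::VersionNumber) =
    v.major > 0 ? VersionNumber(v.major + 1, 0, 0) : VersionNumber(0, v.minor + 1, 0)

function water_down(claimed_upper::VersionNumber, existing::Vector{VersionNumber})
    covered = filter(v -> v < claimed_upper, existing)   # existing versions the claim covers
    isempty(covered) && return nothing                   # claim covers nothing that exists yet
    # Record compatibility only as far as semver allows from the newest existing version.
    return min(claimed_upper, semver_ceiling(maximum(covered)))
end

# Example: the project file claims compatibility up to 3.0.0, but only 1.x
# releases of the dependency exist, so the registry records an upper bound of
# 2.0.0 (the next breaking release after 1.4.3).
water_down(v"3.0.0", [v"1.2.0", v"1.4.3"])   # -> v"2.0.0"
```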
I'm sure we just got trapped in the interregnum. I was very slow to realize the source of the trouble, and in trying to help fix the situation myself I spent more hours on it than I should have and got quite grumpy. (Sorry. I am behind on so many work deadlines and this reared up at a bad time.) But in retrospect I understand why this happened, and I even think it might have been a good thing in the long term, in moving towards a sustainable bounding system. Doing a bit of self-reflection, I think one of the reasons I was slow to realize the source of the trouble was that, as someone who has not dived deep into Pkg internals, I kept assuming that the registry was essentially a historical mirror of Project.toml and REQUIRE. It's quite confusing to look at a Project.toml, see no constraints, and yet not be allowed to install a package.
Semantic versioning aside, this is actually what I'm talking about. Semver is a very narrow communication channel that cannot express the full range of nuances; for example, if a new release of package X breaks backwards compatibility with package Y but not package Z, I might release it as X2.0.0 because it is a breaking change. Modifying Y and registering a new X2.0.0-compatible version seems completely rational. What seems less rational is the fact that I also need to re-register Z after the release of X, because X2.0.0 did not exist at the time Z was most recently registered. After all, as a conscientious developer, before tagging X2.0.0 I had locally tested Z against the breaking PR in X, and everything was fine, so why was there anything to worry about? And I had even checked Z's Project.toml file, and there was no constraint, so I was confident everything was going to be OK. But the registry, which I'm not accustomed to inspecting, gets in my way. Of course in a case involving just 3 packages this is not a big deal. But in a complex corner of the package ecosystem, it's possible to get in a situation where there might be a dozen (or more) packages that depend on X, but a change in X breaks only one of them. Now you have a bit of a scaling problem if you have to manually figure out what needs to be re-registered. |
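To illustrate that point with the hypothetical X/Y/Z packages above (made-up versions, not real registry data or Pkg internals): even though Z's Project.toml contains no bound on X, Z's registry entry records a bound derived from the X versions that existed when Z was last registered, and that is what blocks installing Z together with X2.0.0.

```julia
# Toy illustration with the hypothetical packages X and Z from the comment above;
# not real registry data.

# What Z's own Project.toml claimed at registration time: no bound on X at all.
project_compat_on_X = nothing

# What the registry recorded for Z: only X 1.x existed back then, so the
# open-ended claim was narrowed to the semver range of existing versions.
registry_compat_on_X = (lo = v"1.0.0", hi = v"2.0.0")   # meaning [1.0.0, 2.0.0)

in_range(v, c) = c.lo <= v < c.hi

in_range(v"1.4.3", registry_compat_on_X)   # true:  Z installs fine with X 1.4.3
in_range(v"2.0.0", registry_compat_on_X)   # false: Z + X2.0.0 won't resolve until
                                           # Z's registry compat is widened or a
                                           # new Z version is registered
```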
To be clear: registering a new version isn't necessary, but changing the version bounds on existing versions is. We didn't have to do this much in the past because the default has been to live YOLO-style and assume every version of everything is compatible with every version of everything else. That is nice from the developer perspective (don't do anything, get free upgrades!) but isn't without its problems: e.g. whenever someone does put an upper bound on something, all hell breaks loose because older versions which claimed that they were compatible with all possible future versions will get chosen by the resolver when the newer versions have sane and correct version bounds. What we're trying to do here is get to a place where:
1. claims of compatibility are taken at face value, i.e. whatever the developer puts in the project file goes into the registry; and
2. the registry doesn't contain claims of compatibility with versions that don't exist yet, which may later turn out to be false.
We used to succeed at 1 but fail at 2. This caused major problems whenever a recent package version got correctly capped, because older versions would then pop up and say "Pick meeee! I'm compatible with everything!" That of course wasn't true, and it ended up a) breaking things immediately and b) forcing lots of manual capping of old versions by someone maintaining METADATA. Currently, we're failing at 1 but succeeding at 2. How did that transition happen? The sync script that converted METADATA to the new General registry would look at claimed compatibility and figure out a range of actual versions of each dependency that were compatible. The way it did that was optimized for human readability, not semver compatibility, which is what caused some of the initial problems. That was fixed about a week after Registrator launched, so it is now based on semver. What happens currently is that compatibility ranges in the registry are determined by a combination of what's claimed in a new version's project file and which versions of its dependencies actually exist at the time of registration.
We want to get back to taking claims of compatibility at face value (recover property 1). We could just start doing this: whatever's in the project file goes into the registry. But then we're re-introducing time-bombs that we currently don't have. We'll have to fix those at some point, and it seems better to filter them as we go than to try to fix them all later. |
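A toy sketch of the failure mode described above (made-up package versions and a naive stand-in for the real resolver): once the newest release is correctly capped, an older release that claimed compatibility with everything is the one that gets picked.

```julia
# Toy version selection, not Pkg's actual resolver; versions are made up.
# Each entry: (version of the package, claimed exclusive upper bound on dependency D;
# `nothing` means "compatible with every D, forever").
releases = [
    (v"1.3.0", v"2.0.0"),   # new release, correctly capped: needs D < 2.0.0
    (v"1.2.0", nothing),    # old release with YOLO bounds: claims any D works
]

available_D = v"2.0.0"      # a breaking release of D that v1.2.0 cannot actually handle

admits(bound, d) = bound === nothing || d < bound

# A naive resolver picks the newest release whose claimed bound admits available_D:
chosen = maximum(ver for (ver, bound) in releases if admits(bound, available_D))
# chosen == v"1.2.0": instead of holding D back, the resolver silently selects the
# old, genuinely incompatible release -- exactly the "Pick meeee!" problem that
# used to require manual capping of old versions in METADATA.
```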
I agree that prima facie the lack of upper bounds looks like a complete minefield, and it should be fixed. But an insane system in the right hands (a responsive community that fixes problems quickly) can work reasonably well (not perfectly, but reasonably), and you definitely don't want to drive away those hands for the sake of hypotheticals. My suggested approach is to get some tools along the lines of what I suggest in the OP. Make it easy to develop despite having strict upper bounds, and more people will be willing to add them to their Project.toml files. If this gets prioritized and developed quickly, the existing bounds put in place by the script may survive at least partly intact. (We resolved the JuliaImages problems with PRs like JuliaRegistries/General#392.) |
Is modifying the registry the best idea here? Or is it better to tag patch releases of the dependencies? |
I think that since we don't support circular dependencies (right??), either is viable. (If you never modified the registry itself, circular dependencies with patch-level version bounds could force an infinite cycle of upgrades.) To me it seems a bit odd to release a version of a package with no changes compared to the previous one except the declared bounds, but I don't think there's anything particularly rational about that reaction. Indeed it's seeming less strange than it once did. |
I'm not sure I agree. Relaxing the bounds can be seen as a bugfix IMO, e.g. JuliaWeb/GitHub.jl#149. I kinda like that we don't modify the registry, since otherwise the project file for that release would not match what's in the registry. |
I'm 100% fine with that perspective. |
https://github.com/bcbi/CompatHelper.jl doesn’t solve all of these issues, but it’s a start. |
Not sure where this should be filed, but let's start here. I understand the reasons for the drive to move towards greater use of upper bounds on package versions (perhaps set by automatic tools), because no one can predict what a future version of a package will do. But for heavily interdependent ecosystems like JuliaImages, this can make it nearly impossible for beginning and intermediate users who bump versions in PRs in anticipation of a release even to get their commits to pass CI. For reference, a devoted GSoC applicant has been working for more than a week just to try to get a set of packages released under the new tagging system, with basically no other changes being made. This is a good thing to do, but the fact that it has taken so much effort is a waste of good developer resources.
As usual, the answer may be better or more tooling. I suspect we need a set of tools that allow:
Until we have that kind of tooling, I'd be very cautious about aggressively imposing upper bounds.
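One small building block for that kind of tooling, shown here purely as an illustration rather than as the specific tools being asked for: Pkg can already parse a [compat] entry and report which versions it admits. This assumes `Pkg.Types.semver_spec` is available in your Pkg version; it is an internal, undocumented helper, and the compat string and versions below are made-up examples.

```julia
# Check which versions a [compat] entry actually admits.
# Assumption: Pkg.Types.semver_spec (an internal helper) exists in this Pkg version.
using Pkg

spec = Pkg.Types.semver_spec("0.22, 1")   # same syntax as a [compat] entry

v"0.22.4" in spec   # true:  0.22.x is covered
v"0.23.0" in spec   # false: would need the entry (and perhaps the registry) updated
v"1.3.2"  in spec   # true:  anything in [1.0.0, 2.0.0) is covered
```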