
Predictive interlock resolution #122

Open
timholy opened this issue Apr 29, 2019 · 17 comments

Comments

@timholy (Member) commented Apr 29, 2019

Not sure where this should be filed, but let's start here. I understand the reasons for the drive towards greater use of upper bounds on package versions (perhaps set by automatic tools), because no one can predict what a future version of a package will do. But for heavily interdependent ecosystems like JuliaImages, this can make it nearly impossible for beginning and intermediate users, who bump versions in PRs in anticipation of a release, even to get their commits to pass on CI. For reference, a devoted GSoC applicant has been working for more than a week just to try to get a set of packages released under the new tagging system, with basically no other changes being made. This is a good thing to do, but the fact that it has taken so much effort is a waste of good developer resources.

As usual, the answer may be better or more tooling. I suspect we need a set of tools that allow a developer to:

  • ask "If I tag ImageCore 0.8, what packages will be incompatible with it?" (a rough sketch follows this list)
  • create a new branch of the relevant registry(ies), relax the apparent conflicts, and iteratively test all the packages with the relaxed bounds; if any of them fail, restore the bounds on just those packages.
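For the first bullet, something along these lines could serve as a starting point. This is only a rough sketch, not an existing tool: it checks the [compat] entries in the Project.toml files of packages you happen to have checked out locally (the registry's own Compat.toml uses a compressed range format and would need Pkg's internal parsers), and the paths, package name, and version below are placeholders.

```julia
# Rough sketch: which locally checked-out packages declare a [compat]
# entry that would exclude a hypothetical ImageCore 0.8 release?
using Pkg

function incompatible_with(pkgdirs::Vector{String}, dep::String, candidate::VersionNumber)
    hits = String[]
    for dir in pkgdirs
        project_file = joinpath(dir, "Project.toml")
        isfile(project_file) || continue
        project = Pkg.TOML.parsefile(project_file)
        compat = get(project, "compat", Dict{String,Any}())
        haskey(compat, dep) || continue
        spec = Pkg.Types.semver_spec(compat[dep])   # parse the [compat] string
        candidate in spec || push!(hits, get(project, "name", dir))
    end
    return hits
end

# For example, check every package checked out under ~/.julia/dev:
devdir = expanduser("~/.julia/dev")
pkgs = filter(isdir, joinpath.(devdir, readdir(devdir)))
incompatible_with(pkgs, "ImageCore", v"0.8.0")
```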

Until we have that kind of tooling, I'd be very cautious about being aggressive about imposing upper bounds.

@fredrikekre (Member) commented

I think that the Image* packages got caught with upper bounds left in from the conversion script, where we updated them on each commit. See e.g. https://github.com/JuliaRegistries/General/pull/390/files where no upper bound was placed on ImageCore.

@timholy (Member, Author) commented Apr 29, 2019

Now I understand that, though it took me longer than it should have to realize where it had come from. I wasn't carefully following discourse posts and the like, so I didn't even realize initially that "aggressive" bounding had happened.

But even if there aren't any plans for such automated bounding in the future, I think this issue still deserves attention. In theory, upper bounds make sense, and I'd like to use them. I've experimented with that in the JuliaInterpreter->LoweredCodeUtils->Revise->Rebugger chain; it's a sufficiently simple and short dependency chain that I think the small annoyances are worth the peace of mind one gets from reduced likelihood of unforeseen incompatibilities. Conversely, with something like JuliaImages, the complexity of the dependencies means that in practice I'll avoid upper bounds for anything other than Julia itself, unless we develop tools that help sort out the problems such bounds create.
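For concreteness, the kind of bounds I mean look something like this in a package's Project.toml; the version numbers here are purely illustrative, not the ones these packages actually use:

```toml
[compat]
julia = "1"                  # any 1.x release
JuliaInterpreter = "0.5"     # implies [0.5.0, 0.6.0): capped at the next breaking release
LoweredCodeUtils = "0.3.2"   # implies [0.3.2, 0.4.0)
```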

@tpapp (Contributor) commented Apr 29, 2019

@fredrikekre, I also noticed that you comment on Registry PRs with suggestions for bounds, and in some cases block registration over this.

While I understand that in a mature ecosystem version bounding can be very useful, not all packages are at the state of maturity where this should matter, and I am not sure this strikes the best balance at this point. Could we wait for the state of the registry, and for the version resolution algorithm and interface (i.e. getting an explanation of why something is happening), to mature a bit?

@fredrikekre (Member) commented

@fredrikekre, I also noticed that you comment on Registry PRs with suggestions for bounds, and in some cases block registration over this.

I only skip merging when there are zero constraints though, and only "require" that users add constraints for julia. If we want to turn this ship around we need to start somewhere.

@timholy (Member, Author) commented Apr 29, 2019

I agree we want bounds in the long run. As with gen_project, I'd say "develop the tools first, then start imposing locks." If nothing else, recent experience has increased clarity about what tools we might need in order to get there.

@tpapp (Contributor) commented Apr 29, 2019

@fredrikekre: I understand the goal, I just probably missed when that was announced as a requirement. In any case, I added a constraint for Julia to the skeleton.jl template so that users now have something to start with.

@fredrikekre (Member) commented

@fredrikekre: I understand the goal, I just probably missed when that was announced as a requirement.

CI tests for METADATA had a requirement for bounds on Julia.

@StefanKarpinski (Contributor) commented

There's also a resolver bug that seems to trigger only when there are no constraints at all, so this is less of a forced requirement and more of a friendly "you may want to do this or there will be issues" suggestion.

For reference, a devoted GSoC applicant has been working for more than a week just to try to get a set of packages released under the new tagging system, with basically no other changes being made. This is a waste of good developer resources.

If there are issues like this, please bring them to my attention—I can't know that there's something to fix or that someone is struggling unless I'm informed about it. There were some bugs in the initial week of the new registrator that caused a lot of problems. They have hopefully mostly been fixed now, by manually making fix commits like this one.

@StefanKarpinski (Contributor) commented

Until we have that kind of tooling, I'd be very cautious about being aggressive about imposing upper bounds.

People keep talking about upper bounds being imposed, but that's not what happens. Whatever bounds a package claims are respected and taken at face value when it comes to existing versions of dependencies. However, when a package claims to be compatible with a version of another package that does not yet exist, that is not taken at face value. Instead, that claim is "watered down" to the most plausible claim that is compatible with the way semantic versioning works. The first week of registrator used a different algorithm to determine these version ranges which did not respect semantic versioning, which was a problem, but that has been fixed for about a week.

@timholy (Member, Author) commented Apr 29, 2019

I'm sure we just got trapped in the interregnum. And I was very slow to realize the source of the trouble; in trying to help fix the situation myself, I spent more hours on it than I should have and got quite grumpy. (Sorry. I am behind on so many work deadlines and this reared up at a bad time.) But in retrospect I understand why this happened, and I even think it might have been a good thing, long term, in moving towards a sustainable bounding system.

Doing a bit of self-reflection, I think one of the reasons why I was slow to realize the source of the trouble was that, as someone who has not dived deep into Pkg internals, I kept assuming that the registry was essentially a historical mirror of Project.toml and REQUIRE. It's quite confusing to look at a Project.toml and see no constraints, yet not be allowed to install a package.

However, when a package claims to be compatible with a version of another package that does not yet exist, that is not taken at face value. Instead, that claim is "watered down" to the most plausible claim that is compatible with the way semantic versioning works.

Semantic versioning aside, this is actually what I'm talking about. Semver is a very narrow communication channel that cannot express the full range of nuances; for example, if a new release of package X breaks backwards compatibility with package Y but not package Z, I might release it as X2.0.0 because it is a breaking change. Modifying Y and registering a new X2.0.0-compatible version seems completely rational. What seems less rational is the fact that I also need to re-register Z after the release of X, because X2.0.0 did not exist at the time Z was most recently registered. After all, as a conscientious developer, before tagging X2.0.0 I had locally tested Z against the breaking PR in X, and everything was fine, so why was there anything to worry about? And I had even checked Z's Project.toml file, and there was no constraint, so I was confident everything was going to be OK. But the registry, which I'm not accustomed to inspecting, gets in my way.

Of course in a case involving just 3 packages this is not a big deal. But in a complex corner of the package ecosystem, it's possible to get in a situation where there might be a dozen (or more) packages that depend on X, but a change in X breaks only one of them. Now you have a bit of a scaling problem if you have to manually figure out what needs to be re-registered.

@StefanKarpinski (Contributor) commented

What seems less rational is the fact that I also need to re-register Z after the release of X, because X2.0.0 did not exist at the time Z was most recently registered.

To be clear: registering a new version isn't necessary, but changing the version bounds on existing versions is. We didn't have to do this much in the past because the default has been to live YOLO-style and assume every version of everything is compatible with every version of everything else. That is nice from the developer perspective (don't do anything, get free upgrades!) but isn't without its problems: e.g. whenever someone does put an upper bound on something, all hell breaks loose because now older versions which claimed that they were compatible with all possible future versions will get chosen by the resolver when the newer versions have sane and correct version bounds.
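For anyone following along, "changing the version bounds on existing versions" means editing the package's Compat.toml file inside the registry itself (e.g. General), not tagging anything. A made-up example, written in the registry's compressed range notation rather than Project.toml [compat] notation:

```toml
# Z/Compat.toml in the registry (illustrative entry): already-registered
# versions 0.4.x of Z currently accept only X 1.x,
#     ["0.4"]
#     X = "1"
# and the registry edit widens that in place so they also accept X 2.x:
["0.4"]
X = "1-2"
```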

What we're trying to do here is get to a place where:

  1. We take claims of compatibility with dependencies at face value at registration time—what's in the project file matches what goes into the registry.
  2. There aren't lots of ticking time bombs: wild, implausible claims of compatibility with every future version of dependencies all over the place.

We used to succeed at 1 but fail at 2. This caused major problems whenever a recent package version got correctly capped: older versions would then pop up and say "Pick meeee! I'm compatible with everything!", which of course wasn't true, and ended up a) breaking things immediately and b) forcing lots of manual capping of old versions by someone maintaining METADATA.

Currently, we're failing at 1 but succeeding at 2. How did that transition happen? The sync script that converted METADATA to the new General registry would look at claimed compatibility and figure out a range of actual, existing versions of dependencies which were compatible. The way it did that was optimized for human readability, not semver compatibility, which is what caused some of the initial problems. That was fixed to be based on semver about a week after Registrator launched.

What is happening currently is that compatibility ranges in the registry are determined by a combination of what's claimed in a new version's project file [compat] section and what versions of dependencies exist in the registry:

  • What's claimed in the project file is used to filter all dependency versions in the registry and then semver-compatible ranges are computed from the sets of compatible and incompatible existing versions of each dependency.

  • As a compromise, because people were so very upset about having "caps forced on them", this process considers 0.x versions to be compatible with each other.

We want to get back to taking claims of compatibility at face value (recover property 1). We could just start doing this: whatever's in the project file goes into the registry. But then we'd be re-introducing time bombs that we currently don't have. We'll have to fix those at some point, and it seems better to filter them as we go than to try to fix them all later.
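A toy model of that filtering step, to make the two bullets above concrete (this is not Pkg's actual code, and the claim string and version list are invented):

```julia
# Toy model of registration-time filtering: take the [compat] claim from the
# new version's Project.toml, keep only the dependency versions that already
# exist in the registry, and collapse the survivors into semver-style entries
# ("1" = [1.0.0, 2.0.0), "0.7" = [0.7.0, 0.8.0), since 0.x minors are breaking
# under strict semver; the 0.x compromise described above is not modeled here).
using Pkg

claimed  = "0.7, 0.8"                      # what the package's [compat] claims
existing = [v"0.6.0", v"0.7.3", v"0.8.1"]  # versions of the dep already registered

spec       = Pkg.Types.semver_spec(claimed)
compatible = filter(v -> v in spec, existing)

# Collapse to the semver "buckets" the surviving versions fall into.
buckets = unique(map(compatible) do v
    v.major == 0 ? "0.$(Int(v.minor))" : string(Int(v.major))
end)

registry_compat = join(buckets, ", ")      # -> "0.7, 0.8" for the inputs above
```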

@timholy (Member, Author) commented Apr 29, 2019

I agree that prima facie the lack of upper bounds looks like a complete minefield, and it should be fixed. But an insane system in the right hands (a responsive community that fixes problems quickly) can work reasonably well (not perfectly, but reasonably), and you definitely don't want to drive away those hands for the sake of hypotheticals.

My suggested approach is to get some tools along the lines of what I suggest in the OP. Make it easy to develop despite having strict upper bounds, and more people will be willing to add them to their Project.toml files.

If this gets prioritized and developed quickly, the existing bounds put in place by the script may survive at least partly intact. (We resolved the JuliaImages problems with PRs like JuliaRegistries/General#392, which, if I understand what "x-0" means correctly, just nuke the upper bounds. I could go back and try to fix that after we have tools to help resolve these issues.) The other approach is to gradually increase the demands of the tests on PRs to the General registry, at some point forcing people to declare upper bounds. But there would still be the issue of repairing history, as well as some complaining about raising standards.

@fredrikekre (Member) commented

As usual, the answer may be better or more tooling. I suspect we need a set of tools that allow:

  • a developer to ask "If I tag ImageCore 0.8, what packages will be incompatible with it?"
  • create a new branch of the relevant registry(ies), relax the apparent conflicts, and iteratively test all the packages with the relaxed bounds. If any of them fail, restore the bounds on just those packages.

Is modifying the registry the best idea here? Or is it better to tag patch releases of the dependencies?

@timholy (Member, Author) commented Oct 31, 2019

I think that since we don't support circular dependencies (right??), either is viable. (If you never modified the registry itself, circular dependencies that had patch-level version bounds could force an infinite cycle of upgrades.) To me it seems a bit odd to release a version of a package with no changes compared to the previous one except the declared bounds, but I don't think there's anything particularly rational about that feeling. Indeed, it's seeming less strange than it once did.

@fredrikekre (Member) commented

To me it seems a bit odd to release a version of a package with no changes compared to the previous one except the declared bounds

I'm not sure I agree. Relaxing the bounds can be seen as a bugfix IMO, e.g. JuliaWeb/GitHub.jl#149. I kinda like that we don't modify the registry, since modifying it would mean that the project file for that release no longer matches what's in the registry.

@timholy (Member, Author) commented Oct 31, 2019

I'm 100% fine with that perspective.

@DilumAluthge (Member) commented

https://github.com/bcbi/CompatHelper.jl doesn’t solve all of these issues, but it’s a start.
