
Risk of outdated Pipfile.lock after editing Pipfile. #9

Closed
defnull opened this issue Nov 21, 2016 · 15 comments

Comments

defnull commented Nov 21, 2016

In addition to #8, this is perhaps the most serious usability problem with the current approach: for a developer it is very easy to forget about Pipfile.lock after editing Pipfile, especially because they are not supposed to care about the lock file.

This is not an issue with pure approaches (setup.py or requirements.txt) because there is only one source of truth. Pipfile introduces an intermediate step that translates one truth into another, and different tools look at different files. This just asks for trouble. I can already see the broken builds and "fix: Forgot to build Pipfile" commits waiting to happen.

A very common workflow is to install a dependency manually, check if it works, and then add it to requirements.txt or just call pip freeze again. With Pipfiles, there is no pip freeze (see #8) and you have to invoke a command-line utility after editing the Pipfile to make things happen. Unfortunately, you just manually installed the dependency, so everything works anyway (on your computer) and there is no indication that your Pipfile.lock is out of sync if you forget this step. You are a single git commit -a away from a broken build.

Build or install tools (such as pip) cannot check whether Pipfile and Pipfile.lock are out of sync without executing the Pipfile (which is bad, see #7), and therefore cannot warn the user.

dstufft (Member) commented Nov 21, 2016

I think this would be pretty trivial to detect: just bake a hash of the Pipfile into Pipfile.lock.
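A minimal sketch of such a check, assuming a JSON lock file that stores the digest under an illustrative `_meta.hash.sha256` key (not necessarily the final lock format):

```python
import hashlib
import json


def pipfile_hash(path="Pipfile"):
    """Return a SHA-256 hex digest of the raw Pipfile contents."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()


def lock_is_current(lock_path="Pipfile.lock", pipfile_path="Pipfile"):
    """True if the digest stored in the lock file matches the current Pipfile."""
    with open(lock_path) as f:
        lock = json.load(f)
    # Field names are illustrative, not a proposed lock format.
    stored = lock.get("_meta", {}).get("hash", {}).get("sha256")
    return stored == pipfile_hash(pipfile_path)
```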

defnull (Author) commented Nov 21, 2016

Good idea, actually. I'll keep this issue open until this is implemented ;)

dstufft (Member) commented Nov 21, 2016

This of course runs the risk of an "outdated" false positive for non-semantic changes, like adding a comment or something. If we get to a place where it's possible to safely parse the Pipfile, then you can make that better by taking the hash over the data that was parsed out of the Pipfile rather than over the Pipfile itself. Same idea, just more targeted to changes that actually matter.
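A rough sketch of the more targeted variant, assuming the Pipfile can be parsed as TOML (here via the standard-library `tomllib`, Python 3.11+): the digest is taken over a canonical serialization of the parsed data, so comment and whitespace-only edits no longer change it.

```python
import hashlib
import json
import tomllib  # Python 3.11+; the third-party toml/tomli packages work similarly


def semantic_pipfile_hash(path="Pipfile"):
    """Hash the parsed Pipfile data rather than its raw bytes."""
    with open(path, "rb") as f:
        data = tomllib.load(f)
    # Canonical serialization: sorted keys and fixed separators keep the
    # digest stable regardless of formatting in the source file.
    canonical = json.dumps(data, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()
```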

and800 commented Jan 8, 2017

Also, it may be useful to store the hash of the lock file itself, along with the hash of the Pipfile data.
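A sketch of that extra check, reusing the JSON layout assumed above (the `lock-hash` key is hypothetical): hashing everything except the metadata block lets tools notice when the lock's dependency data has been edited by hand.

```python
import hashlib
import json


def add_lock_self_hash(lock_path="Pipfile.lock"):
    """Store a digest of the lock file's own dependency data under an
    illustrative _meta['lock-hash'] key."""
    with open(lock_path) as f:
        lock = json.load(f)
    # Hash everything except the metadata block, which holds the digest itself.
    data = {key: value for key, value in lock.items() if key != "_meta"}
    canonical = json.dumps(data, sort_keys=True, separators=(",", ":"))
    digest = hashlib.sha256(canonical.encode("utf-8")).hexdigest()
    lock.setdefault("_meta", {})["lock-hash"] = digest
    with open(lock_path, "w") as f:
        json.dump(lock, f, indent=4, sort_keys=True)
```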

jayfk commented Feb 2, 2017

Having a hash in Pipfile.lock will likely make it painful to work with git (if committed).

Example 1:

  • I add flask to my Pipfile and create a Pipfile.lock (works)
  • I create a feature branch and add sqlalchemy (works)
  • I add a new dependency requests to master (works)
  • I merge my sqlalchemy branch (merge conflict, sha mismatch)

Example 2:

  • I'm working on a bigger migration from celery 3 to celery 4 in a feature branch.
  • In the meantime Django issues a new security release and I'm updating my Pipfile and Pipfile.lock
  • celery 4 is finally ready to be merged, I'm getting Pipfile.lock merge conflicts

Example 3:

  • I'm using automated tools to update my dependencies in their own branches, one at a time
  • I'm getting merge conflicts all over the place as soon as more than one update is available

The only way around that would be to tell people to never ever commit their Pipfile.locks, or to remove the hash.

sethmlarson commented Feb 2, 2017

@jayfk Is resolving a one-line merge conflict really that big of a deal nowadays with GitHub's web interface for conflict resolution? Valid point though.

Maybe the hash could be external to the file and not committed? Pipfile.hash? I dunno.

jayfk commented Feb 2, 2017

@SethMichaelLarson the problem is that you need to recalculate the hash every time, since both hashes are incorrect at the time you merge your feature branch.

dstufft (Member) commented Feb 2, 2017

@jayfk It's possible to get around that by baking the contents of the Pipfile itself into the Pipfile.lock instead of a hash; that way you'd end up with a cleaner diff (although it doesn't solve the abstract problem of merge conflicts, which can still happen and require a way to deal with them).

So I guess it comes down to whether taking up extra disk space by duplicating all of the contents of the Pipfile inside the Pipfile.lock, and still occasionally needing a conflict-resolution workflow, is a better solution than using less disk space but pretty much always needing some sort of conflict-resolution workflow.
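To make the trade-off concrete, here is a rough sketch of the embed-the-contents variant (the `source` and `default` keys are illustrative, not a proposed format). Because the lock carries the parsed Pipfile, the staleness check becomes a structural comparison, and a merge conflict exposes readable data instead of an opaque digest.

```python
import json
import tomllib  # Python 3.11+; any TOML parser would do


def write_lock(resolved, pipfile_path="Pipfile", lock_path="Pipfile.lock"):
    """Embed the parsed Pipfile next to the resolved, pinned dependencies."""
    with open(pipfile_path, "rb") as f:
        source = tomllib.load(f)
    with open(lock_path, "w") as f:
        json.dump({"source": source, "default": resolved}, f,
                  indent=4, sort_keys=True)


def lock_is_current(pipfile_path="Pipfile", lock_path="Pipfile.lock"):
    """Staleness check: compare the embedded copy against the current Pipfile."""
    with open(pipfile_path, "rb") as f:
        source = tomllib.load(f)
    with open(lock_path) as f:
        lock = json.load(f)
    return lock.get("source") == source
```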

jayfk commented Feb 2, 2017

It's possible to get around that by baking the contents of the Pipfile itself into the Pipfile.lock instead of a hash; that way you'd end up with a cleaner diff (although it doesn't solve the abstract problem of merge conflicts, which can still happen and require a way to deal with them).

Using this approach, the main benefit is that a human is able to resolve the merge conflict pretty easily.

Using a hash requires you to resolve the merge conflict locally by regenerating the Pipfile.lock through the CLI.

and800 commented Feb 2, 2017

Using this approach, the main benefit is that a human is able to resolve the merge conflict pretty easily.

Using a hash requires you to resolve the merge conflict locally by regenerating the Pipfile.lock through the CLI.

IMO, even though fixing conflicts in Pipfile.lock manually is fast and available from GitHub, it is quite an error-prone approach. It is often much easier to resolve conflicts in the main file and regenerate the lock file than to resolve much larger chunks of diff in the lock file.

The only way around that would be to tell people to never ever commit their Pipfile.locks, or to remove the hash.

Maybe the hash could be external to the file and not committed? Pipfile.hash? I dunno.

These files must be in VCS, otherwise they are useless, aren't they?

merwok commented Jun 28, 2017

I can already see the broken builds and "fix: Forgot to build Pipfile" commits waiting to happen.

Isn’t this sort of thing caught by running tox before pushing?

dedsm commented Jul 1, 2017

I don't see this as a problem at all; my workflow is similar to what I think Pipfile wants to achieve.
I have a set of unpinned requirements files and a fully pinned requirements.txt generated with a Makefile. The generated requirements.txt is the "production" version. If you modify any of the requirements files, it's because you added something new or removed something, and that's where @merwok's comment comes in: you set up a tox environment that runs the tests installing only your requirements.txt packages (and the ones needed exclusively for testing), so your tests will fail and you won't be able to commit.

and800 commented Jul 2, 2017

  1. You don't always have 100% coverage, so, theoretically, these errors can pass unnoticed. But tox will always be able to detect a hash mismatch, because the package manager will fail.
  2. If your project is quite large, you don't want to run the whole test suite on your machine; instead you switch to builds per pull request or even nightly builds.

kennethreitz (Contributor) commented:

Pipenv auto-locks after you modify the Pipfile.

kennethreitz (Contributor) commented:

This has been fine, in practice.
