RFC: Add support for "#! /usr/bin/env cabal" #3843
Comments
Quite plausible. Just needs a clear design.
I'm not sure I see why it shouldn't just auto-install (nix-style) the dependencies. Surely that's basically the purpose of this feature in the first place: to Just Do It(TM)? EDIT: Same for the auto-run bit. (I suppose it doesn't have to "compile" it, but could use the interpreter; though if we're talking small scripts, there isn't much to be gained by not compiling.)
Btw, another thing to consider is that we don't want to have to rerun the expensive solver on each invocation when no new install plan is to be expected (e.g. when the …).
I think there is one good argument against doing it automatically, which is that it can mask performance problems. In the abstract: suppose that for whatever reason you expect all the deps to have already been installed. If you run the script and it goes off and starts building things, a performance invariant has been violated. If the script doesn't error, it's easy not to notice, and now you're paying a nontrivial perf tax to run your script. It's all about expectations. It's unpleasant to run stack and then suddenly a GHC binary starts downloading to your machine. Same thing here: you wanted to run a script, not start a multi-minute build process to get it going.
@hvr, I milestoned this as bottom. When there's a full design OR someone has prototyped it, we can remilestone it. |
Re the perf expectations question, I think this is a case where a flag may make sense. Seems to me the default ought to be to build packages (but not download GHC), with flags to modify both behaviours: i.e. assume that deps are built already and fail if not, or, if in future we support installing big things like compilers, also do that if needed. So just like we have …
When I start hacking a new program, I often start with a single module, and maybe lazily upgrade to a proper full `.cabal`+`cabal.project` project when a single module becomes too inconvenient. Or not at all. Also, a single small `.hs` file is easier to share, move around, or post on blog posts than a project requiring multiple files.

One common issue with one-module `runghc` scripts is that we can't easily associate dependency information with them, as they implicitly reference the global/user package dbs. But in the light of `new-build`, it would be quite convenient to have one-module scripts carry a subset of build-info metadata with them, so that cabal could bring the necessary packages into scope for that script before invoking `runghc`.

Therefore I'd suggest adding support for this workflow to cabal. Here's a simple example of what I envision:
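The concrete example here did not survive extraction. As a hedged sketch of the envisioned workflow, a single-file script might embed its dependency metadata in a comment block that cabal reads before handing the file to `runghc` (the exact `{- cabal: -}` syntax below is illustrative, though it matches the script support cabal later shipped):

```haskell
#!/usr/bin/env cabal
{- cabal:
build-depends: base
-}

-- A self-contained one-module script: the metadata block above tells
-- cabal which packages to bring into scope before running it.
main :: IO ()
main = putStrLn "hello from a single-file cabal script"
```

Marked executable, the script could then be run directly as `./hello.hs`, with cabal resolving `base` and invoking the interpreter (or a cached compiled binary) behind the scenes.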
One design question is whether such a script should auto-install the required dependencies if they're not already cached, or instead abort, telling the user which command to execute in order to populate the cache.
Another is whether cabal should compile the script into an executable, store it in some nix-store-like cache (indexed by a computed hash), and run it.
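As a rough illustration of the nix-store-like indexing idea: the cache key could be a hash over the script's contents (and, in practice, its resolved dependency set), so an unchanged script maps to an already-compiled binary. The `fnv1a` and `cachedExePath` names below are hypothetical, and FNV-1a merely stands in for whatever real content hash cabal would use:

```haskell
import Data.Bits (xor)
import Data.Char (ord)
import Data.Word (Word64)
import Numeric (showHex)

-- FNV-1a over the script text; a stand-in for a real content hash
-- (which would also cover the resolved install plan).
fnv1a :: String -> Word64
fnv1a = foldl step 0xcbf29ce484222325
  where
    step h c = (h `xor` fromIntegral (ord c)) * 0x100000001b3

-- Hypothetical: where the compiled form of a script would live in the cache.
cachedExePath :: FilePath -> String -> FilePath
cachedExePath cacheDir script = cacheDir ++ "/" ++ showHex (fnv1a script) ""

main :: IO ()
main = do
  let src = "main = putStrLn \"hi\""
  -- The same source text always maps to the same cached binary path.
  putStrLn (cachedExePath "~/.cache/cabal/scripts" src)
```

If the file at that path exists, cabal could exec it directly and skip both the solver and the build; otherwise it would build once and populate the cache.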