Do not require .cabal for all packages in stack.yaml when running stack runghc #1722

Closed
harendra-kumar opened this issue Feb 1, 2016 · 8 comments

Comments

@harendra-kumar
Collaborator

I saw this script in the pinchot package. It moves stack.yaml out of the way before invoking a script with stack runghc.

When we use runghc on a file inside a project dir, do we need settings from stack.yaml, or can we just avoid reading it?

This would allow us to freely invoke scripts inside the project dir. For example, the pinchot package generates its .cabal file using a script. When the script is run via stack runghc, it finds a stack.yaml but no .cabal file, so it fails while trying to process the stack.yaml.

@mgsloan
Contributor

mgsloan commented Feb 2, 2016

It is necessary to read stack.yaml for stack runghc, because it determines the package databases, PATH, etc. that are used with it. However, there isn't any reason for stack runghc to demand that all listed packages have cabal files. It seems to me that the right solution is to only do this check (and similar checks, if any) when the package list is actually needed.
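As a rough illustration of why (a sketch only, not Stack's actual code path; the script name and the exact list and order of databases are assumptions), the environment stack runghc hands to a script is roughly what the stack path queries below report, and every one of those values requires resolving stack.yaml:

# Sketch only: approximate the environment stack runghc provides by pulling
# the package databases and PATH out of `stack path`; all of these values
# are derived from stack.yaml.
GHC_PACKAGE_PATH="$(stack path --local-pkg-db):$(stack path --snapshot-pkg-db):$(stack path --global-pkg-db)" \
PATH="$(stack path --bin-path)" \
  runghc Script.hs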

@harendra-kumar harendra-kumar changed the title Do not read stack.yaml for stack runghc? Do not require .cabal for all packages in stack.yaml when running stack runghc Feb 2, 2016
@harendra-kumar
Collaborator Author

Worth doing for use cases like this one?

@mgsloan
Contributor

mgsloan commented Feb 2, 2016

Yeah, it seems reasonable to me! It also means that stack exec in general will work even if cabal files aren't there.

We will have to figure out how to draw the line of when things get validated. I'd say that all validation for the stack.yaml that's purely based on its contents should happen up-front. Validation that requires some filesystem state, such as the presence of cabal files, should happen lazily.

@luigy
Contributor

luigy commented Mar 1, 2016

#!/bin/sh

mv stack.yaml stack-old.yaml
stack --resolver=lts-3.14 runghc genCabal.hs --package cartel --install-ghc > pinchot.cabal
mv stack-old.yaml stack.yaml

In the example above, this happens because --package cartel triggers a Stack.Build.build, which tries to load local packages and errors out even though they are not wanted. I think this is another example where we sometimes need loadSourceMap to return warnings rather than error out, or for us to catch the errors and handle them accordingly. An exec without --package TARGET should not trigger that behavior, though.

In a way, it seems that sometimes we want a way to force stack to use the global config, or a way to tell it to ignore local packages because we are only going to work with the snapshot. Maybe we could have stack path --global-config-location and then:
stack --stack-yaml "$(stack path --global-config-location)" runghc args

> We will have to figure out how to draw the line of when things get validated. I'd say that all validation for the stack.yaml that's purely based on its contents should happen up-front. Validation that requires some filesystem state, such as the presence of cabal files, should happen lazily.

stack exec --package imalocalpkg-0.2.3 echo foo
The fact that targets passed to --package could also be a local package complicates things a bit. So we need to read the cabal files to even know a package's name or version, right? Since these are not explicitly mentioned in the config.

We can probably take a simpler, optimistic route of returning warnings from loadSourceMap (modifying accordingly the code that handles the loading of local packages, like getLocalPackageViews) and proceed to trying to create a plan, making it clear that it took this route. Maybe have a flag if people want the pedantic behavior of everything being available.

@mgsloan
Contributor

mgsloan commented Mar 2, 2016

@luigy Yep, this is indeed related to our discussion on the PR for checking extra-deps #1858 (comment). We should always warn about such things, but allow things to work if they don't actually require any of the missing packages.

> In a way, it seems that sometimes we want a way to force stack to use the global config, or a way to tell it to ignore local packages because we are only going to work with the snapshot.

It's inadvisable to use the global config for build scripts. Something like --global-project could be useful for other things, though. That might be preferable to cd'ing elsewhere or using --stack-yaml ~/.stack/global-project/stack.yaml. I haven't thought about it very hard, but it's possible that --no-project would be useful, somewhat related to the potential --standalone flag.

I'd rather not ignore local packages, since they may be needed for generating cabal files. Or, in the case of #1521, installing extra packages to the DB.

> The fact that targets passed to --package could also be a local package complicates things a bit. So we need to read the cabal files to even know a package's name or version, right? Since these are not explicitly mentioned in the config.

For knowing which packages are present, it's sufficient to just check the cabal file name, since stack enforces that it match the package name.
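A small sketch of that check (hypothetical script, not Stack's code): listing the .cabal files in each package directory from stack.yaml and taking their basenames is enough to learn which local packages are actually present, without parsing anything.

#!/bin/sh
# Sketch: print the package name for every local package dir that contains a
# .cabal file, relying on the rule that the file is named after the package.
for dir in "$@"; do
  for cabal in "$dir"/*.cabal; do
    [ -e "$cabal" ] || continue   # no .cabal file yet: skip (or warn) instead of failing
    basename "$cabal" .cabal
  done
done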

> We can probably take a simpler, optimistic route of returning warnings from loadSourceMap (modifying accordingly the code that handles the loading of local packages, like getLocalPackageViews) and proceed to trying to create a plan, making it clear that it took this route. Maybe have a flag if people want the pedantic behavior of everything being available.

Sounds good to me! If any packages are missing, then it should definitely error out if the user didn't explicitly specify targets (STLocalAll). In other words, stack build should build all local packages.

@luigy
Contributor

luigy commented Mar 2, 2016

> I haven't thought about it very hard, but it's possible that --no-project would be useful, somewhat related to the potential --standalone flag.

👍 indeed

> I'd rather not ignore local packages, since they may be needed for generating cabal files. Or, in the case of #1521, installing extra packages to the DB.

Yeah, the only case that came to mind was for a script like the one referenced above. The flag would have just behaved as if the user set its packages to [], but it's all just a hack to get a package into the snapshot DB. In any case, --global-project/--no-project/--standalone would be sufficient for that.

> For knowing which packages are present, it's sufficient to just check the cabal file name, since stack enforces that it match the package name.

Yep, that is assuming that the location of the package in the config even exists. So pretty much, when a target is passed via --package, most (all?) bets on validating cabal files lazily are off, since it could be asking for a local package and the only info we have is the location.

> Sounds good to me! If any packages are missing, then it should definitely error out if the user didn't explicitly specify targets (STLocalAll). In other words, stack build should build all local packages.

👍

@mgsloan
Contributor

mgsloan commented Mar 2, 2016

> Yep, that is assuming that the location of the package in the config even exists. So pretty much, when a target is passed via --package, most (all?) bets on validating cabal files lazily are off, since it could be asking for a local package and the only info we have is the location.

True. I think it'd be fine to just give a warning in the case that:

  1. The user requests a --package without a version number and it's found in the snapshot

  2. Some cabal files are missing

This is getting a bit complicated. @harendra-kumar @luigy, thoughts on whether this complexity is worth it? I'm starting to think it isn't. While this is theoretically feasible, I think most people would be surprised by this feature. Projects that need to generate their own cabal files are going to have their own build scripts anyway, and they can use --stack-yaml to point to a stack.yaml file that works before the cabal files are present.
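For example (a sketch only; stack-script.yaml is a hypothetical minimal config naming just a resolver and no local packages), the pinchot script above could then avoid moving stack.yaml entirely:

#!/bin/sh
# Sketch: point stack at a config that is valid before pinchot.cabal exists,
# rather than temporarily renaming the real stack.yaml.
stack --stack-yaml stack-script.yaml runghc genCabal.hs --package cartel --install-ghc > pinchot.cabal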

@mgsloan
Contributor

mgsloan commented Mar 8, 2017

Ah, this actually turned out not to be so complicated. It got fixed as a side effect of fd6d45!

@mgsloan mgsloan closed this as completed Mar 8, 2017