runInteractiveProcess: pipe: resource exhausted error. #1979
Comments
The OSX watcher (hfsevents) uses a pipe for communicating from C to Haskell, and the Linux watcher also consumes fds per watch. Not sure if hfsevents can be designed to do away with that pipe; that would be a better solution. If somehow we can get rid of both the fds altogether, that will be awesome. On our end we can issue a warning to the user when needed: we know how many dirs we are watching, and we can find out the current fd limit.
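As a rough sketch of that warning idea (the helper name, the fds-per-watch figure, and the headroom constant are my assumptions, not anything in stack or hfsevents), one could compare the planned number of watched directories against the process's open-file soft limit via System.Posix.Resource from the unix package:

import System.IO (hPutStrLn, stderr)
import System.Posix.Resource

-- Warn if the planned number of watched directories is likely to exhaust
-- the open-file soft limit. fdsPerWatch is an assumption (e.g. 2 for a
-- pipe-based watcher); headroom leaves room for the fds the build itself needs.
warnIfTooManyWatches :: Int -> Int -> IO ()
warnIfTooManyWatches numDirs fdsPerWatch = do
  limits <- getResourceLimit ResourceOpenFiles
  case softLimit limits of
    ResourceLimit n
      | fromIntegral (numDirs * fdsPerWatch + headroom) > n ->
          hPutStrLn stderr $
            "Warning: watching " ++ show numDirs
              ++ " directories may exceed the open-file limit ("
              ++ show n ++ "); consider raising ulimit -n."
    _ -> return ()  -- unlimited, unknown, or plenty of room
  where
    headroom = 64 :: Int

Calling something like warnIfTooManyWatches (length dirsToWatch) 2 before setting up the watcher would at least make the failure mode legible to the user.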
What's the reason for so many watches? You only need one to monitor a whole tree with hfsevents. Does fsnotify add one for each subdirectory, or are there really that many roots?
We watch all the parent dirs of files involved in the build. I am not sure if there is a specific motivation for watching the immediate dirs of the files; @mgsloan or @snoyberg can perhaps answer that. My guess is that it's a natural first design that was not optimized because there was no need. Maybe we can improve it by finding common ancestors of dirs, as long as the ancestor is within the package's top-level dir. That way we may have to filter more spurious events, but that may not be a problem. Though in my opinion we should still fix the hfsevents package, if we can, so that it is robust and scalable irrespective of the application design.
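A minimal sketch of that idea, assuming canonicalized paths and a hypothetical collapseToPackageRoot helper (not an existing stack function): every directory under the package's top-level dir collapses to the root itself, trading more spurious events to filter for far fewer watches.

import Data.List (isPrefixOf, nub)
import System.FilePath (splitDirectories)

-- Replace every watch dir that lives under the package root with the root
-- itself (their common ancestor), and leave dirs outside the package alone.
collapseToPackageRoot :: FilePath -> [FilePath] -> [FilePath]
collapseToPackageRoot packageRoot dirs = nub (map collapse dirs)
  where
    rootParts = splitDirectories packageRoot
    collapse d
      | rootParts `isPrefixOf` splitDirectories d = packageRoot
      | otherwise = d

-- e.g. collapseToPackageRoot "/proj" ["/proj/src", "/proj/app", "/tmp/x"]
--      == ["/proj", "/tmp/x"]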
Oh, I agree that hfsevents should use fewer fds, with the default limit as low as it is. I was just wondering if there is something else in the Stack approach (or in its dependencies) that is not optimal, something that might lead to a quicker fix here.
Yeah, a change in the stack code will also be useful for the Linux case, where I do not see a way to reduce the fds.
Just wanted to note that I ran into this today while running stack.
@angerman I think that's coming from stack invoking ghc-pkg, and the errors get output due to that invocation. The following program reproduces the resource exhaustion:

{-# LANGUAGE ScopedTypeVariables #-}

import Control.Monad
import Data.ByteString (ByteString)
import Data.Conduit
import Data.Conduit.Process ()
import Data.Streaming.Process

main :: IO ()
main = replicateM_ (1000 * 1000) $
  withCheckedProcess
    (proc "true" [])
    (\ClosedStream (_ :: Source IO ByteString) (_ :: Source IO ByteString) -> return ())

So, I think this is indeed the same issue of having too many file watch fds; you just ran out of the resource in a different spot than usual.
I've just run into this as well. I think it may be related to using a large number of precompiled packages from another snapshot (I see the messages from @angerman's comment as well). In my case, file-watch is not involved at all, so I guess there is a file descriptor leak when stack copies a precompiled package to a new snapshot.
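For illustration only (this is not Stack's actual code, and the leaky function below is deliberately wrong), the general shape of such a leak is spawning a process with pipes and never closing the returned handles; using withCreateProcess, or closing them explicitly, keeps the fd count flat:

import Control.Monad (forM_, void)
import System.IO (hClose)
import System.Process

-- Deliberately leaky: the stdout/stderr pipe handles are never closed,
-- so each iteration permanently consumes two fds.
leaky :: IO ()
leaky = forM_ [1 :: Int .. 100000] $ \_ -> do
  (_, Just _out, Just _err, ph) <-
    createProcess (proc "true" []) { std_out = CreatePipe, std_err = CreatePipe }
  void $ waitForProcess ph

-- Fixed: withCreateProcess closes the created handles when the callback
-- returns (or throws), so the loop runs without exhausting fds.
fixed :: IO ()
fixed = forM_ [1 :: Int .. 100000] $ \_ ->
  withCreateProcess (proc "true" []) { std_out = CreatePipe, std_err = CreatePipe } $
    \_ mOut mErr ph -> do
      mapM_ hClose mOut
      mapM_ hClose mErr
      void $ waitForProcess ph

main :: IO ()
main = fixed

The actual fix (referenced below) landed in sinkProcessStderrStdout; the sketch above only illustrates the general form such a leak can take.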
Referenced commit: Fix FD leak in sinkProcessStderrStdout (fixes #1979)
I am using Stack on Mac El Capitan with resolver lts-5.10. I ran
stack build --file-watch
and it ran a few times before giving these errors:
user error (could not create file system event stream)
and
/usr/local/bin/ghc-pkg: streamingProcess: runInteractiveProcess: pipe: resource exhausted (Too many open files)
after running a few successful sessions. I am not totally sure if the default limit is set too low or if file-watch just does not clean things up as it should. In any case, I decided to fix it by increasing the ulimit, following the steps in this blog:
http://blog.dekstroza.io/ulimit-shenanigans-on-osx-el-capitan/
That got things back to a working state, but I am not totally sure this is the right solution. Thanks for your help.
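As an aside, a process can also raise its own soft limit up to the hard limit at startup, which avoids touching system-wide settings. This is only a sketch of that alternative (raiseOpenFileSoftLimit is a made-up name, and whether stack should do this at all is a separate question):

import System.Posix.Resource

-- Raise this process's open-files soft limit to its hard limit, when both
-- are finite. Leaves everything alone if the limits are unknown or unlimited.
raiseOpenFileSoftLimit :: IO ()
raiseOpenFileSoftLimit = do
  ResourceLimits soft hard <- getResourceLimit ResourceOpenFiles
  case (soft, hard) of
    (ResourceLimit s, ResourceLimit h) | s < h ->
      setResourceLimit ResourceOpenFiles
        (ResourceLimits { softLimit = ResourceLimit h, hardLimit = ResourceLimit h })
    _ -> return ()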