Too many open files on add with daemon #3792
I'm also getting a bunch (133 in a row last time) of these errors. It seemed to be caused by loading some new hash into my browser from outside (not already in my node), but I cannot reproduce it.

The flatfs warning is not a problem on its own (it still works), but in this case it causes the add process to fail.

Let's go ahead and raise the default fd limit to 2048 for now; the number of nodes on the network is getting larger. Longer-term solutions will include adding QUIC support and implementing connection-closing and connection-limiting strategies.

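Until that default lands, the same effect can be approximated by hand; a minimal sketch of raising the soft fd limit for the current shell before launching the daemon (the 2048 value comes from the proposal above, everything else is illustrative):

```shell
# Raise this shell's soft file-descriptor limit toward the proposed default.
# The soft limit can only be raised up to the hard limit (see `ulimit -Hn`).
ulimit -Sn 2048

# Confirm the new soft limit; any child process (e.g. the daemon) inherits it.
ulimit -Sn
```

Note this only affects processes started from that shell session; a systemd unit or init script needs its own LimitNOFILE setting.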
Running the daemon with

This should be temporarily resolved by #3828. We're working on several different solutions that will all help with this:

I would keep it.

Also stumbling into this right now with a CI server I wrote that writes reports to IPFS.
Unfortunately "--offline" is not an option as far as I can see; is there another workaround?
EDIT: just saw 0.4.8 is now out and I am on 0.4.7; I will try whether the problem vanishes with 0.4.8.
EDIT#2: Damn, it also happens with 0.4.8:

@ligi when trying 0.4.8, does the daemon output the message saying it's raising the file descriptor limit to 2048?

Yes, here is the full output:
```
Initializing daemon...
Adjusting current ulimit to 2048...
Successfully raised file descriptor limit to 2048.
Swarm listening on /ip4/127.0.0.1/tcp/4001
Swarm listening on /ip4/172.17.0.1/tcp/4001
Swarm listening on /ip4/192.168.5.42/tcp/4001
Swarm listening on /ip4/82.119.11.152/tcp/4001
Swarm listening on /ip6/2a02:2450:102a:19d:1819:fa1c:abf1:586a/tcp/4001
Swarm listening on /ip6/2a02:2450:102a:19d:b963:bd27:4c80:529f/tcp/4001
Swarm listening on /ip6/2a02:2450:102a:19d:f2de:f1ff:fe9a:1365/tcp/4001
Swarm listening on /ip6/::1/tcp/4001
API server listening on /ip4/127.0.0.1/tcp/5001
Gateway (readonly) server listening on /ip4/127.0.0.1/tcp/8080
Daemon is ready
15:03:57.094 ERROR commands/h: err: open
/home/kontinuum/.ipfs/blocks/LP/put-915734068: too many open files
handler.go:288
15:04:03.748 ERROR mdns: mdns lookup error: failed to bind to any
multicast udp port mdns.go:135
^C
Received interrupt signal, shutting down...
(Hit ctrl-c again to force-shutdown the daemon.)
kontinuum@ligi-tp ~> ipfs version
ipfs version 0.4.8
```
--
friendly greetings,
ligi
http://ligi.de

Getting the same error here on OS X 10.11. Client:
Daemon:

Got this as well today. I was following the tutorial here. It is probably related to the number of files added; it happened while I was running the IPNS example here.

Gave it a shot; connection closing should help. I'm using this script to monitor daemon FD usage:
export DPID=26024; watch -n0 'printf "sockets: %s\nleveldb: %s\nflatfs: %s\n" $(ls /proc/${DPID}/fd/ -l | grep "socket:" | wc -l) $(ls /proc/${DPID}/fd/ -l | grep "\\/datastore\\/" | wc -l) $(ls /proc/${DPID}/fd/ -l | grep "\\/blocks\\/" | wc -l)'
where DPID is the daemon PID. The output looks like this:
When adding a large file (

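The counting in the script above is just enumeration of /proc/&lt;pid&gt;/fd; a stripped-down version that totals all descriptors (using this shell's own PID purely so the snippet is self-contained; point DPID at the daemon's PID in practice, Linux/procfs only):

```shell
# Total open file descriptors for a process, via procfs (Linux).
DPID=$$    # stand-in PID (the current shell); use the ipfs daemon's PID instead
ls /proc/${DPID}/fd | wc -l
```

Watching this number climb during an `ipfs add` makes it easy to see whether sockets or block-store files are the ones eating the limit.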
Got this as well. Version information:
Description: Trying to add a big (70 MB) file to ipfs makes it crash. Here is the output of the command:
On the daemon side:
Some more info:

As explained above, the problem is caused by creating way too many new connections when a big file is added. A workaround is to run the daemon with the IPFS_FD_MAX environment variable set to 4k.

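For reference, the workaround looks like this as a command line (a sketch, assuming `ipfs` is on PATH; 4096 spelled out for the "4k" above):

```shell
# Tell go-ipfs the fd limit it may raise itself to at startup, then run the daemon
IPFS_FD_MAX=4096 ipfs daemon
```

Setting the variable inline like this scopes it to the one daemon invocation instead of the whole shell session.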
Seems to have been fixed by #4102; I haven't run into it since the fix was merged.

Whenever I try to start a private network I get a ulimit error:
I've increased the ulimit via IPFS_FD_MAX and tried a fresh install with IPFS_PATH declared, but it didn't work. When I remove swarm.key (https://github.com/Kubuxu/go-ipfs-swarm-key-gen) I get no error.

I don't see any ulimit errors there. It looks like go-ipfs is prematurely hitting the end of a file (probably your swarm key) when starting. Let's open another issue; this should have a better error message.

Version: 0.4.7
Just running:
dd bs=128M if=/dev/urandom count=1 | pv -a | ipfs add
causes a "too many open files" error in 80% of cases.
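The data-generation half of that pipeline can be sanity-checked without ipfs at all; a sketch that writes the same 128 MB of random data to a temp file and prints its byte count (GNU coreutils assumed):

```shell
# Produce 128 MB of random data (as in the reproduction) and report its size.
tmp=$(mktemp)
dd if=/dev/urandom of="$tmp" bs=1M count=128 2>/dev/null
stat -c %s "$tmp"    # 128 * 1048576 = 134217728 bytes
rm -f "$tmp"
```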