Open files limit? #164
Comments
After searching for the keywords "Too many open files" in the source files, I didn't find anything. Some information links: Changing the system setting does not seem like the best way to solve this problem. Thanks.
Can I know what the output is when you run
Hi yhchiang, the output is core file size (blocks, -c) 0. The default maximum number of open files is 256, which seems too small; I will try a larger setting. My other question: when the program runs for a long time and a lot of new data is put into the db, will the number of un-closed files keep increasing, or will it stay within some range?
@yuchi518 check out options.max_open_files. However, your DB seems to have a lot of files. You might want to increase the file sizes. Check out this function, it might be useful: https://github.com/facebook/rocksdb/blob/master/include/rocksdb/options.h#L109
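A small RocksJava sketch of this suggestion (a sketch only: setTargetFileSizeBase and setMaxBytesForLevelBase do exist in the Java bindings, but the 128 MB figure is an illustrative assumption, not a recommendation). Larger per-file targets spread the same data over fewer SST files, so fewer descriptors are needed:

import org.rocksdb.Options;

Options options = new Options();
// Bigger compaction output files => fewer SST files for the same data.
options.setTargetFileSizeBase(128 * 1024 * 1024); // 128 MB, illustrative
// Scale the level-1 byte budget along with the larger file size.
options.setMaxBytesForLevelBase(10L * 128 * 1024 * 1024);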
Hi @igorcanadi, OK, thanks.
@yuchi518 when you use this
Thank you very much for your suggestion. I changed them in the /etc/sysctl.conf file with the following settings. But as more data is added, these values will have to be raised again, won't they?
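The settings from this comment did not survive in the thread; purely as an illustration (placeholder values, and the right knobs vary by OS), raising the open-file limits on Linux usually involves something like:

# /etc/sysctl.conf — system-wide file handle ceiling (placeholder value)
fs.file-max = 100000

# /etc/security/limits.conf — per-process descriptor limit (placeholder)
*    soft    nofile    65536
*    hard    nofile    65536

Running sysctl -p applies the sysctl change; ulimit -n shows the per-process limit after logging back in.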
@yuchi518
Closing this issue, but feel free to reopen it if it's not resolved. |
Hi @yuchi518, did you solve this problem? I've run into it now; it occurred when I inserted a lot of new data. I set the ulimit, but I think that if I insert more data than I have now, it will occur again.
Hi @haochun, yes, it will always occur when more data is inserted. haochun [email protected] wrote on Monday, November 17, 2014:
FWIW, complete rocks newbie here, using all default settings: I wrote a "small" dataset, no more than a few million records, which takes half a GB on disk. After about five minutes, I hit (actually, what I really get is silent data corruption; during debugging, I tried closing and reopening rocks, and the reopen fails either with too many open files or with a complaint that some sst file cannot be found). The open files seem to stay open forever, even after I close the rocks DB! At this point, the only choice is to just exit the app. (During the failure, the DB blows up from 500MB to 39GB in size.) When I exit the app and restart, it seems like rocks has the pre-data-corruption data in it. So it seems that rocks is ignoring the Note also: the number of
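One way to check the "files stay open after close" observation on Linux (a hypothetical debugging sketch, not a RocksDB facility; /proc/self/fd is Linux-specific and the class name is made up):

import java.io.File;

public class FdCount {
    // Count this process's currently open file descriptors (Linux-only).
    static int openFdCount() {
        String[] fds = new File("/proc/self/fd").list();
        return fds == null ? -1 : fds.length;
    }
}

Logging openFdCount() before and after db.close() shows whether descriptors are actually being released.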
Hi,
I got an error message:
org.rocksdb.RocksDBException: IO error: /Volumes/Backup/pbf/planet-140514/nodes/021806.log: Too many open files
at org.rocksdb.RocksDB.put(Native Method)
at org.rocksdb.RocksDB.put(RocksDB.java:139)
Does max open files have an upper limit?
I open rocksdb with the following options:
options = new Options();
options.setCreateIfMissing(true);
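// -1 disables RocksDB's own open-file cap: every SST file stays open once
// touched, so fd usage grows with the file count until the OS limit is hit.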
options.setMaxOpenFiles(-1);
options.setAllowMmapReads(false);
options.setAllowMmapWrites(false);
options.setMaxWriteBufferNumber(4);
When the error occurs, the db folder contains about 9620 sst files and the folder size is about 19GB.
The version of RocksDB is 3.0.
Does anyone have any suggestions?
Thanks.
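Based on the replies in this thread, a possible adjustment to the options above (a sketch only; 5000 and 64 MB are illustrative values, and the right numbers depend on the workload and on ulimit -n):

options = new Options();
options.setCreateIfMissing(true);
// A positive cap lets RocksDB's table cache close cold SST files;
// choose a value below the process's descriptor limit.
options.setMaxOpenFiles(5000);
options.setAllowMmapReads(false);
options.setAllowMmapWrites(false);
options.setMaxWriteBufferNumber(4);
// Fewer, larger SST files also reduce descriptor pressure.
options.setTargetFileSizeBase(64 * 1024 * 1024);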