Open Files limit? #164

Closed
yuchi518 opened this issue Jun 1, 2014 · 13 comments

yuchi518 (Contributor) commented Jun 1, 2014

Hi,

I got an error message:

org.rocksdb.RocksDBException: IO error: /Volumes/Backup/pbf/planet-140514/nodes/021806.log: Too many open files
at org.rocksdb.RocksDB.put(Native Method)
at org.rocksdb.RocksDB.put(RocksDB.java:139)

Is there a limit on the maximum number of open files?

I open RocksDB with the following options:

options = new Options();
options.setCreateIfMissing(true);
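// Note: -1 below keeps every SST file open; RocksDB itself imposes no limit.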
options.setMaxOpenFiles(-1);
options.setAllowMmapReads(false);
options.setAllowMmapWrites(false);
options.setMaxWriteBufferNumber(4);

When the error occurs, the db folder contains about 9620 SST files and the folder size is about 19 GB.
The version of RocksDB is 3.0.

Does anyone have any suggestions?

Thanks.

yuchi518 (Contributor, Author) commented Jun 2, 2014

After searching for the keywords "Too many open files" in the source files, I didn't find anything. It looks like this exception is thrown by the OS X system itself.

Some information links:
http://superuser.com/questions/433746/is-there-a-fix-for-the-too-many-open-files-in-system-error-on-os-x-10-7-1
Many LevelDB users encounter the same issue; I list only two links:
https://github.com/bitcoinjs/bitcoinjs-server/issues/55
https://code.google.com/p/leveldb/issues/detail?id=175

Changing the system setting doesn't seem like the best way to solve this problem.
Do you have any suggestions for forcing the db to close opened files?

Thanks.

yhchiang (Contributor) commented Jun 2, 2014

Could you share the output of running ulimit -a on your OS X command line? If it shows a low maximum number of open files, you can change it with ulimit -n <your_max_open_files>. For instance, ulimit -n 2048 allows you to open 2048 files at a time.

yuchi518 (Contributor, Author) commented Jun 2, 2014

Hi yhchiang

The output is

core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
file size (blocks, -f) unlimited
max locked memory (kbytes, -l) unlimited
max memory size (kbytes, -m) unlimited
open files (-n) 256
pipe size (512 bytes, -p) 1
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 709
virtual memory (kbytes, -v) unlimited

The default maximum number of open files is 256, which seems too small; I will try a larger setting.
Thanks very much for your suggestion.

Another question: when the program runs for a long time and a lot of new data is put into the db, will the number of open files keep increasing, or will it stay within some range?

@igorcanadi (Collaborator)

@yuchi518 check out options.max_open_files. However, your DB seems to have a lot of files, so you might also want to increase the file sizes. Check out this function; it might be useful: https://github.com/facebook/rocksdb/blob/master/include/rocksdb/options.h#L109
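
For reference, a rough RocksJava sketch of this kind of tuning. The setters used (setWriteBufferSize, setTargetFileSizeBase, setTargetFileSizeMultiplier, setMaxOpenFiles) assume a reasonably recent RocksJava release, the values are illustrative only, and this is not necessarily the exact function linked above:

import org.rocksdb.Options;

public class FileSizeTuning {
    static Options tunedOptions() {
        return new Options()
                .setCreateIfMissing(true)
                // Bigger memtables flush into bigger L0 files.
                .setWriteBufferSize(128L * 1024 * 1024)     // 128 MB
                // Larger target SST sizes mean fewer files overall.
                .setTargetFileSizeBase(64L * 1024 * 1024)   // 64 MB at level 1
                .setTargetFileSizeMultiplier(2)             // target doubles per level
                // Bound the table cache instead of relying on -1 (unlimited).
                .setMaxOpenFiles(500);
    }
}

With fewer, larger SST files and a bounded max_open_files, the process should stay under the OS file-descriptor limit without changing system settings.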

yuchi518 (Contributor, Author) commented Jun 2, 2014

Hi @igorcanadi

OK, I will try to study that function.

Thanks

@AaronCsy

@yuchi518 Note that ulimit -n <your_max_open_files> only applies to the current shell. To make the change permanent, edit /etc/security/limits.conf, add the line
* - nofile 65536
and reboot; the new limit will then persist.

@yuchi518 (Contributor, Author)

@AaronCsy

Thanks very much for your suggestion.

I changed the values in the /etc/sysctl.conf file with the following settings:
kern.maxfiles=20480
kern.maxfilesperproc=18000

But as more data is added, these values will have to be raised again, won't they?

@AaronCsy

@yuchi518
Yeah, I think so.

yhchiang (Contributor) commented Aug 1, 2014

Closing this issue, but feel free to reopen it if it's not resolved.

yhchiang closed this as completed Aug 1, 2014

haochun commented Nov 17, 2014

Hi @yuchi518, did you solve this problem? I'm running into it now; it occurred when I inserted a lot of new data. I raised the ulimit, but I think it will happen again if I insert more data than I currently have.

@yuchi518 (Contributor, Author)

Hi @haochun,

Yes, it will always occur as more data is inserted.
Sometimes I think RocksDB is not suitable for big data, unlike other NoSQL databases.


fyrz (Contributor) commented Nov 17, 2014

@haochun @yuchi518 No; you can also set that limit to unlimited, and in addition you can control how many handles RocksDB keeps open using the options.
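
In RocksJava terms, the two choices look roughly like this (a sketch only; -1 relies on the OS file-descriptor limit being high enough for the number of SST files):

import org.rocksdb.Options;

public class OpenFileModes {
    // Keep every SST file open: fastest lookups, but the process fd limit
    // (ulimit -n) must exceed the number of SST files.
    static Options unlimitedHandles() {
        return new Options().setCreateIfMissing(true).setMaxOpenFiles(-1);
    }

    // Cap the table cache: RocksDB closes and reopens SST files as needed,
    // so descriptor usage stays bounded no matter how large the DB grows.
    static Options boundedHandles(int maxOpenFiles) {
        return new Options().setCreateIfMissing(true).setMaxOpenFiles(maxOpenFiles);
    }
}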

linas commented Apr 11, 2021

FWIW, complete RocksDB newbie here. Using all default settings, I wrote a "small" dataset, no more than a few million records, which takes about half a GB on disk. After about five minutes I hit "Too many open files" with about 980 SST files, all of which are open when I count them with lsof -p pid | grep sst | wc.

(Actually, what I really get is silent data corruption. During debugging I tried closing and reopening RocksDB; the reopen fails either with "too many open files" or with a complaint that some SST file cannot be found. The open files seem to stay open forever, even after I close the RocksDB handle! At this point the only choice is to exit the app. During the failure, the DB blows up from 500 MB to 39 GB in size. When I exit the app and restart, RocksDB seems to contain the pre-corruption data. So it appears that RocksDB ignores the getrlimit(RLIMIT_NOFILE) setting (which is 1024 for me), blows right past it, and then silently corrupts data once it stops getting valid file handles from the OS. This is RocksDB version 5.17 on Ubuntu focal 20.04 stable.)

Note also: the number of SST files correlates directly with RAM usage. When I set ulimit -n 4096, the number of SST files shoots up to several thousand and RAM usage shoots up to the maximum RAM installed on the machine. Setting RocksDB options.max_open_files=300 limits the SST files to about 170 and RAM usage to about 20 GB. More info here: #3216 (comment)
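
For anyone hitting the RAM correlation described above, a hedged RocksJava sketch of one common mitigation: cap max_open_files and charge index/filter blocks to a fixed-size block cache, so the per-open-file memory is bounded as well. Class and method names (LRUCache, setBlockCache, setCacheIndexAndFilterBlocks) assume a recent RocksJava, and the sizes are illustrative:

import org.rocksdb.BlockBasedTableConfig;
import org.rocksdb.LRUCache;
import org.rocksdb.Options;
import org.rocksdb.RocksDB;
import org.rocksdb.RocksDBException;

public class BoundedMemoryExample {
    public static void main(String[] args) throws RocksDBException {
        RocksDB.loadLibrary();

        // Index and filter blocks of open SST files normally live outside the
        // block cache; charging them to a fixed-size LRU cache bounds the
        // memory that otherwise grows with the number of open files.
        BlockBasedTableConfig tableConfig = new BlockBasedTableConfig()
                .setBlockCache(new LRUCache(512L * 1024 * 1024))   // 512 MB cache
                .setCacheIndexAndFilterBlocks(true);

        Options options = new Options()
                .setCreateIfMissing(true)
                .setMaxOpenFiles(300)                 // cap open SST handles
                .setTableFormatConfig(tableConfig);

        try (RocksDB db = RocksDB.open(options, "/tmp/rocksdb-bounded")) {
            db.put("key".getBytes(), "value".getBytes());
        }
    }
}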
