Error while trying to load 100GB #2

Open
korchix opened this issue Jan 21, 2015 · 3 comments

Comments

@korchix

korchix commented Jan 21, 2015

Hello Arnaud, can you please help me with this?
I tried to load 100 GB (recordcount=1000000000), but when the database "usertable" reached a size of 54.2 GB I got this error:

2015-01-21 04:26:37:989 41194 sec: 16854033 operations; 194.68 current ops/sec; [INSERT AverageLatency(us)=2733959.07]
2015-01-21 04:26:47:990 41204 sec: 16854660 operations; 62.69 current ops/sec; [INSERT AverageLatency(us)=6259953.14]
2015-01-21 04:26:57:991 41214 sec: 16855160 operations; 50 current ops/sec; [INSERT AverageLatency(us)=9590113.89]
2015-01-21 04:27:07:991 41224 sec: 16855199 operations; 3.9 current ops/sec; [INSERT AverageLatency(us)=9259886.05]
couchdbBinding.java.NoNodeReacheableException
at couchdbBinding.java.LoadBalancedConnector.create(LoadBalancedConnector.java:128)
at couchdbBinding.java.CouchdbClient.executeWriteOperation(CouchdbClient.java:121)
at couchdbBinding.java.CouchdbClient.insert(CouchdbClient.java:254)
at com.yahoo.ycsb.DBWrapper.insert(DBWrapper.java:148)
at com.yahoo.ycsb.workloads.CoreWorkload.doInsert(CoreWorkload.java:461)
at com.yahoo.ycsb.ClientThread.run(Client.java:277)

Could you please tell me what I did wrong, or did I forget to change something before running the test?
Thank you in advance for your help.

@arnaudsjs
Owner

I think the configuration is fine. Are you sure that CouchDB is not crashing for some reason (e.g. running out of memory or running out of disk space)? What do the log files tell you? Note that CouchDB is equipped with a watchdog, so if CouchDB crashes it will restart automatically. After the YCSB crash, CouchDB may therefore appear to be running fine, while it actually recovered from a crash a few seconds earlier.
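One quick way to check for this silent-restart scenario is to count startup markers in the CouchDB log; a minimal sketch, assuming a CouchDB 1.x install where the log lives at the Debian default path (`/var/log/couchdb/couch.log`) and each (re)start logs the banner "Apache CouchDB has started" — both path and banner are assumptions, adjust for your system:

```shell
#!/bin/sh
# Count how many times CouchDB (re)started. More than one startup marker
# during the benchmark window suggests the watchdog restarted a crashed
# server behind YCSB's back. Log path below is an assumed default.
LOG="${1:-/var/log/couchdb/couch.log}"
grep -c 'Apache CouchDB has started' "$LOG"
```

It is also worth cross-checking `df -h` (disk space) and `dmesg | grep -i oom` (OOM killer) around the time of the failure.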

@korchix
Author

korchix commented Jan 27, 2015

CouchDB didn't crash; it was working normally. YCSB just stopped inserting data into CouchDB, and now that you mention it, I think that was indeed because of running out of memory.
I will now use a more powerful PC on the local network to run just the YCSB client, and I hope that resolves the problem. But I have a question about the throughput (ops/sec): CouchDB seems far too slow compared to MongoDB. The throughput with CouchDB was at most 2000, while with MongoDB it is almost 9000!
Does CouchDB have any option to make it faster? I googled "sync()" for CouchDB and didn't find anything about it.
Thank you for your help!

@arnaudsjs
Owner

In your etc/couchdb/default.ini file there should be an option "delayed_commits" within the "[couchdb]" section. When it is set to true (the default in CouchDB 1.x), CouchDB confirms a write operation as successful to the client even before the data has been flushed to disk. This may cause data loss in case of a server crash, but it increases performance. If in your setup this option is set to false, you can increase performance by setting it to true and restarting the CouchDB service.
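For reference, a sketch of the relevant section of the configuration file, assuming CouchDB 1.x (put local overrides in local.ini rather than editing default.ini directly):

```ini
[couchdb]
; true  -> acknowledge writes before fsync (faster, risk of data loss on crash)
; false -> full commit on every write (durable, but slower)
delayed_commits = true
```

In CouchDB 1.x the same setting can also be changed at runtime through the `_config` HTTP API, e.g. `curl -X PUT http://127.0.0.1:5984/_config/couchdb/delayed_commits -d '"true"'`.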

However, I benchmarked both CouchDB and MongoDB, and it turns out that MongoDB is indeed the faster database, so your result doesn't seem abnormal.
