Question: Best practices for automating compaction / defrag #7607
Comments
It depends on your application. If your application is OK with a 1hr duration, do 1hr. The shorter the duration, the better. The limiting factor is the total db size, though: do not let it grow past 2GB if you use a normal cloud machine.
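As an aside, newer etcd releases can do this server-side via the `--auto-compaction-retention` flag (plus `--auto-compaction-mode` in later versions), so a hand-rolled compaction loop is often unnecessary. As a minimal sketch of what such a loop does, with the actual etcd call abstracted behind an injected function (the names `compactOnTicks` and `compactFn` are mine, not etcd API):

```go
package main

import "fmt"

// compactFn abstracts the etcd compaction call; in a real deployment it
// would wrap clientv3.Client.Compact, passing the cluster's latest revision.
type compactFn func(rev int64) error

// compactOnTicks issues one compaction per tick. In production the ticks
// channel would be fed by a time.Ticker firing at the chosen retention
// interval (e.g. hourly); it is injected here so the loop is testable.
func compactOnTicks(ticks <-chan int64, compact compactFn) error {
	for rev := range ticks {
		if err := compact(rev); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	// Simulate three ticks carrying the latest revision at each point in time.
	ticks := make(chan int64, 3)
	for _, rev := range []int64{100, 200, 300} {
		ticks <- rev
	}
	close(ticks)

	var compacted []int64
	_ = compactOnTicks(ticks, func(rev int64) error {
		compacted = append(compacted, rev)
		return nil
	})
	fmt.Println(compacted) // [100 200 300]
}
```

Keeping the tick source and the compaction call injectable makes the cadence (the "1hr vs shorter" trade-off above) a single tunable parameter.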
Defrag is actually about disk fragmentation. Unless you suddenly remove a lot of keys and want to reclaim the disk space immediately, you do not need to defrag.
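A defrag trigger along those lines can be sketched as a simple predicate: only defragment when the backend file is both large enough to matter and mostly free space. The function name and both thresholds below are illustrative assumptions, not etcd defaults; the total/in-use sizes would come from etcd's Status RPC or its /metrics endpoint:

```go
package main

import "fmt"

// shouldDefrag decides whether a defragmentation is worthwhile.
// Defrag blocks writes while it runs, so small files are skipped
// entirely, and larger files are only defragged when most of the
// file is reclaimable free space. Both thresholds are illustrative.
func shouldDefrag(totalBytes, inUseBytes int64) bool {
	const minSize = 100 << 20       // ignore backends under ~100 MiB
	const maxInUseFraction = 0.5    // defrag only if >50% is free space
	if totalBytes < minSize {
		return false
	}
	return float64(inUseBytes)/float64(totalBytes) < maxInUseFraction
}

func main() {
	fmt.Println(shouldDefrag(50<<20, 10<<20))   // false: file still small
	fmt.Println(shouldDefrag(500<<20, 100<<20)) // true: 80% reclaimable
	fmt.Println(shouldDefrag(500<<20, 400<<20)) // false: mostly in use
}
```

This mirrors the advice above: after a mass deletion plus compaction the in-use fraction drops sharply, and that is exactly when the predicate fires.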
Can you share the benchmark result?
How did you determine that the cache size has an impact on the benchmark result? I want to see a graph of benchmark performance over cache size. Also note that the more keys you put into etcd, the less throughput you might get if you run etcd on a slow HDD: the number of B-tree levels grows, so more I/O is needed.
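The "more keys, more B-tree levels, more I/O" point can be made concrete with a back-of-the-envelope calculation. etcd's bbolt backend stores keys in a page-based B+tree, so the real branching factor depends on page and key/value size; the factor of 128 below is only an illustrative assumption:

```go
package main

import "fmt"

// btreeLevels returns the minimum number of levels a B-tree with the
// given branching factor needs to index n keys, i.e. ceil(log_b n).
// Each extra level is roughly one more page read per uncached lookup,
// which is why throughput drops on slow disks as the key count grows.
func btreeLevels(n, branching int64) int {
	levels := 0
	capacity := int64(1)
	for capacity < n {
		capacity *= branching
		levels++
	}
	return levels
}

func main() {
	fmt.Println(btreeLevels(16_384, 128))    // 2: 128^2 keys fit in two levels
	fmt.Println(btreeLevels(1_000_000, 128)) // 3: one more level, one more read
}
```

With the interior pages cached in memory, only the leaf read hits disk, which is why the cache-eviction behavior discussed below matters so much on an HDD.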
I guess I should clarify. etcd seems to quickly gobble up available memory for caching purposes. Once all available memory has been allocated to cache, we start seeing a lot of cache evictions, which is expected. The result of the constant evictions, however, is failcnts and slower response times.

The results below are from a cluster that has been allocated just enough memory to avoid forced cache evictions when running the benchmark. I can work on creating a better comparison, but it may take a while due to other obligations.

The AWS instance types I have been testing on are: The only resource limitation I am enforcing via cgroups is memory. There doesn't seem to be any significant performance difference between the two instance types; both have pretty solid I/O, though.
The avg latency is very high for 1.5k throughput. I am more interested in the comparison; I want to see the bad impact of the cache. The evicted memory should be the old tree nodes. If you do sequential writes, the tree nodes should all be in memory (xMB should be far more than enough), so I am surprised it has an observable impact.
@gyuho, hi. What happens if no defrag is performed? Is the only consequence that disk space is not freed, or does it also influence the performance of the etcd cluster?
Reading through: https://github.com/coreos/etcd/blob/master/Documentation/op-guide/maintenance.md
Looking at compaction:
What metrics should we be using to determine when compaction is necessary or at the very least a good idea?
What metrics should be used to monitor fragmentation? Initially I assumed that monitoring HeapAlloc and HeapInuse would get me close, but it seems I was mistaken.
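For reference, recent etcd releases expose backend-size metrics on the /metrics endpoint that answer this more directly than Go heap stats: total backend file size versus the portion actually in use, whose gap is the space a defrag would reclaim. A sketch, assuming the metric names used by recent etcd releases (`etcd_mvcc_db_total_size_in_bytes`, `etcd_mvcc_db_total_size_in_use_in_bytes`; older releases may name or lack these):

```go
package main

import (
	"bufio"
	"fmt"
	"strconv"
	"strings"
)

// fragRatio scans Prometheus text-format metrics (as served on etcd's
// /metrics endpoint) for the backend total and in-use sizes, and returns
// the fraction of the backend file that is reclaimable free space.
func fragRatio(metrics string) (float64, error) {
	var total, inUse float64
	sc := bufio.NewScanner(strings.NewReader(metrics))
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) != 2 || strings.HasPrefix(fields[0], "#") {
			continue
		}
		v, err := strconv.ParseFloat(fields[1], 64)
		if err != nil {
			continue
		}
		switch fields[0] {
		case "etcd_mvcc_db_total_size_in_bytes":
			total = v
		case "etcd_mvcc_db_total_size_in_use_in_bytes":
			inUse = v
		}
	}
	if total == 0 {
		return 0, fmt.Errorf("etcd_mvcc_db_total_size_in_bytes not found")
	}
	return 1 - inUse/total, nil
}

func main() {
	sample := `# HELP etcd_mvcc_db_total_size_in_bytes Total size of the underlying database.
etcd_mvcc_db_total_size_in_bytes 1.048576e+08
etcd_mvcc_db_total_size_in_use_in_bytes 2.62144e+07
`
	r, _ := fragRatio(sample)
	fmt.Printf("%.2f\n", r) // 0.75: three quarters of the file is reclaimable
}
```

A high ratio shortly after a large deletion plus compaction is the signal that a defrag would actually reclaim disk space.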
Any thoughts or advice?
Thanks in advance.