Thank you very much for your reply about the performance issue. However, during my current testing I have run into some eviction-related issues that I would like to discuss.
The value of `database.hard.max_keys` effectively acts as the maximum number of buckets in the hash table: the buckets are statically allocated up front (there is currently no hash-table resize), so once all buckets are in use no free bucket can be found, insertion fails, and the number of keys is effectively capped.

However, `memory.limits.hard.max_memory_usage` does not seem to truly limit the maximum memory usage. Eviction only starts once usage exceeds `memory.limits.soft.max_memory_usage`, and if the insertion rate is higher than the eviction rate at that point, memory usage keeps growing past `memory.limits.hard.max_memory_usage`. This matches what I observe when running my code: memory usage keeps increasing and exceeds the hard limit.

In a production environment, `memory.limits.hard.max_memory_usage` is more useful than `database.hard.max_keys`, because the size of the values users insert varies greatly: users often have no idea what to set `database.hard.max_keys` to, but they do know how much memory the cache should be allowed to use, for example 80% of system memory. I would therefore like `memory.limits.hard.max_memory_usage` to actually cap the memory used for data: once actual usage exceeds this value, insertion should return an error, and new data should only be accepted again after eviction has brought usage back below `memory.limits.hard.max_memory_usage`. A minimal sketch of the check I have in mind follows.
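To make the request concrete, here is a minimal sketch in C of the insert-path check I have in mind. All the names here (`memory_limits_t`, `memory_limits_can_insert`, the eviction hook) are hypothetical and are not cachegrand's actual API; the sketch only illustrates the desired behavior: accept inserts below the soft limit, accept while evicting between the soft and hard limits, and reject once the hard limit would be exceeded.

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>

// Hypothetical names and layout; cachegrand's real internals differ.
typedef struct {
    atomic_size_t memory_used;  // current data memory usage, in bytes
    size_t soft_limit;          // memory.limits.soft.max_memory_usage
    size_t hard_limit;          // memory.limits.hard.max_memory_usage
} memory_limits_t;

// Called on every insert before allocating space for the new entry.
// Returns false (insert rejected) whenever the hard limit would be
// exceeded; the caller should surface an error to the client, and the
// client can retry once eviction has brought usage back below the limit.
static bool memory_limits_can_insert(memory_limits_t *limits, size_t entry_size) {
    size_t used = atomic_load_explicit(&limits->memory_used, memory_order_relaxed);

    // Reject outright once the hard limit would be exceeded.
    if (used + entry_size > limits->hard_limit) {
        return false;
    }

    // Between the soft and hard limits the insert is still accepted,
    // but eviction should already be running (or be kicked off here).
    if (used > limits->soft_limit) {
        // trigger_eviction_if_not_running();  // hypothetical hook
    }

    return true;
}
```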
The `cachegrand_db_keys_count` metric exposed to Prometheus shows a significant discrepancy compared to the number of keys returned by the `keys *` command in the Redis client. What could be causing this?
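I do not know the cause, but one guess, purely an assumption about the internals on my part, is that the metric might count hashtable entries that are logically deleted or expired but not yet reclaimed, while `keys *` only enumerates live keys. A toy C illustration of how such a counter drifts from the live key count:

```c
#include <stdio.h>
#include <stddef.h>

// Purely illustrative; whether cachegrand's counter works like this
// is an assumption on my part.
typedef enum { SLOT_EMPTY, SLOT_LIVE, SLOT_DELETED } slot_state_t;

typedef struct {
    slot_state_t state;
} slot_t;

// A metric that counts every non-empty slot (live + tombstoned)...
static size_t metric_keys_count(const slot_t *slots, size_t n) {
    size_t count = 0;
    for (size_t i = 0; i < n; i++)
        if (slots[i].state != SLOT_EMPTY)
            count++;
    return count;
}

// ...will overcount compared to what `keys *` can actually enumerate.
static size_t keys_star_count(const slot_t *slots, size_t n) {
    size_t count = 0;
    for (size_t i = 0; i < n; i++)
        if (slots[i].state == SLOT_LIVE)
            count++;
    return count;
}

int main(void) {
    slot_t slots[5] = {
        {SLOT_LIVE}, {SLOT_DELETED}, {SLOT_LIVE}, {SLOT_DELETED}, {SLOT_EMPTY}
    };
    // Prints "metric: 4, keys *: 2"
    printf("metric: %zu, keys *: %zu\n",
           metric_keys_count(slots, 5), keys_star_count(slots, 5));
    return 0;
}
```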
In the main branch, when I run my stress-testing program, the cachegrand log starts reporting errors once evictions kick in, and this happens every time. I hope this error will not be carried over into version 0.5:
[2025-01-14T11:04:42Z][ERROR ][worker][id: 12][cpu: 12][transaction_rwspinlock] Possible transactional spinlock stuck detected for thread 540153 in src/data_structures/hashtable/mcmp/hashtable_op_get.c:200
[2025-01-14T11:04:43Z][ERROR ][worker][id: 11][cpu: 11][transaction_rwspinlock] Possible transactional spinlock stuck detected for thread 540152 in src/data_structures/hashtable/mcmp/hashtable_op_delete.c:278
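For context on how I read these messages: below is a minimal sketch in C of a spin-wait loop with a stuck detector, which is the shape I assume is behind the `transaction_rwspinlock` warning (the names, lock layout, and threshold are all mine, not cachegrand's). If eviction holds such a lock for long stretches while deleting entries, that could explain why the warning only appears under eviction pressure, but that is speculation on my part.

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

#define SPIN_STUCK_THRESHOLD (1u << 26)  // arbitrary iteration count, assumption

// Hypothetical lock layout; cachegrand's transaction_rwspinlock differs.
// Usage: spinlock_t lock = { ATOMIC_FLAG_INIT };
typedef struct {
    atomic_flag locked;
} spinlock_t;

// Spins until the lock is acquired; after too many failed attempts it
// logs a "possible stuck" warning once and keeps spinning, matching the
// shape of the messages in the log above.
static void spinlock_lock(spinlock_t *lock, const char *file, int line) {
    unsigned spins = 0;
    bool reported = false;

    while (atomic_flag_test_and_set_explicit(&lock->locked, memory_order_acquire)) {
        if (++spins >= SPIN_STUCK_THRESHOLD && !reported) {
            fprintf(stderr,
                    "[transaction_rwspinlock] Possible spinlock stuck detected in %s:%d\n",
                    file, line);
            reported = true;  // warn only once; the lock may still be released later
        }
    }
}

static void spinlock_unlock(spinlock_t *lock) {
    atomic_flag_clear_explicit(&lock->locked, memory_order_release);
}
```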