Similar to #254, we often see periods where the achieved throughput is much lower than provisioned capacity on DynamoDB. This issue is a bit of an umbrella / brain-dump.
We could use some better tools to investigate this, e.g. logging of keys that suffer multiple retries. The retry histogram tells me 99.9% of writes needed 0 retries, which is nice but not very helpful.
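A minimal sketch of the kind of per-key logging I mean, assuming a hypothetical `retryTracker` wrapped around the write path (none of these names exist in Cortex today):

```go
// Hypothetical sketch: count retries per hash key and log the hot ones,
// so we can see which keys are contended rather than just the aggregate histogram.
package main

import (
	"log"
	"sync"
)

// retryTracker records how many retries each hash key has needed.
type retryTracker struct {
	mu      sync.Mutex
	retries map[string]int
}

func newRetryTracker() *retryTracker {
	return &retryTracker{retries: map[string]int{}}
}

// observe is called once per retried write for the given hash key.
func (t *retryTracker) observe(hashKey string) {
	t.mu.Lock()
	defer t.mu.Unlock()
	t.retries[hashKey]++
	// Log keys that keep getting throttled; the threshold is arbitrary.
	if n := t.retries[hashKey]; n == 3 {
		log.Printf("hash key %q has needed %d retries", hashKey, n)
	}
}

func main() {
	t := newRetryTracker()
	for i := 0; i < 3; i++ {
		t.observe("d17599:container_cpu_usage_seconds_total")
	}
}
```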
I wonder if the "series index" added in #442 is causing trouble - the hash (partition) key is the same for every chunk for a particular user (instance). [EDIT] This index is only used to iterate through timeseries for queries that don't have a metric name. It's unusably slow.
Maybe add some more diversity to the hash key, e.g. append a hex digit derived from the sha. Then you have to do 16 reads instead of 1 to scan the whole row, but each of those 16 reads will go much faster.
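Something like the sketch below; the key layout and function names are illustrative, not the actual schema, and it assumes the chunk sha is already hex-encoded:

```go
// Hypothetical sketch of spreading one logical row across 16 partition keys.
package main

import (
	"fmt"
	"strings"
)

// shardedHashKey appends one hex digit of the chunk's sha to the base hash
// key, so writes for the same user/metric land on 16 partitions instead of 1.
func shardedHashKey(baseKey, chunkSHA string) string {
	return fmt.Sprintf("%s:%c", baseKey, chunkSHA[0])
}

// allShards returns the 16 hash keys a reader must query to scan the whole
// logical row; each read hits a smaller, less contended partition.
func allShards(baseKey string) []string {
	shards := make([]string, 0, 16)
	for _, d := range "0123456789abcdef" {
		shards = append(shards, fmt.Sprintf("%s:%c", baseKey, d))
	}
	return shards
}

func main() {
	fmt.Println(shardedHashKey("d17599:container_cpu_usage_seconds_total", "a1b2c3"))
	fmt.Println(strings.Join(allShards("d17599:container_cpu_usage_seconds_total"), "\n"))
}
```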
It looks like writes from ingester_flush.go to the chunk store do exponential back-off up to the timeout (1 minute), then error out and go back onto the flush queue, whereupon the exponential back-off starts again at 100ms. And when we start again we re-write all the keys, even though just one was outstanding. So it would be better to keep trying for longer.

Related: #724
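Roughly what I have in mind, as a sketch: retry only the keys that are still outstanding and keep the back-off growing across attempts instead of resetting. `writeBatch` here is a stand-in for the real chunk-store write and is assumed to return the keys that were throttled (analogous to DynamoDB's UnprocessedItems); none of this is the current code.

```go
package main

import (
	"fmt"
	"time"
)

// writeBatch pretends to write a batch and returns the keys that were not
// accepted. In this toy version everything after the first key fails.
func writeBatch(keys []string) []string {
	if len(keys) > 1 {
		return keys[1:]
	}
	return nil
}

// flushWithBackoff keeps retrying only the outstanding keys, doubling the
// delay from minBackoff up to maxBackoff, until everything is written or the
// overall deadline passes.
func flushWithBackoff(keys []string, minBackoff, maxBackoff, deadline time.Duration) error {
	backoff := minBackoff
	start := time.Now()
	outstanding := keys
	for len(outstanding) > 0 {
		outstanding = writeBatch(outstanding)
		if len(outstanding) == 0 {
			return nil
		}
		if time.Since(start) > deadline {
			return fmt.Errorf("%d keys still unwritten after %s", len(outstanding), deadline)
		}
		time.Sleep(backoff)
		if backoff *= 2; backoff > maxBackoff {
			backoff = maxBackoff
		}
	}
	return nil
}

func main() {
	err := flushWithBackoff([]string{"a", "b", "c"}, 100*time.Millisecond, 10*time.Second, time.Minute)
	fmt.Println(err)
}
```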
Update: we have moved our biggest environment back from v8 schema to v6, removing the most-contended keys. Throughput is much better.
The next most-contended keys (revealed by #734) include things like `2:d17599:kube_replicaset_status_fully_labeled_replicas` and `3:d17599:container_cpu_usage_seconds_total:image`.