Support Capacity Unit Read/Write Statistics #235
Comments
Requirements:
CU calculation:
Persistence:
stat table design (for reference only):
An alternative design (more compact):
Monthly bill statistics:
Optimization points:
Main features to implement:
stat table design:
stat table data volume estimation:
Storing the data as JSON would be too wasteful; consider using thrift + tcompact + zstd instead.
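A rough sketch of this suggestion (note that a later comment drops compression in favor of per-table aggregation): encode the record with Thrift's compact protocol, then compress the blob with zstd before writing it to the stat table. `cu_stat_record` and its generated header are hypothetical placeholders for whatever schema is finally chosen, and the snippet assumes a recent Apache Thrift build that uses `std::shared_ptr`.

```cpp
#include <memory>
#include <stdexcept>
#include <string>

#include <thrift/protocol/TCompactProtocol.h>
#include <thrift/transport/TBufferTransports.h>
#include <zstd.h>

#include "cu_stat_record_types.h" // hypothetical thrift-generated header

std::string serialize_and_compress(const cu_stat_record &record)
{
    using apache::thrift::protocol::TCompactProtocol;
    using apache::thrift::transport::TMemoryBuffer;

    // 1. thrift + tcompact: encode the struct into a compact binary blob.
    auto buffer = std::make_shared<TMemoryBuffer>();
    TCompactProtocol protocol(buffer);
    record.write(&protocol);
    std::string plain = buffer->getBufferAsString();

    // 2. zstd: compress the blob before persisting it.
    std::string compressed(ZSTD_compressBound(plain.size()), '\0');
    size_t n = ZSTD_compress(&compressed[0], compressed.size(),
                             plain.data(), plain.size(), /*level*/ 3);
    if (ZSTD_isError(n))
        throw std::runtime_error(ZSTD_getErrorName(n));
    compressed.resize(n);
    return compressed;
}
```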
For more accurate data deduplication, perf_counters::take_snapshot() should record the timestamp at which the counter data was last updated, and the perf_counter_info.timestamp obtained from list_snapshot_by_regexp should be that latest update timestamp rather than the current time.
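A simplified, hypothetical illustration of this idea: the snapshot keeps the timestamp of the last value change, and queries surface that timestamp instead of the wall-clock time of the query, so unchanged samples can be deduplicated downstream. The types and fields below are illustrative only and do not mirror the real rDSN perf_counters implementation.

```cpp
#include <cstdint>
#include <map>
#include <string>

struct counter_snapshot
{
    double value = 0.0;
    int64_t updated_ts = 0; // when `value` last changed, not when it was read
};

class perf_counter_table
{
public:
    // Called on each collection round; only bumps the timestamp when the
    // counter value actually changed since the previous snapshot.
    void take_snapshot(const std::string &name, double value, int64_t now_ts)
    {
        auto &snap = _snapshots[name];
        if (snap.updated_ts == 0 || snap.value != value) {
            snap.value = value;
            snap.updated_ts = now_ts;
        }
    }

    // A query (e.g. list_snapshot_by_regexp) should report updated_ts,
    // not the current time.
    const counter_snapshot *find(const std::string &name) const
    {
        auto it = _snapshots.find(name);
        return it == _snapshots.end() ? nullptr : &it->second;
    }

private:
    std::map<std::string, counter_snapshot> _snapshots;
};
```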
Overall the design above looks basically fine; there are just a few points:
However, the format in which the data is stored into Pegasus does not affect the current data collection process much; the two are decoupled. So the data collection part can be implemented first, as soon as possible.
Update: do not use compression. Aggregate the QPS of the same table first to reduce the data size, and only output entries whose QPS is non-zero.
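A minimal sketch of this update, assuming per-partition QPS samples are summed by table name and tables whose aggregated QPS is zero are not emitted at all. The struct and function names are hypothetical, not Pegasus APIs.

```cpp
#include <map>
#include <string>
#include <vector>

struct partition_qps
{
    std::string table_name;
    double read_qps = 0.0;
    double write_qps = 0.0;
};

struct table_qps
{
    double read_qps = 0.0;
    double write_qps = 0.0;
};

// Sum per-partition samples by table, then drop tables whose QPS is 0
// to keep the output small.
std::map<std::string, table_qps>
aggregate_non_zero(const std::vector<partition_qps> &samples)
{
    std::map<std::string, table_qps> totals;
    for (const auto &s : samples) {
        auto &t = totals[s.table_name];
        t.read_qps += s.read_qps;
        t.write_qps += s.write_qps;
    }
    for (auto it = totals.begin(); it != totals.end();) {
        if (it->second.read_qps == 0.0 && it->second.write_qps == 0.0)
            it = totals.erase(it);
        else
            ++it;
    }
    return totals;
}
```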
Metering rules

Read throughput:
- Get (single-row read):
- BatchGet (multi-row read):
- MultiGet (multi-row read):
- SortKeyCount:
- TTL:
- Scan:

Write throughput (the 4KB rounding rule shared by the formulas below is sketched after this list):
- Single-row Set: a. if the write succeeds, the write CU consumed = the total size of the row's HashKey + SortKey + Value, divided by 4KB and rounded up.
- Single-row Del: a. if the write succeeds, the write CU consumed = the total size of the row's HashKey + SortKey, divided by 4KB and rounded up.
- BatchSet/BatchDel: a. the read/write CU consumed is the sum of the CUs consumed by all the single-row operations.
- MultiSet: a. if the write succeeds, the write CU consumed = the total size of all rows' SortKey + Value, divided by 4KB and rounded up.
- MultiDel: a. if the write succeeds, the write CU consumed = the total size of all rows' SortKey, divided by 4KB and rounded up.
- incr: a. if the new value after the operation is returned, the read CU consumed = 1 and the write CU consumed = 1.
- check_and_set: a. if setSucceed is returned, the read CU consumed = 1 and the write CU consumed = the Value size divided by 4KB and rounded up.
- check_and_mutate: a. if setSucceed is returned, the read CU consumed = 1 and the write CU consumed = the total size of SortKey + Value for all rows in the MutationList, divided by 4KB and rounded up.
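A minimal sketch of the rounding rule shared by the write-CU formulas above: CU = ceil(total_bytes / 4KB). The helper names are illustrative, not Pegasus APIs.

```cpp
#include <cstdint>
#include <vector>

constexpr int64_t kBytesPerCU = 4 * 1024; // 4KB per capacity unit

// ceil(total_bytes / 4KB); the formulas above always include at least the
// key bytes, so a successful operation consumes at least 1 CU in practice.
inline int64_t compute_cu(int64_t total_bytes)
{
    return (total_bytes + kBytesPerCU - 1) / kBytesPerCU;
}

// Write CU of a single-row Set = ceil((|HashKey| + |SortKey| + |Value|) / 4KB).
inline int64_t set_write_cu(int64_t hash_key_len, int64_t sort_key_len, int64_t value_len)
{
    return compute_cu(hash_key_len + sort_key_len + value_len);
}

// Write CU of a MultiDel = ceil((sum of all rows' |SortKey|) / 4KB).
inline int64_t multi_del_write_cu(const std::vector<int64_t> &sort_key_lens)
{
    int64_t total = 0;
    for (int64_t len : sort_key_lens)
        total += len;
    return compute_cu(total);
}
```

For example, a single-row Set whose HashKey + SortKey + Value add up to 10 KB consumes ceil(10 / 4) = 3 write CU, while incr always consumes 1 read CU and 1 write CU.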
We already support read/write QPS statistics, but that is not enough for pricing.
Like Aliyun Table Store and AWS DynamoDB, we should support Capacity Unit (CU) read/write statistics.
Comparison: