New stable, fixed-length cache keys #9126
Conversation
Summary: This change standardizes on a new 16-byte cache key format for block cache (incl. compressed and secondary) and persistent cache (but not table cache and row cache). The goal is a really fast cache key with practically ideal stability and uniqueness properties. A fixed key size of 16 bytes should enable future optimizations to the concurrent hash table for block cache, which is a heavy CPU user / bottleneck. TODO: more

Test Plan: TODO
The QPS improvement is clear. Looks like the legacy cache key generation really causes fairly heavy CPU overhead.
// max_offset is based on file size (see WithOffset) and is required here to
// choose an appropriate (sub-)encoding. This constructor never generates an
// "empty" base key.
OffsetableCacheKey(const std::string &db_id, const std::string &db_session_id,
Basically, the OffsetableCacheKey is used per SST file. For different SST files, do we need different OffsetableCacheKey instances?
Yes, one instance for each SST file. So even though there's some hashing and digit parsing for building OffsetableCacheKey, this only happens on table file open. The WithOffset step is really fast.
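A sketch of that per-file pattern (the WithOffset signature and the fourth constructor argument are my reading of the quoted comments, not verbatim from the PR):

```cpp
// Sketch: one OffsetableCacheKey per open table file, then cheap
// per-block derivation. Names follow the discussion above; exact
// signatures in cache_key.h may differ.
void ExampleTableOpen(Cache *cache, const std::string &db_id,
                      const std::string &db_session_id,
                      uint64_t file_number, uint64_t file_size) {
  // Paid once per table file open (hashing, digit parsing):
  OffsetableCacheKey base_key(db_id, db_session_id, file_number,
                              /*max_offset=*/file_size);
  // Paid per block access (fast):
  uint64_t block_offset = 0;  // e.g. from the block's handle
  CacheKey key = base_key.WithOffset(block_offset);
  Cache::Handle *handle = cache->Lookup(key.AsSlice());
  if (handle != nullptr) {
    cache->Release(handle);
  }
}
```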
//
// This class "is a" CacheKey only privately so that it is not misused as
// a ready-to-use CacheKey.
class OffsetableCacheKey : private CacheKey {
If we close and reopen a DB, even if the block cache and secondary cache haven't changed, the cache key for the same block in the same SST file will be different, right?
For newer SST files it is the same, since we use the session ID, file number, and db_id from the file properties. But for blocks from pre-existing SST files, we will not be able to find them in either cache.
It will be stable on newer SST files. See BlockBasedTable::SetupBaseCacheKey.
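Roughly, the idea is something like this (a sketch of my understanding; the property field names and helper are assumptions, not a copy of BlockBasedTable::SetupBaseCacheKey):

```cpp
// Sketch: derive the base cache key from identifiers persisted in the
// SST file's properties, so any process that reopens the file computes
// the same key. Older files lacking these properties get fresh keys
// (hence the one-time cache miss after reopen).
OffsetableCacheKey MakeBaseCacheKey(const TableProperties &props,
                                    uint64_t file_size) {
  return OffsetableCacheKey(props.db_id, props.db_session_id,
                            props.orig_file_number,
                            /*max_offset=*/file_size);
}
```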
// value because of the case of VersionSet::Recover opening some table
// files and later setting the DB ID. So we just rely on uniqueness
// level provided by session ID.
db_id = "unknown";
Should we make "unknown" a static extern? Maybe it will be used in more places.
rep_->compressed_cache_key_prefix_size, handle,
compressed_cache_key);
}
key_data = GetCacheKey(rep_->base_cache_key, handle);
Should we add a unit test to check that the current scheme handles both compressed and regular keys in the desired way?
In general, the PR looks good to me. Just left 2 minor comments.
// CacheKeys are currently used by calling AsSlice() to pass as a key to
// Cache. For performance, the keys are endianness-dependent (though otherwise
// portable). (Persistable cache entries are not intended to cross platforms.)
class CacheKey {
Would it be more efficient to move the two fields into a struct and have the Slice created once (rather than on every call)?
Since we are generally calling a virtual function expecting Slice& (not Slice), your suggestion is probably more efficient. 👍
I was probably thinking ahead to a future when block cache can take CacheKey& as key. :)
Actually, on construction, we usually use the CacheKey once. If you want to use it more than once, you can save the Slice.
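For example (a sketch reusing the hypothetical names from the earlier snippet; Cache call details elided):

```cpp
// Sketch: when a key is consulted more than once, materialize the Slice
// one time instead of re-creating it with each AsSlice() call.
CacheKey key = base_key.WithOffset(block_offset);
Slice key_slice = key.AsSlice();
Cache::Handle *handle = cache->Lookup(key_slice);
if (handle == nullptr) {
  // ... load the block, then insert under the same saved slice:
  // cache->Insert(key_slice, block, charge, deleter, &handle);
}
```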
// max_offset is based on file size (see WithOffset) and is required here to
// choose an appropriate (sub-)encoding. This constructor never generates an
// "empty" base key.
OffsetableCacheKey(const std::string &db_id, const std::string &db_session_id,
For table keys, should they also contain the "compression type" (in addition to the offset)? What happens if block_cache == block_cache_compressed -- wouldn't both the compressed and uncompressed keys map to the same key, and chaos would ensue? I do not think there is anything preventing both caches from being the same.
Good observation. I guess I didn't think of that because it would be such a bad configuration under the current caching infrastructure. (Compressed cache entries would be completely useless and would crowd out uncompressed entries, because uncompressed entries would get any hits first; a compressed entry is guaranteed to be evicted before, or immediately adjacent to, its corresponding uncompressed entry.)
Long term, I would like to see the Cache 'typed' so that the type of entry is sufficient to ensure logical uniqueness.
Actually, this is a pre-existing issue and this change does not make it better or worse. I've posted #9172 for protecting against the problem though.
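For what it's worth, a minimal sketch of that kind of protection (not necessarily how #9172 implements it):

```cpp
// Sketch: reject configurations where the compressed and uncompressed
// block caches are the same object, since their entries would share keys.
Status CheckBlockCacheConfig(const BlockBasedTableOptions &opts) {
  if (opts.block_cache && opts.block_cache == opts.block_cache_compressed) {
    return Status::InvalidArgument(
        "block_cache and block_cache_compressed must be distinct caches");
  }
  return Status::OK();
}
```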
My main concern with this design is the disconnect between cache keys and SST unique IDs (still marked "experimental"). Although both are based on session id, db id, and file number, this cache key "squeezes together" the session id and file number to ensure space in 128 bits for the offset within the file. In extreme cases, this "squeezing together" can harm the uniqueness to where it's not really using the full power of the 128 bits. The cache key encoding is also dependent on file size (in extreme cases), which is not available in TableProperties, which is a convenient precursor for getting SST unique id. Possible solutions:
[*] Assuming no coordination between processes running RocksDB is relied upon. See https://github.com/pdillinger/unique_id
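To illustrate the "squeezing" tension with a simplified model (this is not the actual encoding in cache_key.cc):

```cpp
#include <cstdint>

// Simplified model: of the 128 key bits, whatever the in-file offset
// needs (based on max_offset / file size) is unavailable for file
// identity (session ID + file number). Very large files leave fewer
// bits of uniqueness, which is the extreme case described above.
int IdentityBitsRemaining(uint64_t max_offset) {
  int offset_bits = 0;
  while (offset_bits < 64 && (uint64_t{1} << offset_bits) <= max_offset) {
    ++offset_bits;  // minimum width that can represent max_offset
  }
  return 128 - offset_bits;
}
```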
But I seem to be running up against the limits of clever, and clever carries complexity.
... except it would mean that in order to have a distinctive set of cache key prefixes for a DB, we would need a prefix for each SST file rather than (approximately) one prefix per original session ID (among live files). This is probably not a big deal because cache dumping should be a rare operation, and we could always represent the set with a Bloom(-like) filter if we want extra space & time savings.
I just modified / simplified to a fixed 24-byte cache key, and performance was worse than baseline: 76615.1 ops/sec vs. 79015.6.
@pdillinger has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.
DEFINE_bool(stress_cache_key, false, "If true, run cache key stress test instead");
DEFINE_uint32(sck_files_per_day, 2500000,
              "(-stress_cache_key) Simulated files generated per day");
DEFINE_uint32(sck_duration, 90,
Can you call this something other than "duration"? Duration in db_bench means how long to run the tests whereas here it means how many days to simulate. "sck_days" or something like that might be a better name.
DEFINE_uint32(sck_db_count, 100,
              "(-stress_cache_key) Parallel DBs in operation");
DEFINE_uint32(sck_table_bits, 20,
              "(-stress_cache_key) Log2 number of tracked files");
Please provide a better description. I am not sure what "table_bits" has to do with "number of tracked files".
"(-stress_cache_key) Log2 number of tracked files"); | ||
DEFINE_uint32( | ||
sck_keep_bits, 50, | ||
"(-stress_cache_key) Number of cache key bits to keep"); |
Please provide a better description. I am not sure what "number of bits to keep" is supposed to mean.
  stripped = GetSliceHash64(ck.AsSlice()) >> shift_away;
} else {
  uint32_t a = DecodeFixed32(ck.AsSlice().data()) << shift_away_a;
  uint32_t b = DecodeFixed32(ck.AsSlice().data() + 12) >> shift_away_b;
Would it be better to have a Slice cks = ck.AsSlice()? Wondering about the overhead of creating/using extra Slices.
} else {
  uint32_t a = DecodeFixed32(ck.AsSlice().data()) << shift_away_a;
  uint32_t b = DecodeFixed32(ck.AsSlice().data() + 12) >> shift_away_b;
  stripped = (uint64_t{a} << 32) + b;
Is there a hashing method that combines two values we should be using instead?
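For reference, one generic way to mix two 32-bit words into 64 bits is a MurmurHash3-style finalizer (a sketch; whether mixing is even desirable here depends on whether the test wants to preserve the key's structure):

```cpp
#include <cstdint>

// Sketch: concatenate the halves, then apply MurmurHash3's fmix64
// finalizer to diffuse bits across the whole 64-bit result.
uint64_t MixTwoWords(uint32_t a, uint32_t b) {
  uint64_t x = (uint64_t{a} << 32) | b;
  x ^= x >> 33;
  x *= 0xff51afd7ed558ccdULL;
  x ^= x >> 33;
  x *= 0xc4ceb9fe1a85ec53ULL;
  x ^= x >> 33;
  return x;
}
```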
}
if (stripped == 0) {
  // Unlikely, but we need to exclude tracking this value
  printf("Hit Zero! \n");
Should this be to stderr instead? Wondering if it will get lost if it is hit...
  ResetProcess();
} else {
  // Replace
  table_[pos] = stripped;
Doesn't this just guarantee that two successive hashes to the same pos are not the same? Is it possible that there is an intervening one that would cause something like [A, B, A] where the collision was not detected?
This is a simulation of various files and their cache entries kept for various lifetimes. There is no collision in practice if there's no overlap in lifetime (on disk UNION in cache).
uint32_t shift_away = 64 - FLAGS_sck_keep_bits;
uint32_t shift_away_b = shift_away / 3;
uint32_t shift_away_a = shift_away - shift_away_b;
Please add a comment here to explain the math. Not sure what the shift_away / 3 is for.
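One plausible commented version of those lines (my reading of the math, not the author's):

```cpp
// We sample 64 bits of the 128-bit key: the first 32 bits (a) and the
// last 32 bits (b). To keep only FLAGS_sck_keep_bits of them, we must
// discard shift_away = 64 - keep_bits. Roughly one third is dropped
// from b (right shift discards its low bits) and the remainder from a
// (left shift discards its high bits) before packing as (a << 32) + b.
uint32_t shift_away = 64 - FLAGS_sck_keep_bits;
uint32_t shift_away_b = shift_away / 3;
uint32_t shift_away_a = shift_away - shift_away_b;
```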
uint32_t report_count = 0;
uint32_t collisions_this_run = 0;
// Round robin through DBs
for (size_t db_i = 0;; ++db_i) {
Is there a reason not to also have this for the conditional/update of file_count?
@mrambacher My plan is to land this to get it going through downstream testing, with the simulation in a somewhat dirty state, and follow up with some improvements to comments, more unit tests, etc. Thanks for the input.
Summary: After facebook#9126

Test Plan: CI
Summary: Follow-up to #9126. Added new unit tests to validate some of the claims of guaranteed uniqueness within certain large bounds. Also cleaned up the cache_bench -stress_cache_key tool with better comments and description.

Pull Request resolved: #9329

Test Plan: no changes to production code

Reviewed By: mrambacher

Differential Revision: D33269328

Pulled By: pdillinger

fbshipit-source-id: 3a2b684a6b2b15f79dc872e563e3d16563be26de
Summary: This change standardizes on a new 16-byte cache key format for
block cache (incl compressed and secondary) and persistent cache (but
not table cache and row cache).
The goal is a really fast cache key with practically ideal stability and
uniqueness properties without external dependencies (e.g. from FileSystem).
A fixed key size of 16 bytes should enable future optimizations to the
concurrent hash table for block cache, which is a heavy CPU user /
bottleneck, but there appears to be measurable performance improvement
even with no changes to LRUCache.
This change replaces a lot of disjointed and ugly code handling cache
keys with calls to a simple, clean new internal API (cache_key.h).
(Preserving the old cache key logic under an option would be very ugly
and likely negate the performance gain of the new approach. Complete
replacement carries some inherent risk, but I think that's acceptable
with sufficient analysis and testing.)
The scheme for encoding new cache keys is complicated but explained
in cache_key.cc.
Also: EndianSwapValue is moved to math.h to be next to other bit
operations. (This explains some new includes of "math.h".) A ReverseBits
operation was added, and unit tests were added to hash_test for both.
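As a sketch of what a bit-reversal like that can look like (the actual implementation in math.h may differ):

```cpp
#include <cstdint>

// Sketch: reverse all 64 bits by swapping progressively larger groups:
// adjacent bits, then 2-bit pairs, nibbles, bytes, 16-bit words, halves.
uint64_t ReverseBitsSketch(uint64_t v) {
  v = ((v & 0x5555555555555555ULL) << 1) | ((v >> 1) & 0x5555555555555555ULL);
  v = ((v & 0x3333333333333333ULL) << 2) | ((v >> 2) & 0x3333333333333333ULL);
  v = ((v & 0x0F0F0F0F0F0F0F0FULL) << 4) | ((v >> 4) & 0x0F0F0F0F0F0F0F0FULL);
  v = ((v & 0x00FF00FF00FF00FFULL) << 8) | ((v >> 8) & 0x00FF00FF00FF00FFULL);
  v = ((v & 0x0000FFFF0000FFFFULL) << 16) | ((v >> 16) & 0x0000FFFF0000FFFFULL);
  v = (v << 32) | (v >> 32);
  return v;
}
```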
Fixes #7405 (presuming a root cause)
Test Plan:
Basic correctness
Several tests needed updates to work with the new functionality, mostly
because we are no longer relying on filesystem for stable cache keys
so table builders & readers need more context info to agree on cache
keys. This functionality is so core, a huge number of existing tests
exercise the cache key functionality.
Performance
Create db with
TEST_TMPDIR=/dev/shm ./db_bench -bloom_bits=10 -benchmarks=fillrandom -num=3000000 -partition_index_and_filters
And test performance with
TEST_TMPDIR=/dev/shm ./db_bench -readonly -use_existing_db -bloom_bits=10 -benchmarks=readrandom -num=3000000 -duration=30 -cache_index_and_filter_blocks -cache_size=250000 -threads=4
using DEBUG_LEVEL=0 and simultaneous before & after runs.
Before ops/sec, avg over 100 runs: 121924
After ops/sec, avg over 100 runs: 125385 (+2.8%)
Collision probability
I have built a tool, ./cache_bench -stress_cache_key, to broadly simulate host-wide cache activity over many months, by making some pessimistic simplifying assumptions. We use a simple table with skewed address assignment and replacement on address collision to simulate files coming & going, with quite a variance (super-Poisson) in ages. Some output with ./cache_bench -stress_cache_key -sck_keep_bits=40: These come from default settings of 2.5M files per day of 32 MB each, and -sck_keep_bits=40 means that to represent a single file, we are only keeping 40 bits of the 128-bit cache key. With a file size of 2**25 contiguous keys (pessimistic), our simulation is about 2**(128-40-25), or about 9 billion billion, times more prone to collision than reality.
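To spell out that arithmetic: 2**(128-40-25) = 2**63 ≈ 9.2 × 10**18, i.e. about 9 billion billion.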
More default assumptions, relatively pessimistic:
every 100 files generated
After enough data, we get a result at the end:
If we believe the (pessimistic) simulation and the mathematical generalization, we would need to run a billion machines all for 97 billion days to expect a cache key collision. To help verify that our generalization ("corrected") is robust, we can make our simulation more precise with -sck_keep_bits=41 and -sck_keep_bits=42, which takes more running time to get enough data: the generalized prediction still holds. With the -sck_randomize option, we can see that we are beating "random" cache keys (except offsets still non-randomized) by a modest amount (roughly 20x less collision prone than random), which should make us reasonably comfortable even in "degenerate" cases. I've run other tests to validate that other conditions behave as expected, never behaving "worse than random" unless we start chopping off structured data.