
New stable, fixed-length cache keys #9126

Closed
wants to merge 23 commits

Conversation

pdillinger
Contributor

@pdillinger pdillinger commented Nov 4, 2021

Summary: This change standardizes on a new 16-byte cache key format for
block cache (incl compressed and secondary) and persistent cache (but
not table cache and row cache).

The goal is a really fast cache key with practically ideal stability and
uniqueness properties without external dependencies (e.g. from FileSystem).
A fixed key size of 16 bytes should enable future optimizations to the
concurrent hash table for block cache, which is a heavy CPU user /
bottleneck, but there appears to be measurable performance improvement
even with no changes to LRUCache.

This change replaces a lot of disjointed and ugly code handling cache
keys with calls to a simple, clean new internal API (cache_key.h).
(Preserving the old cache key logic under an option would be very ugly
and likely negate the performance gain of the new approach. Complete
replacement carries some inherent risk, but I think that's acceptable
with sufficient analysis and testing.)

The scheme for encoding new cache keys is complicated but explained
in cache_key.cc.
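
For orientation, here is a minimal sketch of the intended usage pattern (hedged: parameter names and order are assumptions based on comments in this PR, not the exact cache_key.h signature):

```cpp
// Sketch only: one OffsetableCacheKey per table file, built at file open;
// WithOffset() is then cheap and called once per block.
#include "cache/cache_key.h"  // assumed header path

void LookupBlock(Cache* block_cache, const std::string& db_id,
                 const std::string& db_session_id, uint64_t file_number,
                 uint64_t max_offset,  // based on file size (see WithOffset)
                 uint64_t block_offset) {
  OffsetableCacheKey base(db_id, db_session_id, file_number, max_offset);
  CacheKey key = base.WithOffset(block_offset);  // fixed 16-byte key
  Cache::Handle* handle = block_cache->Lookup(key.AsSlice());
  if (handle != nullptr) {
    // ... use cached block, then:
    block_cache->Release(handle);
  }
}
```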

Also: EndianSwapValue is moved to math.h to sit next to other bit
operations (which explains some new `#include "math.h"` lines). A
ReverseBits operation is added, with unit tests for both in hash_test.
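
For reference, a self-contained sketch of what these two bit operations do (illustrative only; the actual implementations in math.h may differ):

```cpp
#include <cstdint>

// Reverse the byte order of a 64-bit value (GCC/Clang builtin assumed).
inline uint64_t EndianSwapValue(uint64_t v) { return __builtin_bswap64(v); }

// Reverse all 64 bits: reverse the bytes, then the bits within each byte
// by swapping nibbles, bit pairs, and adjacent bits.
inline uint64_t ReverseBits(uint64_t v) {
  v = EndianSwapValue(v);
  v = ((v & 0x0F0F0F0F0F0F0F0FULL) << 4) | ((v >> 4) & 0x0F0F0F0F0F0F0F0FULL);
  v = ((v & 0x3333333333333333ULL) << 2) | ((v >> 2) & 0x3333333333333333ULL);
  v = ((v & 0x5555555555555555ULL) << 1) | ((v >> 1) & 0x5555555555555555ULL);
  return v;
}
```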

Fixes #7405 (presuming a root cause)

Test Plan:

Basic correctness

Several tests needed updates to work with the new functionality, mostly
because we are no longer relying on filesystem for stable cache keys
so table builders & readers need more context info to agree on cache
keys. This functionality is so core that a huge number of existing
tests exercise it.

Performance

Create db with
TEST_TMPDIR=/dev/shm ./db_bench -bloom_bits=10 -benchmarks=fillrandom -num=3000000 -partition_index_and_filters
And test performance with
TEST_TMPDIR=/dev/shm ./db_bench -readonly -use_existing_db -bloom_bits=10 -benchmarks=readrandom -num=3000000 -duration=30 -cache_index_and_filter_blocks -cache_size=250000 -threads=4
using DEBUG_LEVEL=0 and simultaneous before & after runs.
Before ops/sec, avg over 100 runs: 121924
After ops/sec, avg over 100 runs: 125385 (+2.8%)

Collision probability

I have built a tool, ./cache_bench -stress_cache_key, to broadly simulate host-wide cache activity
over many months, by making some pessimistic simplifying assumptions:

  • Every generated file has a cache entry for every byte offset in the file (contiguous range of cache keys)
  • All of every file is cached for its entire lifetime

We use a simple table with skewed address assignment and replacement on address collision
to simulate files coming & going, with quite a variance (super-Poisson) in ages. Some output
with ./cache_bench -stress_cache_key -sck_keep_bits=40:

Total cache or DBs size: 32TiB  Writing 925.926 MiB/s or 76.2939TiB/day
Multiply by 9.22337e+18 to correct for simulation losses (but still assume whole file cached)

These come from default settings of 2.5M files per day of 32 MB each, and
-sck_keep_bits=40 means that to represent a single file, we are only keeping 40 bits of
the 128-bit cache key. With a file size of 2**25 contiguous keys (pessimistic), our simulation
is about 2**(128-40-25) = 2**63, or roughly 9 billion billion, times more prone to collision than reality.
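
Spelling out that correction factor and checking it against the output above:

```latex
2^{128-40-25} = 2^{63} \approx 9.22337 \times 10^{18}
```

which matches the "Multiply by 9.22337e+18" line, and multiplying the simulated days-between-collisions below by this factor gives the "corrected" figures (e.g. 10.5882 × 9.22337e18 ≈ 9.76592e19).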

More default assumptions, relatively pessimistic:

  • 100 DBs in same process (doesn't matter much)
  • Re-open DB in same process (new session ID related to old session ID) on average
    every 100 files generated
  • Restart process (all new session IDs unrelated to old) 24 times per day

After enough data, we get a result at the end:

(keep 40 bits)  17 collisions after 2 x 90 days, est 10.5882 days between (9.76592e+19 corrected)

If we believe the (pessimistic) simulation and the mathematical generalization, we would need to run a billion machines all for 97 billion days to expect a cache key collision. To help verify that our generalization ("corrected") is robust, we can make our simulation more precise with -sck_keep_bits=41 and 42, which takes more running time to get enough data:

(keep 41 bits)  16 collisions after 4 x 90 days, est 22.5 days between (1.03763e+20 corrected)
(keep 42 bits)  19 collisions after 10 x 90 days, est 47.3684 days between (1.09224e+20 corrected) 

The generalized prediction still holds. With the -sck_randomize option, we can see that we are beating "random" cache keys (except offsets still non-randomized) by a modest amount (roughly 20x less collision prone than random), which should make us reasonably comfortable even in "degenerate" cases:

197 collisions after 1 x 90 days, est 0.456853 days between (4.21372e+18 corrected)

I've run other tests to validate other conditions behave as expected, never behaving "worse than random" unless we start chopping off structured data.
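
For readers who want the shape of the simulation without reading cache_bench, here is a heavily simplified, hedged sketch of the loop described above; names, constants, and the skew scheme are placeholders rather than the real -stress_cache_key code:

```cpp
#include <cstdint>
#include <cstdio>
#include <random>
#include <vector>

// splitmix64: a simple stand-in mixer for generating per-file keys.
static uint64_t Mix(uint64_t x) {
  x += 0x9E3779B97F4A7C15ULL;
  x = (x ^ (x >> 30)) * 0xBF58476D1CE4E5B9ULL;
  x = (x ^ (x >> 27)) * 0x94D049BB133111EBULL;
  return x ^ (x >> 31);
}

int main() {
  const int kTableBits = 20;  // like -sck_table_bits
  const int kKeepBits = 40;   // like -sck_keep_bits
  std::vector<uint64_t> table(size_t{1} << kTableBits, 0);
  std::mt19937_64 rng(12345);
  uint64_t collisions = 0;
  for (uint64_t file = 0; file < 100000000; ++file) {
    // Stand-in for the file's truncated ("stripped") base cache key.
    uint64_t stripped = Mix(file) >> (64 - kKeepBits);
    if (stripped == 0) continue;  // zero marks an empty slot; exclude it
    // Skewed slot assignment (AND of two uniform values) gives entries
    // highly variable lifetimes before they are replaced.
    size_t pos = (rng() & rng()) & ((size_t{1} << kTableBits) - 1);
    if (table[pos] == stripped) {
      ++collisions;  // truncated keys collided while both were "live"
    } else {
      table[pos] = stripped;  // new file replaces the slot's old file
    }
  }
  std::printf("collisions: %llu\n", (unsigned long long)collisions);
  return 0;
}
```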

@pdillinger pdillinger changed the title from "New stable, fixed-length cache keys" to "[RFC] New stable, fixed-length cache keys" Nov 5, 2021
@pdillinger pdillinger marked this pull request as ready for review November 5, 2021 17:42
@zhichao-cao
Contributor

The QPS improvement is clear. It looks like the legacy cache key generation really causes fairly heavy CPU overhead.

// max_offset is based on file size (see WithOffset) and is required here to
// choose an appropriate (sub-)encoding. This constructor never generates an
// "empty" base key.
OffsetableCacheKey(const std::string &db_id, const std::string &db_session_id,
Contributor

Basically, the OffsetableCacheKey is used per SST file. For different SST files, do we need different OffsetableCacheKey instances?

Contributor Author

@pdillinger pdillinger Nov 9, 2021

Yes, one instance for each SST file. So even though there's some hashing and digit parsing for building OffsetableCacheKey, this only happens on table file open. The WithOffset step is really fast.

//
// This class "is a" CacheKey only privately so that it is not misused as
// a ready-to-use CacheKey.
class OffsetableCacheKey : private CacheKey {
Contributor

If we close and reopen a DB, even if the block cache and secondary cache are unchanged, the cache key for the same block in the same SST file will be different, right?

Contributor

For newer SST files it is the same, since we use the session ID, file number, and db_id from the file properties. But for blocks from pre-existing SST files, we will not be able to find them in either cache.

Contributor Author

It will be stable on newer SST files. See BlockBasedTable::SetupBaseCacheKey.

// value because of the case of VersionSet::Recover opening some table
// files and later setting the DB ID. So we just rely on uniqueness
// level provided by session ID.
db_id = "unknown";
Contributor

Should we make "unknown" as a static extern? Maybe it will be used in more places

rep_->compressed_cache_key_prefix_size, handle,
compressed_cache_key);
}
key_data = GetCacheKey(rep_->base_cache_key, handle);
Contributor

Should we add a unit test to check that the current scheme handles both compressed and regular keys in the desired way?

Contributor

@zhichao-cao zhichao-cao left a comment

In general, the PR looks good to me. Just left 2 minor comments.

// CacheKeys are currently used by calling AsSlice() to pass as a key to
// Cache. For performance, the keys are endianness-dependent (though otherwise
// portable). (Persistable cache entries are not intended to cross platforms.)
class CacheKey {
Contributor

Would this be more efficient to move the two fields in a struct and have the Slice be created once (rather than on every call)?

Contributor Author

Since we are generally calling a virtual function expecting Slice& (not Slice), your suggestion is probably more efficient. 👍

I was probably thinking ahead to a future when block cache can take CacheKey& as key. :)

Contributor Author

Actually, on construction, we usually use the CacheKey once. If you want to use it more than once, you can save the Slice.
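
In other words (illustrative pattern; base_key, offset, and cache are assumed names):

```cpp
CacheKey key = base_key.WithOffset(offset);
Slice key_slice = key.AsSlice();  // materialize the Slice once
cache->Lookup(key_slice);
// ... reuse key_slice for any further Insert/Erase calls instead of
// calling AsSlice() repeatedly
```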

// max_offset is based on file size (see WithOffset) and is required here to
// choose an appropriate (sub-)encoding. This constructor never generates an
// "empty" base key.
OffsetableCacheKey(const std::string &db_id, const std::string &db_session_id,
Contributor

For table keys, should they also contain the "compression type" (in addition to the offset)? What happens if the block_cache == block_cache_compressed -- wouldn't both the compressed and uncompressed keys map to the same key and chaos would ensue? I do not think there is anything preventing both types of cache from being the same.

Contributor Author

Good observation. I guess I didn't think of that because it would be such a bad configuration under the current caching infrastructure. (Compressed cache entries would be completely useless and would crowd out uncompressed entries, because uncompressed entries would get any hits first; a compressed entry is guaranteed to be ejected before, or immediately adjacent to, its corresponding uncompressed entry.)

Long term, I would like to see the Cache 'typed' so that the type of entry is sufficient to ensure logical uniqueness.

Contributor Author

Actually, this is a pre-existing issue and this change does not make it better or worse. I've posted #9172 for protecting against the problem though.

@pdillinger
Contributor Author

My main concern with this design is the disconnect between cache keys and SST unique IDs (still marked "experimental"). Although both are based on session id, db id, and file number, this cache key "squeezes together" the session id and file number to ensure space in 128 bits for the offset within the file. In extreme cases, this "squeezing together" can harm the uniqueness to where it's not really using the full power of the 128 bits. The cache key encoding is also dependent on file size (in extreme cases), which is not available in TableProperties, which is a convenient precursor for getting SST unique id.

Possible solutions:

  • A more clever encoding of the data that works well as both unique ID and as base cache key.
  • Instead of awkwardly combining session id and file number, use a SemiStructuredUniqueIdGen to assign 128-bit unique ids to generated files. Such a value would be ideal[*] both as file unique ID and as base cache key. But this would be YET ANOTHER piece of metadata associated with an SST file. (If we store this unique ID in the Footer, it could also be used for base context checksum in format_version=6 and context-aware block checksums #9058, when cut down to 32 bits.)

[*] Assuming no coordination between processes running RocksDB is relied upon. See https://github.com/pdillinger/unique_id

@pdillinger
Contributor Author

A more clever encoding of the data that works well as both unique ID and as base cache key.

But I seem to be running up against the limits of clever, and clever carries complexity.

@pdillinger
Contributor Author

use a SemiStructuredUniqueIdGen to assign 128-bit unique ids to generated files. Such a value would be ideal

... except it would mean that in order to have a distinctive set of cache key prefixes for a DB, we would need a prefix for each SST file rather than (approximately) one prefix per original session ID (among live files). This is probably not a big deal because cache dumping should be a rare operation, and we could always represent the set with a Bloom(-like) filter if we want extra space & time savings.

@pdillinger
Contributor Author

I just modified / simplified to a fixed 24-byte cache key, and performance was worse than baseline: 76615.1 ops/sec vs. 79015.6

@pdillinger pdillinger changed the title from "[RFC] New stable, fixed-length cache keys" to "New stable, fixed-length cache keys" Dec 16, 2021
@facebook-github-bot
Contributor

@pdillinger has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.

DEFINE_bool(stress_cache_key, false, "If true, run cache key stress test instead");
DEFINE_uint32(sck_files_per_day, 2500000,
"(-stress_cache_key) Simulated files generated per day");
DEFINE_uint32(sck_duration, 90,
Contributor

Can you call this something other than "duration"? Duration in db_bench means how long to run the tests whereas here it means how many days to simulate. "sck_days" or something like that might be a better name.

DEFINE_uint32(sck_db_count, 100,
"(-stress_cache_key) Parallel DBs in operation");
DEFINE_uint32(sck_table_bits, 20,
"(-stress_cache_key) Log2 number of tracked files");
Contributor

Please provide a better description. I am not sure what "table_bits" has to do with "number of tracked files".

"(-stress_cache_key) Log2 number of tracked files");
DEFINE_uint32(
sck_keep_bits, 50,
"(-stress_cache_key) Number of cache key bits to keep");
Contributor

Please provide a better description. I am not sure what "number of bits to keep" is supposed to mean.

Comment on lines 679 to 682
stripped = GetSliceHash64(ck.AsSlice()) >> shift_away;
} else {
uint32_t a = DecodeFixed32(ck.AsSlice().data()) << shift_away_a;
uint32_t b = DecodeFixed32(ck.AsSlice().data() + 12) >> shift_away_b;
Contributor

Would it be better to have a Slice cks = ck.AsSlice()? Wondering the overhead of creating/using extra slices.

} else {
uint32_t a = DecodeFixed32(ck.AsSlice().data()) << shift_away_a;
uint32_t b = DecodeFixed32(ck.AsSlice().data() + 12) >> shift_away_b;
stripped = (uint64_t{a} << 32) + b;
Contributor

Is there a hashing method that combines two values we should be using instead?
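
For comparison, one generic way to combine two words (hedged: not what this test uses, and the test deliberately avoids mixing so that the kept bits stay structured):

```cpp
#include <cstdint>

// Pack the two 32-bit words, then apply a 64-bit finalizer (splitmix64
// style) so every output bit depends on every input bit. Illustrative only.
inline uint64_t CombineHash(uint32_t a, uint32_t b) {
  uint64_t x = (uint64_t{a} << 32) | b;
  x = (x ^ (x >> 30)) * 0xBF58476D1CE4E5B9ULL;
  x = (x ^ (x >> 27)) * 0x94D049BB133111EBULL;
  return x ^ (x >> 31);
}
```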

}
if (stripped == 0) {
// Unlikely, but we need to exclude tracking this value
printf("Hit Zero! \n");
Contributor

Should this be to stderr instead? Wondering if it will get lost if it is hit...

ResetProcess();
} else {
// Replace
table_[pos] = stripped;
Contributor

Doesn't this just guarantee that two successive hashes to the same pos are not the same? Is it possible that there is an intervening one that would cause something like [A, B, A] where the collision was not detected?

Contributor Author

This is a simulation of various files and their cache entries kept for various lifetimes. There is no collision in practice if there's no overlap in lifetime (on disk UNION in cache).

Comment on lines +642 to +644
uint32_t shift_away = 64 - FLAGS_sck_keep_bits;
uint32_t shift_away_b = shift_away / 3;
uint32_t shift_away_a = shift_away - shift_away_b;
Contributor

Please add a comment here to explain the math. Not sure what the shift_away / 3 is for.
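
One possible reading of the arithmetic, inferred from the quoted fragment (an assumption, not an authoritative explanation):

```cpp
// shift_away = 64 - keep_bits is the total number of bits to discard from
// the two 32-bit words sampled out of the 128-bit key, split roughly 2:1:
//   bits kept from word a: 32 - shift_away_a
//   bits kept from word b: 32 - shift_away_b
// Since shift_away_a + shift_away_b == shift_away, the truncated
// ("stripped") key retains exactly (32 - shift_away_a) + (32 - shift_away_b)
// = 64 - shift_away = keep_bits bits.
```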

uint32_t report_count = 0;
uint32_t collisions_this_run = 0;
// Round robin through DBs
for (size_t db_i = 0;; ++db_i) {
Contributor

Is there a reason to not also have this conditional/update for file_count?

@pdillinger
Contributor Author

@mrambacher My plan is to land this to get it going through downstream testing, with the simulation in a somewhat dirty state, and follow up with some improvements to comments, more unit tests, etc. Thanks for the input.

pdillinger added a commit to pdillinger/rocksdb that referenced this pull request Dec 17, 2021
facebook-github-bot pushed a commit that referenced this pull request Dec 17, 2021
Summary:
After #9126

Pull Request resolved: #9309

Test Plan: CI

Reviewed By: ajkr

Differential Revision: D33188902

Pulled By: pdillinger

fbshipit-source-id: 54bf34e33c2b30b1b8dc2a0229e84c194321b606
facebook-github-bot pushed a commit that referenced this pull request Feb 4, 2022
Summary:
Follow-up to #9126

Added new unit tests to validate some of the claims of guaranteed uniqueness
within certain large bounds.

Also cleaned up the cache_bench -stress_cache_key tool with better comments
and description.

Pull Request resolved: #9329

Test Plan: no changes to production code

Reviewed By: mrambacher

Differential Revision: D33269328

Pulled By: pdillinger

fbshipit-source-id: 3a2b684a6b2b15f79dc872e563e3d16563be26de
Successfully merging this pull request may close these issues:

SST files overlap at a level and keys are not ordered in a SST file