Add support for Client-side caching #1281

Closed
mp911de opened this issue Apr 30, 2020 · 6 comments
Labels: type: feature (A new feature)
Milestone: 6.0 RC1

mp911de commented Apr 30, 2020

See https://redis.io/topics/client-side-caching

mp911de added the type: feature label on Apr 30, 2020
mp911de commented May 5, 2020

Formats for the invalidation message are different between RESP2 and RESP3:

RESP2:

```
*3
$7
message
$20
__redis__:invalidate
*1
$3
key
```

RESP3:

```
>2
$10
invalidate
*1
$3
key
```
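
Under RESP2, invalidations arrive as regular Pub/Sub messages, so a dedicated Pub/Sub connection has to receive them while the data connection redirects its invalidation traffic there. A minimal sketch of that wiring, assuming Lettuce exposes the CLIENT ID and CLIENT TRACKING commands as `clientId()`/`clientTracking(...)` and that `TrackingArgs` has a `redirect(...)` option mirroring CLIENT TRACKING's REDIRECT argument:

```java
// Sketch only: RESP2 invalidation wiring over a second, dedicated Pub/Sub connection.
// redisClient: an already-initialized RedisClient instance.
StatefulRedisPubSubConnection<String, String> invalidations = redisClient.connectPubSub();
invalidations.sync().subscribe("__redis__:invalidate");

// Redirect invalidation notifications for keys read on the data connection
// to the Pub/Sub connection (CLIENT TRACKING on REDIRECT <client-id>).
StatefulRedisConnection<String, String> data = redisClient.connect();
Long pubSubClientId = invalidations.sync().clientId();
data.sync().clientTracking(TrackingArgs.Builder.enabled().redirect(pubSubClientId));
```

Under RESP3, the `>2` push frame arrives in-band on the data connection itself, so no redirection is needed.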

mp911de commented May 5, 2020

Depends on #1284.

mp911de added this to the 6.0 RC1 milestone on May 5, 2020
mp911de commented Jun 19, 2020

Example of a server-assisted client-side caching API:

```java
// assumptions for this snippet: a Redis 6 server on localhost and placeholder key/value
RedisClient redisClient = RedisClient.create("redis://localhost");
String key = "key";
String value = "value";

// the client-side cache
Map<String, String> clientCache = new ConcurrentHashMap<>();

// prepare our connection and another party
StatefulRedisConnection<String, String> otherParty = redisClient.connect();
RedisCommands<String, String> commands = otherParty.sync();

StatefulRedisConnection<String, String> connection = redisClient.connect();

// create the cache frontend through which we're going to access the cache
CacheFrontend<String, String> frontend = ClientSideCaching.enable(CacheAccessor.forMap(clientCache), connection,
        TrackingArgs.Builder.enabled());

// make sure the value exists in Redis; the client-side cache is still empty
commands.set(key, value);

// read-through into Redis
String cachedValue = frontend.get(key);
assertThat(cachedValue).isNotNull();

// the client-side cache now holds the same value
assertThat(clientCache).hasSize(1);

// now let the key expire
commands.pexpire(key, 1);

// a while later
Thread.sleep(200);

// the expiration is reflected in the client-side cache
assertThat(clientCache).isEmpty();
```

mp911de commented Jun 19, 2020

Since Lettuce 6 is a RESP3-capable client, we don't need to route caching notifications through a separate connection.
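
A hedged sketch of opting a client into RESP3, using Lettuce's `ClientOptions`/`ProtocolVersion`; a Redis 6 server may already negotiate RESP3 by default, so this is illustrative rather than required:

```java
// Illustrative: explicitly request RESP3 so invalidation push messages
// arrive in-band on the data connection instead of via a Pub/Sub channel.
RedisClient redisClient = RedisClient.create("redis://localhost");
redisClient.setOptions(ClientOptions.builder()
        .protocolVersion(ProtocolVersion.RESP3)
        .build());
```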

mp911de added a commit that referenced this issue Jun 19, 2020
mp911de added a commit that referenced this issue Jul 1, 2020
Move API from test into io.lettuce.core.support.caching
mp911de closed this as completed on Jul 1, 2020
vincentjames501 commented:

This is super awesome, @mp911de! The release notes mention it isn't compatible with Master/Replica or Clustering. Given that a lot of enterprise customers use more than a single standalone setup, are there any plans to support it via Pub/Sub, or any way to extend RESP3 to support Master/Replica or Clustering? Sorry if this is a silly question, but https://redis.io/topics/client-side-caching doesn't mention anything about limitations/gotchas with Master/Replica or Clustering.

mp911de commented Aug 5, 2020

RESP3 is generally available across all operation modes.

With Master/Replica it could work, as there is no sharding and always a single master. For Redis Cluster, we need to ensure that invalidation signals are consumed only from master nodes and that each connection gets activated for client tracking.

There's likely no need to use Pub/Sub, as Redis 6 speaks RESP3 anyway. The Pub/Sub mode was introduced to allow client-side caching for clients that aren't capable of using RESP3.

The Redis docs quite often don't mention limitations or caveats, as they rarely consider the client side. Basically none of the failover cases are covered by the docs. Lettuce, for instance, auto-reconnects when a connection gets disconnected or, in HA arrangements, when a failover happens.

In both cases, the server side loses its tracking state, which means that after a reconnect the keys that were previously fetched are no longer known for tracking on the reconnected connection. While we can enable broadcast mode to still keep track of these keys, it leaves us with the question of whether to evict the near cache entirely to synchronize with the Redis state; evicting the whole near cache is probably not ideal either.
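
For illustration, a sketch of that broadcast-mode variant, assuming `TrackingArgs` exposes `bcast()` and `prefixes(...)` options mirroring the BCAST/PREFIX arguments of CLIENT TRACKING (the `user:` prefix is a made-up example):

```java
// Sketch only: broadcast mode tracks invalidations by key prefix instead of
// per-connection read sets, so it doesn't rely on server-side tracking state
// that would be lost on reconnect.
CacheFrontend<String, String> frontend = ClientSideCaching.enable(
        CacheAccessor.forMap(clientCache), connection,
        TrackingArgs.Builder.enabled().bcast().prefixes("user:"));
```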

Sharding isn't a limitation per se; it just requires propagation of the client tracking settings, and that isn't in place in Lettuce yet. Redis Cluster connections are created lazily, so we need to apply the settings from a previously issued CLIENT TRACKING command (or commands, not sure yet).

In any case, can you please file a new ticket to investigate client-side caching with Redis Cluster and Master/Replica?
