NATS feed caching options #5286
Replies: 2 comments
-
The benefit of this is also that, as you grow, you can build a cluster of servers or even a geo-distributed super-cluster. If you use JetStream to MIRROR the cache bucket to all your regions, then automatically, with no code changes, the nearest cache (in NATS network-latency terms) will be used for reads, since the DIRECT GET API is mirror-aware and reads from the nearest bucket. Using leafnodes, you can even create local-to-the-node caches this way for a really big distributed cache, so you can scale out the reads.
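A read against such a mirrored bucket needs no mirror-specific code; with the nats-py client it is just a KV get. A minimal sketch — the bucket name `feed-cache`, the server URL, and the `feed.<communityId>` key layout are all assumptions, not anything defined in this thread:

```python
def feed_cache_key(community_id: str) -> str:
    # Illustrative key layout: one cached feed blob per community.
    return f"feed.{community_id}"

async def read_cached_feed(community_id: str) -> bytes:
    import nats  # third-party client: pip install nats-py

    nc = await nats.connect("nats://localhost:4222")
    try:
        js = nc.jetstream()
        kv = await js.key_value("feed-cache")
        # kv.get() uses the direct-get API, which is mirror-aware:
        # the nearest mirror answers, with no change to this code.
        # (Raises KeyNotFoundError if the key is absent.)
        entry = await kv.get(feed_cache_key(community_id))
        return entry.value
    finally:
        await nc.close()
```

The application code stays identical whether it talks to one server or a super-cluster of mirrors; only the deployment changes.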
-
I don't have any insights about how best to use NATS for this, but it seems like you could benefit from reading about how existing platforms implement things like newsfeeds. It's a very well-trodden path; just search for topics like "newsfeed system design", "fan-out on read/write", etc. You could probably process each post upon creation and store it under a KV subject, then also create user-specific feeds that contain the post IDs each user is eligible for. Each user then just subscribes to changes in their feed and fetches the new posts. Something like that. I hope this helps.
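The fan-out-on-write idea above can be sketched with a plain dict standing in for the KV buckets — all names here are illustrative, not a NATS API:

```python
from collections import defaultdict

# Toy in-memory stand-ins for two KV buckets: posts and per-user feeds.
posts: dict[str, dict] = {}
feeds: defaultdict[str, list[str]] = defaultdict(list)

# Placeholder membership data; in reality this would come from your DB.
MEMBERS = {"community-1": ["alice", "bob"]}

def eligible_users(community_id: str) -> list[str]:
    return MEMBERS.get(community_id, [])

def publish_post(post_id: str, community_id: str, body: str) -> None:
    # Fan-out on write: store the post once, then push its ID onto every
    # eligible user's feed key (e.g. feed.<userId> in a real KV bucket).
    posts[post_id] = {"community": community_id, "body": body}
    for user in eligible_users(community_id):
        feeds[user].insert(0, post_id)  # newest first

def read_feed(user_id: str, limit: int = 30) -> list[dict]:
    # Read side: resolve the newest IDs back into full posts.
    return [posts[pid] for pid in feeds[user_id][:limit]]
```

Writes cost one update per eligible user, but reads become a cheap lookup per user — the usual fan-out-on-write trade-off.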
-
We want to improve our user/community feed system with some caching and are considering a few options.
This is the structure of our feed table: itemId, userId, postId, eventId, eventType, communityId, timestamp.
And the JSON response we return is assembled from these rows, joined with the post, event, and user data.
We don't want to fetch the feed from the database each time a user requests it, since multiple joins are needed to fetch the post, event, user, etc. So we want to store it in the NATS KV cache somehow and return it to the user directly. In the coming months 15k-20k users will join our platform, so a robust, future-proof solution that is not too hard to implement and maintain would be preferred :)
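The write side of that caching step could look like this with the nats-py client: render the page once (after the expensive joins) and put the JSON into a KV bucket, so reads never touch the database. The bucket name `feed`, the key layout, and the local server URL are assumptions for the sketch:

```python
import json

def feed_key(community_id: str, page: int) -> str:
    # Illustrative KV key: one JSON blob per community per page.
    return f"{community_id}.{page}"

async def cache_feed_page(community_id: str, page: int, items: list[dict]) -> None:
    import nats  # third-party client: pip install nats-py

    nc = await nats.connect("nats://localhost:4222")
    try:
        js = nc.jetstream()
        # Create (or reuse) the bucket; history=1 keeps only the
        # latest value per key, which is all a cache needs.
        kv = await js.create_key_value(bucket="feed", history=1)
        await kv.put(feed_key(community_id, page), json.dumps(items).encode())
    finally:
        await nc.close()
```

The API handler then only does a `kv.get()` on the same key and returns the bytes as-is, so request latency is independent of the join cost.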
So we thought of these options:
Cache the latest 30 items in NATS KV under feed.communityId.0, items 31-60 under feed.communityId.1, and so on. This means updating the relevant feed.communityId.N key on each feed-item edit/delete, etc.
Publish all feed items to feed.communityId.itemId as regular stream messages and fetch the latest 30 with a pull consumer's fetch(). Is this fast enough? This doesn't allow paging/offset, right?
Cache just the feed item IDs in a feed.communityId.feedItems KV key, and store each feed item under feed.communityId.items.itemId. For an API request we first fetch feedItems, then fetch each referenced item from feed.communityId.items.itemId — a looped fetch. Probably not the best solution imo.
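For the first option, the page arithmetic is simple. A small sketch, assuming the feed.communityId.N key scheme above with a page size of 30 (nothing NATS-specific here, just the key math an API handler would need):

```python
PAGE_SIZE = 30

def page_key(community_id: str, item_index: int) -> str:
    # Items 0-29 -> feed.<community>.0, items 30-59 -> feed.<community>.1, ...
    return f"feed.{community_id}.{item_index // PAGE_SIZE}"

def keys_for_offset(community_id: str, offset: int, limit: int) -> list[str]:
    # Which page keys must be fetched to serve ?offset=<offset>&limit=<limit>?
    # A request can straddle a page boundary, so it may span two keys.
    first = offset // PAGE_SIZE
    last = (offset + limit - 1) // PAGE_SIZE
    return [f"feed.{community_id}.{n}" for n in range(first, last + 1)]
```

This also shows the cost of the scheme: an edit or delete in page 0 means rewriting that whole key, and inserting at the top shifts items across every page boundary.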
Which of these is the more go-to solution, or are we missing other options?
If someone knows of any other caching approach that can be built with NATS, please let me know!