Query batching - a nice feature to have? #5860
Unanswered
dwjohnston asked this question in Ideas
Replies: 1 comment
-
if your API supports batching, this is a good solution based on |
-
Here's the concept.
Say you have a todo item that consists of normalised properties like:
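A minimal sketch of such an item, assuming field names along these lines (the exact names are illustrative, not from the original):

```ts
// Illustrative shape only; the field names are assumptions.
type Todo = {
  id: string;
  title: string;
  assigneeId: string; // e.g. "user-1", resolved via GET /users/user-1
  projectId: string;  // e.g. "project-abc", resolved via GET /projects/project-abc
  tagId: string;      // e.g. "tag-xyz", resolved via GET /tags/tag-xyz
};
```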
And you're going to fetch a list of todos and display them as a table, and you need to denormalise those properties to retrieve the real names, by fetching `/users/user-1`, `/projects/project-abc`, etc.

Now the problem is: say you have 25 items in the list, and each one has a unique assignee, project, and tag, then that's potentially 75 fetch requests you'll make simultaneously.

Of course, we want to try to retrieve this data via TanStack Query, so if it's already cached we don't need to make the fetch request.
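For example, each table row might resolve its own references with per-resource queries; a rough sketch, assuming the `Todo` shape above and a simple `getJson` helper (both illustrative, not from the original):

```ts
import { useQuery } from "@tanstack/react-query";

// Illustrative helper; not part of TanStack Query.
const getJson = (url: string) => fetch(url).then((res) => res.json());

// Called once per row: with 25 rows this can issue up to 25 x 3 requests at
// once, but anything already in the query cache is not fetched again.
function useDenormalisedTodo(todo: Todo) {
  const user = useQuery({
    queryKey: ["users", todo.assigneeId],
    queryFn: () => getJson(`/users/${todo.assigneeId}`),
  });
  const project = useQuery({
    queryKey: ["projects", todo.projectId],
    queryFn: () => getJson(`/projects/${todo.projectId}`),
  });
  const tag = useQuery({
    queryKey: ["tags", todo.tagId],
    queryFn: () => getJson(`/tags/${todo.tagId}`),
  });
  return { user: user.data, project: project.data, tag: tag.data };
}
```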
Now if the API doesn't support batching, then there's not much we can do.
But possibly our API supports batching the retrieval of resources, for example something like `GET /users?userId=user-1,user-2,user-3`.

In this case, the suggestion is that TanStack Query could detect when multiple requests are made for the same query key type, and batch them.
Example usage might look something like the sketch below.
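TanStack Query has no built-in batching option today, so this is a hedged illustration only: it approximates the proposed behaviour in userland with a queryFn that coalesces IDs requested in the same tick into one batched call. `createBatcher`, `getJson`-style fetching, the `User` type, and the `/users?userId=...` endpoint are all assumptions, not library APIs.

```ts
import { useQuery } from "@tanstack/react-query";

// Illustrative request coalescer; not a TanStack Query API. IDs requested in
// the same tick are collected and fetched with a single batched call.
function createBatcher<T extends { id: string }>(
  fetchBatch: (ids: string[]) => Promise<T[]>,
) {
  let pending: Array<{
    id: string;
    resolve: (value: T) => void;
    reject: (reason: unknown) => void;
  }> = [];
  let scheduled = false;

  return (id: string) =>
    new Promise<T>((resolve, reject) => {
      pending.push({ id, resolve, reject });
      if (scheduled) return;
      scheduled = true;
      queueMicrotask(async () => {
        const batch = pending;
        pending = [];
        scheduled = false;
        try {
          const results = await fetchBatch(batch.map((p) => p.id));
          const byId = new Map(results.map((r) => [r.id, r] as const));
          for (const p of batch) p.resolve(byId.get(p.id)!);
        } catch (err) {
          for (const p of batch) p.reject(err);
        }
      });
    });
}

type User = { id: string; name: string };

// Illustrative batched endpoint, matching GET /users?userId=user-1,user-2,...
const fetchUserBatched = createBatcher<User>((ids) =>
  fetch(`/users?userId=${ids.join(",")}`).then((res) => res.json()),
);

// Each row still declares its own query (and cache entry), but simultaneous
// cache misses for ["users", *] collapse into a single HTTP request.
function useUser(userId: string) {
  return useQuery({
    queryKey: ["users", userId],
    queryFn: () => fetchUserBatched(userId),
  });
}
```

Each row keeps its own query key and cache entry, so cache hits still skip the network entirely; only the simultaneous cache misses are collapsed into one request, which is roughly what a built-in batching option would need to do.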
Maybe I'm overthinking things, but has this kind of problem been encountered?