Request/Discussion: Cache-based mutations rather than query-based #951
This was actually the first approach we took, see here: #320. I think there were some problems with the API, especially around working with IDs. (Note that this stuff all still works, but I think it's rapidly on the path to being deprecated.) My current idea is that we should have a way of injecting a store reducer, and then people can import any modules they like and use them on the store, the same way you have a reducer for a query. I was thinking it would perhaps be a "global reducer" and it would work in the usual Redux reducer way rather than via a custom JavaScript method API. We could then ship some helpers like […]
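A rough sketch of what such an injectable "global reducer" might look like, treating the normalized store as ordinary Redux state. The `globalReducer` option name and the store key format are assumptions for illustration only, not an existing apollo-client API:

```js
// Hypothetical sketch: `globalReducer` is an invented option name, and the
// `Typename:id` store keys are an assumption about the normalized store shape.
import ApolloClient from 'apollo-client';

function globalReducer(previousStore, action) {
  if (action.type === 'APOLLO_MUTATION_RESULT' && action.operationName === 'deleteTodo') {
    // Drop the deleted entity from the normalized store, Redux-style:
    // return a new store object rather than mutating the old one.
    const deletedKey = `Todo:${action.result.data.deleteTodo.id}`;
    const { [deletedKey]: _removed, ...rest } = previousStore;
    return rest;
  }
  return previousStore;
}

const client = new ApolloClient({
  globalReducer, // invented option, shown only to illustrate the idea
});
```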
I think being able to customize the cache reducer would certainly do the trick, as long as the internal structure of the cache is pretty well documented.
Yes, I think part of the goal is to make the internal cache structure part of the public API, so that people can write extensions like this. It might change a tiny bit in the coming month, but once we hit a 1.0 release it should be stable. It would be great if this global cache reducer also worked with optimistic UI out of the box.
I second @dallonf's concerns. I am rewriting a tiny todo list application for React Native using react-apollo, and this query-based mutation model is already proving cumbersome even with this trivial example.
The problem with the suggested approach is that, while it makes sense in simple cases (i.e. updating simple state of an object, removing an object), in other cases the semantics of what is included in a query response are inherently coupled to what the query actually is and what the semantics of the API are, e.g. running a query for a list of items between certain dates, or a field that returns a different value depending on data state (e.g. field […]).

In this case, the query reducer approach does actually make sense. But maybe having these reducers defined on mutations is backwards? Rather, perhaps these should be defined at the […]
I'm not sure I'm following your objection in your example. As a bonus, if you require the […]

EDIT: Just thought of some more complexities here: A […] In my opinion, this is where smart cache invalidation comes into play. The […]
@dallonf Say the semantics of the field in the API are such that […] will return the […].

On initial run, that query will be correct. Later, you run a mutation: […]
There is no way to know that, if […]. Likewise, what happens if you […]?

As far as I am aware, there is no way for the engine to automatically determine these semantics, which would be required (AFAIK?) in the suggested approach of defining the changes in terms of a "cache updater" rather than a "query result updater". So the question comes to: […]
Presently, the approach with the reducers allows you to handle those semantics at the mutation level (i.e. […]). (Thinking out loud... an alternative could be at the […])
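For concreteness, the mutation-level reducers being discussed look roughly like react-apollo's `updateQueries` option; a sketch with made-up query and mutation names (exact signatures may vary by version):

```js
import { graphql } from 'react-apollo';
import gql from 'graphql-tag';

// Made-up mutation for illustration.
const UPDATE_PRODUCT = gql`
  mutation updateProduct($id: ID!, $cost: Int!) {
    updateProduct(id: $id, cost: $cost) {
      id
      cost
    }
  }
`;

const withUpdateProduct = graphql(UPDATE_PRODUCT, {
  props: ({ mutate }) => ({
    updateProduct: (id, cost) =>
      mutate({
        variables: { id, cost },
        // Keyed by the *query's* operation name, so the mutation has to know
        // every query it might affect, which is exactly the coupling being debated here.
        updateQueries: {
          ProductList: (previousResult, { mutationResult }) => {
            const updated = mutationResult.data.updateProduct;
            return {
              ...previousResult,
              products: previousResult.products.map(p =>
                p.id === updated.id ? { ...p, ...updated } : p
              ),
            };
          },
        },
      }),
  }),
});
```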
I'm still not quite following why it has to be query level rather than cache level. Maybe a code example...

```js
const cacheReducers = {
  Shop: {
    // Sort of a combination of a resolver and reducer
    mostExpensiveProduct: (shop, args, prev, action) => {
      if (action.type === 'APOLLO_MUTATION_RESULT' && action.result.updateProduct) {
        if (!prev.cost || !action.result.updateProduct.cost) {
          // Can't calculate the new result since either the query or mutation doesn't ask for `cost`
          return invalidate();
          // Alternatively, `return prev` to avoid a network request and allow the cache to be stale
        }
        if (prev.cost < action.result.updateProduct.cost) {
          // The updated product is more expensive, update the cache
          return toIdValue(dataIdFromObject(action.result.updateProduct));
        } else {
          return prev;
        }
      } else {
        return prev;
      }
    }
  }
};

const client = new ApolloClient({
  cacheReducers,
  dataIdFromObject,
});
```

This would work, right? Not in every case (and so […]). There are still some open (but imo solvable) problems with this approach, namely around deep values and aliases in queries... like, let's say the `cost` field is actually an object:

```js
const query = gql`
  {
    shop {
      mostExpensiveProduct(currency: USD) {
        id
        name
        cost {
          dollars
          euros
        }
      }
    }
  }`;
```

or takes arguments (and worse, is aliased):

```js
const query = gql`
  {
    shop {
      mostExpensiveProduct(currency: USD) {
        id
        name
        dollarCost: cost(currency: USD)
        euroCost: cost(currency: EURO)
      }
    }
  }`;
```

Even still, the information you need to perform a cache-level update exists in the cache. You'd just need a clean API for accessing it.
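For reference, a cache-level reducer would have this information available because arguments are recorded in the normalized store's field keys (aliases are not). The exact key format below is an approximation, not a guaranteed API:

```js
// Approximate shape of the normalized entries for the aliased query above.
// The `fieldName({"arg":"value"})` key format and the id-reference shape are
// assumptions for illustration.
const normalizedStore = {
  'Product:42': {
    __typename: 'Product',
    id: '42',
    name: 'Fancy Widget',
    'cost({"currency":"USD"})': 1999,
    'cost({"currency":"EURO"})': 1850,
  },
  'Shop:1': {
    __typename: 'Shop',
    id: '1',
    'mostExpensiveProduct({"currency":"USD"})': { type: 'id', id: 'Product:42' },
  },
};
```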
Ah, I see what you're saying. Yeah, that would work. This is essentially the same […]

Another question: How about a situation where a field's […]? i.e. […] where […].

This is all hypothetical, but I'm just trying to understand the semantics of these field resolutions. Can it always be expected that a field on the same object id is consistent across different queries & arguments? Could the value potentially change depending on arguments/context from further up the query? (And thus, the simple object cache by ID is not enough.) How would the cache updaters/reducers know about these query semantics?

I'm wondering if it would then make more sense to put these cache reducers in the queries themselves? My current thinking is that the concept of "how this query's value might change as a result of mutations" makes more sense tied in with the query itself, rather than expecting the mutations (or anything else for that matter) to understand the semantics of various queries. This fits in with the idea that some queries might not want their values to update at all on mutations... (which we can achieve with […])
imo, if you have different values for an object with the same ID, any caching strategy is going to fall apart. But you're definitely making a strong case that queryReducers should continue to exist (are you aware that they already do? The docs seem to actually recommend them over […]). The issue is just that, if they're the only option, they also make a lot of simple things hard (like list additions/deletions) and a few simple things impossible (like coming back to a list view after submitting a "create" form and having the new object already be there). I think a combination of cache-level reducers and query-level reducers (plus a sort of "I have no idea how to update this client side, just refetch"-style escape hatch) should be able to cover every use case nicely.
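The query-level reducers referred to here are, roughly, the `reducer` option on a query, with `refetchQueries` on the mutation as the "just refetch" escape hatch. A sketch with made-up operation names; the exact signature may differ slightly between versions:

```js
import { graphql } from 'react-apollo';
import gql from 'graphql-tag';

const PRODUCT_LIST = gql`
  query ProductList {
    products {
      id
      name
      cost
    }
  }
`;

// The query decides for itself how to react to mutation results.
const withProducts = graphql(PRODUCT_LIST, {
  options: {
    reducer: (previousResult, action) => {
      if (
        action.type === 'APOLLO_MUTATION_RESULT' &&
        action.operationName === 'createProduct'
      ) {
        return {
          ...previousResult,
          products: [...previousResult.products, action.result.data.createProduct],
        };
      }
      return previousResult;
    },
  },
});

// Escape hatch on the mutation side: just refetch the affected queries.
// mutate({ variables, refetchQueries: ['ProductList'] });
```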
Actually, I hadn't seen/used them before. So it seems like at the moment there are multiple different ways to go about this :-) I definitely agree having a standardised API for writing changes back into the cache makes sense. For the […]
There are definitely a lot of edge cases to consider, but I think the logical next step is to add a global reducer, which will make all the things you guys described possible (though not necessarily straightforward at first). The main blocker at the moment is that it will require a major refactor of how we dispatch actions and react to actions on the store. If we don't get it done before the end of the year, I'll make sure it happens in January.
I just ran into this situation. Coming from Relay, where this kind of thing was never an issue, I was pulling my hair out for quite a bit before finding this issue. Has there been any progress? Or could somebody recommend how I could best proceed until then? Btw, thanks for an otherwise really well-designed library! It's much less magical than Relay, and I mean that in the best possible way :)
@skosch what we're doing nowadays is following the pattern that the query itself should know in what situations it should update its data (i.e. what mutations it wants to adjust on). We've made a wrapper around Apollo for queries that allows us to simply define a map of […]. In the case of your list view, you need to define something like […]
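A guess at what such a wrapper could look like: an entirely hypothetical helper (not part of Apollo) that turns a map of mutation names into a query `reducer`:

```js
// Hypothetical helper, not an Apollo API: map mutation operation names to
// functions that compute the next query result from the mutation's data.
function reduceOnMutations(handlers) {
  return (previousResult, action) => {
    if (action.type !== 'APOLLO_MUTATION_RESULT') return previousResult;
    const handler = handlers[action.operationName];
    return handler ? handler(previousResult, action.result.data) : previousResult;
  };
}

// Usage for a list view: keep the cached list in sync with create/delete.
const options = {
  reducer: reduceOnMutations({
    createProduct: (prev, data) => ({
      ...prev,
      products: [...prev.products, data.createProduct],
    }),
    deleteProduct: (prev, data) => ({
      ...prev,
      products: prev.products.filter(p => p.id !== data.deleteProduct.id),
    }),
  }),
};
```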
Thanks for responding!
@armstrjare The issue I've always run into is that in the Create Page / List Page scenario, the List Page query isn't active when the mutation happens (because the Create Page is active instead), so it won't update even if you configure its reducer. Or am I doing something wrong?

@skosch I agree, that (and the reason I gave above) is why I'm pushing for customizing the cache's awareness of mutations rather than individual queries.
It's funny, one of the main reasons I switched from Relay was that Relay often made it impossible for mutations to make fine-grained manual updates to the cache, even when using connections, so you'd always have to re-request the whole list. I had fully assumed Apollo would make this trivial since it's based on Redux. For now I'll gladly take the option of defining a clunky global reducer. Convenient shortcuts can come later.
@dallonf ah, good point. The reducers aren't applied if the query is resolved from the cache. Note that it does apply the reducers correctly if the query is active but you change the query variables and it loads from the cache (i.e. for paging).
Isn't this solved by the new […]?
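Assuming this refers to the `update` option that later versions of apollo-client added to mutations, it does address the create-then-list case by writing straight into the cache, even when the list query isn't currently active. A sketch with made-up document names:

```js
// `CREATE_PRODUCT` and `PRODUCT_LIST` are made-up documents for illustration.
client.mutate({
  mutation: CREATE_PRODUCT,
  variables: { name: 'Fancy Widget', cost: 1999 },
  update: (proxy, { data: { createProduct } }) => {
    // readQuery throws if the list query has never been run, so guard
    // accordingly in real code.
    const data = proxy.readQuery({ query: PRODUCT_LIST });
    proxy.writeQuery({
      query: PRODUCT_LIST,
      data: { ...data, products: [...data.products, createProduct] },
    });
  },
});
```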
I'm currently starting to implement Apollo Client on a project at work, and I notice it all kind of falls apart around mutations since updating results is coupled to specific queries. This seems... all sorts of bad to me, because it means that somebody writing or modifying a query needs to know about all the possible mutations that could affect it, and somebody writing a mutation needs to know about all the queries it could possibly affect.
My personal mental model of how this should work is based on the cache - I think when writing a mutation, I should be able to specify how the result affects the cache, and crucially, if there's no practical way to calculate this client-side, which parts of the cache are invalidated by the mutation. (Queries watching invalidated parts of the cache would automatically re-fetch)
Maybe something like:
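Purely as an illustration of the rough shape being asked for; every name below (`cacheUpdate`, `cache.merge`, `cache.invalidate`) is invented and is not an existing Apollo Client API:

```js
// Invented API, for discussion only: the mutation describes how its result
// maps onto the normalized cache, and which parts it can't recompute.
client.mutate({
  mutation: UPDATE_PRODUCT,
  variables: { id: '42', cost: 2999 },
  cacheUpdate: (cache, result) => {
    // Merge the returned fields into the normalized object...
    cache.merge(`Product:${result.updateProduct.id}`, result.updateProduct);
    // ...and mark anything that can't be recalculated client-side as stale,
    // so any query watching it refetches automatically.
    cache.invalidate('Shop:1', 'mostExpensiveProduct');
  },
});
```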
Just want to (re)start the discussion. I'm guessing there's been a lot of talk and iteration around this in the past!