Best practice when a mutation deletes an object #899
Related StackOverflow question: http://stackoverflow.com/questions/40484900/how-do-i-handle-deletes-in-react-apollo/40505977 |
@yurigenin Right now |
@yurigenin I think at the moment, you'd need to remove them manually, but I may just have missed that functionality when reading through the code |
@yurigenin right now there's no cache expiration. Deleting objects manually can be a bit dangerous, because you need to make sure that no other query is using them. A few people have already asked for this, so here's a discussion around cache expiration: #825 |
It might help to have an example of the updateQueries best practice. Anyone here able to share it, it would be great to have in the documentation, because it is part of any CRUD app :). |
Yeah, I also ran into this and have no idea how to correctly implement this behavior |
I think the easiest and most flexible way is to use the reducer function. At least I got it working quite easily using that. Still maybe a bit too much boilerplate for such a common action, but at least it solves it for me. |
Can someone provide example code of how to use reducer function to remove store data? |
Already posted, but check this link http://stackoverflow.com/questions/40484900/how-do-i-handle-deletes-in-react-apollo?answertab=active#tab-top I also use
Using the following query
This is my
Side question: is there an easier way to find indexes? |
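To make the `updateQueries` approach concrete, here is a minimal sketch. The `Brands` query name, `brands` field, and `deleteBrand` mutation are hypothetical stand-ins, not the original answer's code. On the side question: `Array.prototype.findIndex` finds an index without a manual loop, and `filter` sidesteps index bookkeeping entirely:

```javascript
// Hypothetical operation names: substitute your own query and mutation.
// The map key ("Brands") must match the query's operation name.
function removeDeletedBrand(prev, { mutationResult }) {
  const deletedId = mutationResult.data.deleteBrand.id;
  // filter returns a new array without the deleted item,
  // so no index lookup is needed
  return {
    ...prev,
    brands: prev.brands.filter(b => b.id !== deletedId),
  };
}

// Wired into the mutation (Apollo Client 1.x updateQueries API):
// mutate({
//   variables: { id },
//   updateQueries: { Brands: removeDeletedBrand },
// });
```

If you do need an index (e.g. for a `$splice` with immutability-helper), `prev.brands.findIndex(b => b.id === deletedId)` avoids writing the loop by hand.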
The problem is that this will not delete the brand itself from the store... It only updates the list. |
Here's my version as a reducer. If you use proper ids, you get updates for free, but you need to handle the case of addition and removal from lists (`update` here appears to be immutability-helper's update). const withContentItem = graphql(gql`
[YOUR QUERY]
`, {
options: props => ({
variables: { ... },
reducer: (state, action) => {
// insert
if (
action.type === 'APOLLO_MUTATION_RESULT' /* is it a mutation? */
&& action.operationName === 'mutateElement' /* mutate element? */
&& action.result.data.mutateElement.contentItemId === state.contentItem.id /* operate on right content id? */
&& !state.contentItem.elements.find((el) => el.id === action.result.data.mutateElement.id) /* not already in list */
) {
return update(state, {
contentItem: {
elements: {
$push: [action.result.data.mutateElement] /* add to end of list */
}
}
})
}
// delete
if (
action.type === 'APOLLO_MUTATION_RESULT'
&& action.operationName === 'deleteElement'
&& action.result.data.deleteElement.contentItemId === state.contentItem.id /* operate on right content id? */
) {
return update(state, {
contentItem: {
elements: {
$splice: [[state.contentItem.elements.findIndex(el => el.id === action.result.data.deleteElement.id), 1]]
}
}
})
}
return state
}
}),
}) |
We are missing one big use case: when the frontend cannot determine which query results changed, and how, after a deletion (the same applies to creating new entities). Let's say I queried |
@nosovsh You can invalidate the entire cache, for everything, on a delete. That's what I ended up going with as a solution, though in my application deletes are a rarity anyways (i.e. perhaps once a week for a user at most).

Yes, it would be really nice if there was a way to invalidate a single item using the object ID the store is using to keep track of it, across all possible queries that would be including it. I tried to suss out exactly how the application is keying the cache for objects, and why all the solutions seem to couple it with the query (it must not be simple key/value, or these suggested workarounds would be total overkill), but after about an hour of digging into the code, I eventually just decided to go with the nuke option.

A clear explanation of the store and cache invalidation strategies would be quite welcome. As someone that isn't particularly familiar with Relay, I feel like most of the examples/discussion/documentation on the Apollo internal cache assumes a strong familiarity with Relay's store and how it's plugged into Apollo. For example, it wasn't clear at all to me whether zapping a cached item from one query also zaps it from other queries that would return it, or whether on deletion I need to essentially loop over every possible query that would return it, check if it's in the cache, and if so, zap it for each query. And if zapping it once zaps it everywhere, I'm again confused why I can't just zap it directly using the objectID generated to track it. |
@jamiter problem with |
@nosovsh, I think you mentioned me by accident. |
oops, I meant @jlovison |
(Please tell me if my question would be better asked in SO instead.) I was using Plus, I thought migrating to the new Thanks! |
@renato with the What's the reason you want to delete from the cache and how important is that feature to you? |
@helfer I'm sorry, I'm probably missing something. I don't really need to "delete from the cache" but this is the way that worked so far for me when deleting something from the view. I've a list of items in my React view injected by Apollo. When I execute a mutation that deletes one item how do I remove that specific item from my view? So far I've been deleting it from the cache using the However, the last query I'm working on has a month parameter and I can navigate through months so they become cached by Apollo. I've a mutation that deletes some items that affects multiple months, not only the currently active parameter. The If you don't delete from the cache in these situations, what is the solution? Just invalidate the cache or refetch? |
To synchronise my view model (mobx) with my model store (apollo) |
Based on the blog post Apollo Client’s new imperative store API, here's how I setup delete mutations:
|
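The code block from the comment above was lost when the thread was scraped. Below is a hedged sketch of the pattern that blog post describes (an `update` callback using `proxy.readQuery`/`proxy.writeQuery`), not the author's original snippet. `TodosQuery` and the `todos` field are hypothetical, and `fakeProxy` is a tiny stand-in for Apollo's DataProxy so the shape can be exercised without Apollo itself:

```javascript
// Placeholder for a gql`query Todos { todos { id text } }` document;
// the fake proxy below ignores it, a real proxy would not.
const TodosQuery = 'query Todos { todos { id text } }';

// Returns an update callback suitable for mutate({ update: ... })
function makeDeleteUpdate(deletedId) {
  return (proxy) => {
    const data = proxy.readQuery({ query: TodosQuery });
    proxy.writeQuery({
      query: TodosQuery,
      data: { ...data, todos: data.todos.filter(t => t.id !== deletedId) },
    });
  };
}

// Minimal in-memory stand-in for the store proxy, for illustration only.
function fakeProxy(initial) {
  let store = initial;
  return {
    readQuery: () => store,
    writeQuery: ({ data }) => { store = data; },
    current: () => store, // inspection helper, not part of Apollo's API
  };
}
```

In a real mutation you would pass `update: makeDeleteUpdate(id)` alongside `variables` and (optionally) an `optimisticResponse`.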
I am confused. Most of the sample code I see on this thread seems to deal with running a delete mutation from a single query. What if I actually have multiple queries referencing that object? I thought the whole point of Apollo tracking refs, i.e. Type-X, was to handle such a use case. For example, say a user deletes User-1, and User-1 is referenced by multiple queries, say queryA and queryB. Based on what I see in this thread, it seems it would be the responsibility of the component update to correctly update the store for each of the referencing queries?? I hope I am missing the point here, as this would be a maintenance nightmare! So getting back to the original question: what is the best way to handle this use case, i.e. delete the reference and have Apollo remove the reference from ALL queries tracking that instance? |
@derailed a few thoughts:
|
Hi Chris,

Thank you for the reply! I am at a loss here. I have a bunch of queries falling into this use case, i.e. referencing a user object. I am indeed using the extension, and it seems to report the correct behavior per the implementation: the user object is deleted from the query but still shows on the left-hand side under the store tab, aka `Apollo Client State`, which leads me to believe that the user is still in the store. So as you can see, my update callback currently deletes the user from the current query and not the actual store. Not sure how to actually do this?? As you can see here, 3 other queries reference that user and need to be updated. I haven't yet found a way to deal with that elegantly besides having to refetch to make sure that user is now gone in related queries.

That said, my update scenarios work, i.e. on an update mutation the user changes are correctly propagated, which leads me to believe the store is actually wired correctly?

My expectation here is that during a delete mutation the referenced user object should be "automatically" removed from all queries holding it, without having to manually update all related queries, as the reference is no longer active. Per your #2 point it does not seem to be handled by Apollo that way. Hence, to currently get the right effect, and short of either resetting or refetching, one is left with having to know all the potential queries that might be affected by that deletion.

Does this sound right, or am I missing it?

Thank you for your clarifications!!
Here is what I have in my code when a user is deleted:
this.apollo.mutate({
  mutation: DeleteUser,
  variables: deleteInputs(this.user.id),
  // BOZO!! Lame!
  refetchQueries: [{ query: Facilities }, { query: Accounts }, { query: OrphanUsers }],
  optimisticResponse: optimisticDelete(id),
  update: (store: any, data: any) => {
    const cache = store.readQuery({ query: PvdmUsers });
    store.writeQuery({
      query: PvdmUsers,
      data: {
        pvdmUsers: _.filter(cache.pvdmUsers, (u: UserFragment): boolean => u.id != id)
      }
    });
  }
})
  .toPromise()
  .then(() => this.back())
  .catch(error => this.error = error.message);
…On Thu, Jun 15, 2017 at 7:54 AM, Chris Guidry ***@***.***> wrote:
@derailed <https://github.com/derailed> a few thoughts:
1. The store is only going to save the user as one item in the cache
so even if multiple queries are returning the user, it should only need to
be removed once. If you haven't already, try out the Chrome Apollo
extension, which will allow you to see how the store is updated post
mutation.
2. If there are items in the store related to the user such as
BlogUser, ToDoUser, etc., those would most likely need to be explicitly
removed from the store within the "update" section of the mutation.
3. Generally, mutations are only going to impact a small number of
tables, in which case the mutation update of the store makes sense.
However, if a particular mutation is going to have wide-ranging effects,
you could consider resetting the entire store with client.resetStore()
<http://dev.apollodata.com/react/auth.html#login-logout>. It's
documented as part of the login/logout functionality, but it can be used
elsewhere.
|
@derailed - in the Apollo Chrome Extension, under the Store tab, the User is not removed after a delete mutation? If you post the code for the delete mutation, then I can give feedback on it. |
@chris-guidry - Tx Chris! I believe the user is correctly removed from the query but not from the store. So I think my confusion here is that I expect the delete to be propagated to the other queries. Here is the code: this.apollo.mutate({
mutation: DeleteUser,
variables: deleteInputs(this.user.id),
// BOZO!! Lame!
refetchQueries: [{ query: Facilities }, { query: Accounts }, { query: OrphanUsers }],
optimisticResponse: optimisticDelete(id),
update: (store: any, data: any) => {
const cache = store.readQuery({ query: PvdmUsers });
store.writeQuery({
query: PvdmUsers,
data: {
pvdmUsers: _.filter(cache.pvdmUsers, (u: UserFragment): boolean => { return u.id != id })
}
});
}
})
.toPromise()
.then(() => this.back())
.catch(error => this.error = error.message); |
@derailed You're right that the User is still in the store, but isn't given back when asking for the |
Certainly performing a mutation and then expecting that to be able to correctly update all active queries is ...ambitious. I'm not sure how many cases it works for, but I think using soft deletes might be a solution for some people. Imagine a user:
and maybe a message
Then John Doe wants to delete his account, so you return a mutation with a result, e.g.
Then it's up to components displaying users to decide if they want to check if This problem is not limited to deletes; imagine an insert which affects the results of many queries as well. If you are using React and find yourself in a situation where you are performing a mutation that you know will possibly affect a number of active queries on many components, then you may decide to move the network-level fetching out of the HOC wrapping your components to somewhere else, and give your HOC a fetchPolicy of 'cache-only'. In that case you need to perform the fetching logic somewhere else; good candidates for this are redux-observable or redux-saga. |
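The soft-delete idea above is straightforward to sketch. The `deleted` field follows the comment; the helper names and the tombstone text are mine:

```javascript
// Components listing users simply hide soft-deleted records.
function visibleUsers(users) {
  return users.filter(u => !u.deleted);
}

// Where context matters (e.g. the author of an existing message),
// a component can instead render a tombstone rather than filtering.
function displayName(user) {
  return user.deleted ? '[deleted user]' : user.name;
}
```

Because the delete mutation returns the same object with `deleted: true`, Apollo's normalized cache updates every query that references the user for free; no manual cache surgery is needed.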
This is how I handled the use case of removing an obj from multiple caches and unshifting it onto multiple other caches. the mutation... import {
mutationErrorHandler,
mutationSuccessHandler,
unshiftItemToQuery,
removeFromRouteBasedStatusQuery
} from "../../util/mutationHelpers";
const self = "estimateItem";
const selfStatus = "estimated";
const wasSuccessful = mutationResult => {
const status = mutationResult.data[self].status;
return status === selfStatus;
};
export const ESTIMATE_ITEM_MUTATION = graphql(ESTIMATE_ITEM_GQL, {
props: ({ mutate, ownProps }) => ({
estimateItem: variables => {
return mutate({
variables,
updateQueries: {
Item: (prev, { mutationResult }) => {
if (prev.item.id && prev.item.id !== mutationResult.data[self].id)
return prev;
return update(prev, {
item: {
status: { $set: mutationResult.data[self].status },
estimated_at: { $set: mutationResult.data[self].estimated_at },
estimated: { $set: mutationResult.data[self].estimated },
last_updated: { $set: mutationResult.data[self].last_updated }
}
});
},
EmployeeItems: (prev, data) =>
removeFromRouteBasedStatusQuery(
prev,
data,
"EmployeeItems",
wasSuccessful,
variables,
ownProps
),
QueueItems: (prev, data) =>
removeFromRouteBasedStatusQuery(
prev,
data,
"QueueItems",
wasSuccessful,
variables,
ownProps
),
AllItems: (prev, data) =>
removeFromRouteBasedStatusQuery(
prev,
data,
"AllItems",
wasSuccessful,
variables,
ownProps
)
},
update: (proxy, mutationResult) => {
// Employee Items
unshiftItemToQuery(proxy, {
mutationResult,
query: EMPLOYEE_ITEMS_QUERY,
variables: rootQueryVariables("myItems", {
employee_id: variables.employee_id || ownProps.employee_id
}),
ownProps,
which: "myItems",
self
});
// All Items
unshiftItemToQuery(proxy, {
mutationResult,
query: ALL_ITEMS_QUERY,
variables: rootQueryVariables("allItems"),
ownProps,
which: "allItems",
self
});
}
}).then(
mutationResult =>
mutationSuccessHandler(
mutationResult,
wasSuccessful,
"Item Estimated",
ownProps
),
mutationErrorHandler
);
}
})
}); Used in Update Queries... export const removeItemFromItemsList = (
previousResult,
mutationResult,
objId
) => {
if (!previousResult) {
console.log("no previous query result");
return;
}
const index = _.findIndex(previousResult.items.data, v => v.id === objId);
if (index < 0) {
console.log("item not found in previous query result");
return;
}
return update(previousResult, {
items: {
data: { $splice: [[index, 1]] }
}
});
};
export const removeFromRouteBasedStatusQuery = (
prev,
data,
opName,
wasSuccessful,
variables,
ownProps
) => {
const { mutationResult, queryName } = data;
if (
!ownProps.isRouteBasedQuery ||
ownProps.routeBasedQueryType !== "status" ||
!wasSuccessful(mutationResult) ||
queryName !== opName
) {
return prev;
}
return removeItemFromItemsList(prev, mutationResult, variables.id);
}; Used in update export const unshiftItemToQuery = (
proxy,
{ mutationResult, query, variables = {}, ownProps, which, self }
) => {
try {
/**
* Do not unshift on the currently viewed query, it will get updated, in place by default Apollo Behavior
*/
if (ownProps.currentRoute === which) {
return;
}
/**
* Stop writes to EmployeeQuery if it is not this users item
*/
if (which === "myItems") {
if (skipMyItemsUpdate({ variables, ownProps })) return;
// Add this to account for mutation updates that do not provide employee_variables
if (!variables.employee_id)
variables.employee_id = ownProps.employee_id;
}
const result = mutationResult.data[self];
if (!result) {
console.log(`Could not get data.${self} result from %o`, mutationResult);
return;
}
const data = proxy.readQuery({ query, variables });
if (!data) {
console.log(
`Could not find ${which} query to update for ${self} mutation`,
query,
variables
);
return;
}
// If this item exists previously, splice it out
const existingIndex = _.findIndex(data.items.data, { id: result.id });
let existing = {};
if (existingIndex > -1) {
existing = data.items.data[existingIndex];
data.items.data.splice(existingIndex, 1);
}
data.items.data.unshift({ ...existing, ...result });
proxy.writeQuery({ query, variables, data });
} catch (e) {
console.error(
`Could not add %o to ${which} query from ${self} mutation`,
mutationResult
);
console.trace(e);
}
}; I have a removeItemFromQuery helper which is nearly identical to unshiftItemToQuery. Created the helpers and use the "self" const etc. as we have (right now) 20+ mutations this can run on and duplicating this in every mutation was a logistical nightmare. The removals that are happening in the updateQueries call now can (and will) be moved into update and handled the same way... If you're doing infinite scroll or any type of query pagination, connections are a must! |
On our backend, we decided to have a state on our record of We then had some middleware on our backend that, if inactive, we nullified all values (alternatively, you could hard delete the values in the db). Finally, on our frontend, we then filtered by status of We didn't want to manually handle the Apollo state in our app so this was our workaround solution. |
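The backend middleware described above can be sketched as follows. The exact status values were elided in the comment, so `"inactive"` and the preserved field names are assumptions:

```javascript
// Fields that survive scrubbing; everything else is nullified
// (alternatively, hard delete the values in the db).
const PRESERVED = new Set(['id', 'status']);

function scrubInactive(record) {
  if (record.status !== 'inactive') return record;
  const scrubbed = {};
  for (const key of Object.keys(record)) {
    scrubbed[key] = PRESERVED.has(key) ? record[key] : null;
  }
  return scrubbed;
}
```

The frontend then only has to filter lists by status; the "deleted" record still flows through Apollo's cache normally, so no manual store manipulation is required.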
I finally found a solution for my case with multiple queries. There is a race condition when using In my case the I have many different queries (sorting/filtering) objects.
The user can access the detail view of an object from any list to remove an object using a mutation. When the object is deleted from database I need to update the detail view to redirect out and also the query results for each list which could contain that object. I don't want to remove the object manually from each list with
If I wait for the server response to remove the object reference from the store the queries are refetched but the UI still shows the removed object in the lists.
Updating the object reference before
The UI is updated correctly with the data from the lists refetched (not from the store). This is a kind of optimistic response because I'm updating the reference in the store before the object is removed from database.
Hope this helps |
We tried a different way by making use of a query's Goals
Brief explanation We have a We looked at a query's To achieve this we need a The sequence is as follows:
Example Connected component query: const BookFeedContainer = graphql(
BookQueries.bookFeed,
{
options: (props: IOwnProps) => {
const bookFilter: InputBookFilter = props.bookFilter;
// BookQueries is a namespace under which we host all
// BookQuery graphql related info (fragments, queries, mutations, types etc)
const queryName = BookQueries.names.bookFeed;
const inputKey = InputBookFilter.toKey(bookFilter);
// every time the query runs its fetch policy is determined by the queryCache
const fetchPolicy = queryCache.getFetchPolicy(queryName, inputKey);
return ({
variables: {
bookFilter: props.bookFilter,
},
fetchPolicy,
});
},
}
)(BookFeed); Somewhere else in the app a mutation occurs: handleClickDeleteBook = () => {
this.props.deleteBook()
.then((result: any) => {
// invalidate (refetch) all related queries if the deletion succeeds
// this causes the queryCache to provide a "network-only" fetchPolicy
// the next time the bookFeed query runs
// NOTE that this does NOT cause the query to be invoked right away
// if you need the query update to run immediately you must still call updateQueries
// we do that on queries that are visible on the page
queryCache.refetch([BookQueries.names.bookFeed]);
// ... rest of code
})
.catch(err => {
console.error(getGraphQLErrorMessage(err));
} Query cleanup - most likely in queryCache.clean(BookQueries.names.bookFeed); Query Filtering Each query has a list of input ids (inputKey in our case) keyed in on the query cache by query name: queryCache = {
[queryName:string]: Array<inputKey:string>
} If an input filter's key is in the query's cached list then the query / inputFilter combination is "clean" and can be fetched from the cache (with fallback to network = default apollo config). If the input filter's key is NOT in the list or the list is missing from the cache then the query / filter combination is dirty and it must be fetched from the network. This design allows to mark all query / filter combinations as dirty (either on a delete or add op) by just deleting the query's list of input keys (note that I do not need to know all the filter keys that the user might have accessed): import queryCache from "query-cache"; // this is a singleton
const queryName1 = "BookQueries_bookFeed"
const queryName2 = "SomeOtherRelatedQuery_name"
// this invalidates each queryName's list on the cache thus forcing
// ALL combinations of each query to be refetched from the network on next call
queryCache.refetch([queryName1, queryName2]); Then when executing the query I can retrieve the fetch policy directly from the queryCache: const fetchPolicy1 = queryCache.getFetchPolicy(queryName1, inputKey); // cache-first or network-only
// or without filtering
const fetchPolicy2 = queryCache.getFetchPolicy(queryName2); Detailed code queryCache.ts: /**
* Hash map to keep track of all queries executed so far. Each query
* can have multiple input filters. When the query is invalidated (isDirty)
* then the cache policy defaults to "network-only". Else it's "cache-first".
* Each time an input is fetched its key is placed into the inputKeys list
* for the specific query. Remove its key from the list to force it to be
* loaded from the network the next time
*/
class QueryCache {
private cache = {};
/**
* Each query has a list keyed in by the query name. The list contains
* the keys for all the inputs that have been previously fetched
* and are "clean" (they can be safely retrieved from the cache). If an
* input's key is not in the list then it must be refetched from the network.
* If the list is missing from the cache then any variation of the query
* (or a query without input params) must be refetched from the network.
*
* @param {string} queryName - The graphql name of the query being targeted.
*/
private getQueryList = (queryName: string) => this.cache[queryName];
/**
* Checks whether a specific input key for a given query should be
* re-fetched. Only inputs that are already in the inputKeys are "clean".
*
* @param {string} queryName - graphql name of the query being targeted.
* @param {string | undefined} inputKey - key uniquely identifying
* the query's input parameters.
*/
private isDirty = (queryName: string, inputKey?: string) => {
const queryList = this.getQueryList(queryName);
// if no query list on the cache then query is dirty
// re-fetch from the network
if (!queryList) {
return true;
}
return inputKey ? this.getQueryList(queryName).indexOf(inputKey) < 0 : false;
};
/**
* Invalidates the query's entire input list. All versions of the query regardless
* of input will be re-fetched from the network.
*
* @param {Array<string>} queryNames - list of names for all queries that should
* be re-fetched from the network on the next call.
*/
refetch = (queryNames: Array<string> = []) => {
queryNames.forEach((name) => { delete this.cache[name] });
}
/**
* Releases this query / filter(s) combination from fetch network-only constraints.
*
* @param {string} queryName - graphql name of the query being released
* from network-only constraints.
* @param {string | undefined } inputKey - key uniquely identifying
* the query's input parameters.
*/
clean = (queryName: string, inputKey?: string) => {
const queryList = this.getQueryList(queryName) || [];
queryList.push(inputKey);
if (!this.cache[queryName]) {
this.cache[queryName] = queryList;
}
}
/**
* Provide the appropriate fetch policy according to the cache's status.
*
* @param {string} queryName - graphql name of the query for which the
* fetch policy is being retrieved.
* @param {string | undefined } inputKey - key uniquely identifying
* the query's input parameters.
*/
getFetchPolicy = (queryName: string, inputKey?: string) => (
this.isDirty(queryName, inputKey) ? "network-only" : "cache-first"
)
}
const queryCache = new QueryCache();
export default queryCache; // singleton bookQueries.ts (simplified - in reality we use typescript namespaces): export const BookQueries = {
bookFeed: gql`
query BookQueries_bookFeed($bookFilter: InputBookFilter!, $cursor:String){
bookFeed(bookFilter:$bookFilter, cursor:$cursor)
@connection(key: "bookFeed", filter:["bookFilter"]) {
list {
id
}
cursor
hasNext
}
}
`,
names: {
bookFeed: "BookQueries_bookFeed",
}
}; InputBookFilter.ts (also simplified): export default class InputBookFilter {
private privacy: string
constructor(privacy:string = "private"){
this.privacy = privacy;
}
/**
* All this has to do is provide a unique key given the input parameters.
*/
static toKey = (filter: InputBookFilter) => `inputBookFilter_${filter.privacy}`;
} We haven't yet explored all the edge cases but on the first immediate try this works really well for us. |
@maierson thanks for sharing your approach, it did sound very intriguing. Just wondering: have you discovered any drawbacks or hard-to-handle edge cases later? |
@coodoo so far it works really well for us. If I run into anything meaningful I'll make sure to update this thread. The next challenge I see is how to handle a delete / creation in a list where a user has already downloaded a very long list of (paged) items. Do we re-fetch the entire list or just part of it based on some form of cursor data? For deletes: this is where I feel it would be useful to be able to evict an item from the apollo cache and have all queries containing it update locally to reflect the removal. Having some experience with writing a client side cache I am aware of the complexities and risks of inconsistency that this can introduce in the data. However it's a very high value add if implemented properly. |
In Prisma they support the transactional mutations concept, basically cascading deletes with the directive ( Maybe Apollo should pick up that logic and apply those rules to the cache, so whenever a node is deleted from the cache (using Obviously this could only work with Prisma in the first place, but it could open a chapter of supporting other backend frameworks and databases (in the end Prisma uses MySQL, and cascading deletes are nothing unfamiliar to most DB engines) |
This is actually not true. If there are multiple queries, with different parameters, that return a list of items, each one will have a different list of result item IDs in the cache, because the client has no way of knowing which items are included for a given set of parameters. This is why updates to fields within those items magically work but deletes don't: the id of the deleted item has to be removed from each of those lists. But if Apollo merely provided a way to mark a given typename/id pair as invalidated, it could automatically refetch any queries including that item in their result set. I have started using helper functions to look through Next I plan to fetch enough type metadata from the backend to be able to scan all levels of the active queries for the deleted item and refetch any that contain it. |
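A sketch of the ROOT_QUERY-scanning idea described above. The helper name and the simplified snapshot shape are mine; a real snapshot from apollo-cache-inmemory's `cache.extract()` stores references with extra metadata, so a production version would need to match that shape:

```javascript
// Given a normalized cache snapshot and a data id like "User:1",
// return the names of the top-level (ROOT_QUERY) fields whose
// results reference that id. Those are candidates for refetching.
function queriesReferencing(snapshot, dataId) {
  const root = snapshot.ROOT_QUERY || {};
  const hits = [];
  for (const [field, value] of Object.entries(root)) {
    // Treat single references and lists of references uniformly.
    const refs = Array.isArray(value) ? value : [value];
    if (refs.some(r => r && r.id === dataId)) hits.push(field);
  }
  return hits;
}
```

This only scans one level deep; as the comment notes, catching references nested inside other objects requires type metadata (or a recursive walk of the whole snapshot).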
I just made a library to magically refetch all relevant active queries after creates and deletes! It still has some limitations I need to work through, |
I just realized that updating the cache after associating/dissociating objects can also be well served by |
@jedwards1211 this is a neat idea but would be very chatty and data intensive if there are a lot of updates. We have a chat app that has conversations and it receives messages via subscriptions, if it were to reload all the messages of a conversation every time a new message was added then it would use a lot of data for mobile users, and not be that responsive as it adds another network call when the data is already in the store. We first solved this using redux-observable and keeping a A new approach we are trying is to use our own cache to replace 'apollo-cache-inmemory' which emits a stream of events when new data is written, or update, to the normalised cache. We then have The goal is to have data changes automatically update relevant queries that are saved in the cache, without network fetching. If anyone is interested in this I can share the code but it's at an early stage. |
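A minimal sketch of the evented-cache idea above. The real version wraps apollo-cache-inmemory and exposes an rxjs observable; this uses a plain listener list so it stays self-contained, and the event shape is an assumption:

```javascript
// Normalized cache that notifies subscribers on every write, so
// query-level state holders can react to entity changes locally
// instead of refetching over the network.
class EventedCache {
  constructor() {
    this.data = {};      // dataId -> normalized entity
    this.listeners = []; // change subscribers
  }
  subscribe(fn) {
    this.listeners.push(fn);
  }
  write(dataId, value) {
    const previous = this.data[dataId];
    this.data[dataId] = value;
    // Emit both old and new values so subscribers can diff.
    this.listeners.forEach(fn => fn({ dataId, previous, value }));
  }
}
```

A query-state holder subscribed to this stream can add or remove the changed entity from its result list, which is roughly the reducer-per-query behavior discussed later in the thread.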
@ravenscar Well I'm not saying you should use it for any and every use case, and obviously it doesn't make sense for that case. If it doesn't seem useful for anything else in your app, then your app probably just has different needs than mine. It sounds like you would agree with me that custom I'm a bit confused how the custom cache approach works in your new message example. When a new message arrives via subscription it gets written to the cache, and then a listener picks that up? |
If I were to go full on in the opposite direction from refetching, I would probably just design a system to listen to all the relevant events for a given query on my database and send events to the client specific to each query telling it what to add/remove/change in its result set, rather that trying to write a lot of custom update logic on the client. My data is very relational, so the number of update cases I would have to handle for each query would break my brain. |
What I really wish I could do is ask the |
@ravenscar I should also clarify what I'm using the magic refetching for right now in my app: whenever a user creates or deletes an organization, another user, a user group, a device, a device group, etc. or changes associations between them. These are infrequent operations so the extra refetching is a small price to pay for a strong guarantee of correct updates without handling numerous cases in custom In frequent update cases like messages, you're still better off doing something custom. |
@jedwards1211 Hey, I'm not saying what you've done isn't useful - sorry if it came across that way! We did try this initially (refetching active queries) but for us it was too data heavy.
I do agree that the logic belongs with the Query and not the Mutation - the update logic in the mutation is ass-backwards in my opinion. It would be nice if apollo queries had something like reducers which worked on the updates that hit the store, but unfortunately that would probably require some kind of normalisation and I don't think this is a prerequisite for the store (even though that's exactly how it works).
We have a way of detecting changes in the cache and it's exposed as an rxjs observable. This can be subscribed to by many observers which often manage the state of queries, and they add or remove the changed entity from collections. If you use redux this is somewhat similar to a reducer where the state would be the result of the query and the actions would be the entity changes. We use a CQRS paradigm not CRUD so it's a bit easier to detect changes, but I don't think that really matters too much.
It would be really nice if Basically roll your own cache like this and track __typenames When you create the |
I get that you're detecting changes to the cache, but what initial change do you make when an object gets deleted? Do you set something to null? Also if you have plans of making this a general purpose lib it might be better to use zen-observable since Apollo is using it, though that's just my opinion. |
If you want to delete it from the store you can call By far the easiest approach for us has been to have a field on the object called "deleted", then when you delete an object just send it through as This all falls on the cache invalidation side of the two hard things of computer science, and it seems really difficult (perhaps impossible?) to find a generalised solution. Certainly I haven't seen any solution people are actually happy with with respect to Apollo's cache. |
Sorry to chime in so late - I've just run across the same issues as the rest of you. Now, just to confirm for my own sanity - when deleting an object, you need to find every single reference of it in the client and remove them all manually? Is that not kind of a huge step back from Redux + Normalizr? At least in that case all your entities are in the same place. Feels a bit like jQuery 😄 |
Normalizr provides a way to automatically delete all references to an object without violating the types you've defined? |
@MitchEff In normalizr gaearon recommends using a flag to indicate deleted: https://github.com/paularmstrong/normalizr/issues/21 |
I did mine a kind of funny way, but it takes advantage of the fact that every entity is just a key in your state. In my 'deleteEntity' action, I pass in type, entity and parentType. I can then go:
It's a bit simplified above, but you see what I mean. It's not perfect, but does 99% of the work in just a couple lines. I don't think you can achieve something quite as neat in GraphQL - you'd need to find every query that mentions the entity's parent and remove it manually, right? |
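The 'deleteEntity' idea above can be sketched for a normalizr-style state shape: drop the entity's key, then splice its id out of every parent's child list. The state shape is assumed, and an id is passed instead of the whole entity for brevity:

```javascript
// state = {
//   comments: { c1: {...}, c2: {...} },
//   posts: { p1: { comments: ['c1', 'c2'] } },
// }
function deleteEntity(state, { type, id, parentType }) {
  // Drop the entity itself (computed-key rest destructuring).
  const { [id]: removed, ...remaining } = state[type];
  // Remove the id from every parent's child list of that type.
  const parents = {};
  for (const [pid, parent] of Object.entries(state[parentType])) {
    parents[pid] = { ...parent, [type]: parent[type].filter(cid => cid !== id) };
  }
  return { ...state, [type]: remaining, [parentType]: parents };
}
```

As the comment says, this works because every entity lives under exactly one key; the GraphQL-cache equivalent would require knowing every query that embeds the entity, which is the whole problem this thread is circling.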
To help provide a more clear separation between feature requests / discussions and bugs, and to help clean up the feature request / discussion backlog, Apollo Client feature requests / discussions are now being managed under the https://github.com/apollographql/apollo-feature-requests repository. This issue has been migrated to: apollographql/apollo-feature-requests#5 |
What is the best practice for deleting items from the Apollo store when a mutation deletes them on the server? I could not find anything related to this in the docs. Do I just have to return the ids of deleted items as a result of the mutation and then use reducer/updateQueries to manually delete the items from the store? Or is there some way to let Apollo Client handle it automatically? Any help is appreciated.