
fix(graphcache): Prevent offlineExchange from issuing duplicate operations #3200

Merged
kitten merged 2 commits into main from fix/offline-storage-timing-2 on May 31, 2023

Conversation

kitten
Member

@kitten kitten commented Apr 29, 2023

Follow-up to #3199
Resolves #3093

Set of changes

  • Add a crude teardown+seen routine to failedQueue in offlineExchange's flushQueue (a rough sketch of the idea follows)
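
A rough sketch of what such a teardown + seen routine could look like, assuming a failedQueue array of operations and a dispatch callback standing in for the offlineExchange internals (hypothetical, not the actual diff):

import type { Operation } from '@urql/core';

// Hypothetical flush helper: teardowns cancel queued operations with the
// same key, and each remaining key is reissued at most once.
function flushQueue(failedQueue: Operation[], dispatch: (op: Operation) => void) {
  const seen = new Set<number>();
  // First pass: collect keys that were torn down while offline.
  for (const op of failedQueue) {
    if (op.kind === 'teardown') seen.add(op.key);
  }
  // Second pass: reissue every other operation once, skipping duplicates.
  const queue = failedQueue.slice();
  failedQueue.length = 0;
  for (const op of queue) {
    if (op.kind !== 'teardown' && !seen.has(op.key)) {
      seen.add(op.key);
      dispatch(op);
    }
  }
}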

@frederikhors
Contributor

frederikhors commented May 6, 2023

I forgot to write here too that, unfortunately, this still doesn't fix #3093 as-is.

See #3093 (comment).

@kitten
Member Author

kitten commented May 30, 2023

@frederikhors: Should be fixed now. In short, coming back to this, I think it's best to just force the flushed operations to always use cache-first.
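
A minimal sketch of that idea, assuming the flush re-dispatches each queued operation with its request policy overridden via makeOperation (hypothetical, not the actual change in this PR):

import { makeOperation } from '@urql/core';
import type { Operation } from '@urql/core';

// Rewrite a queued operation so that, when it is flushed after coming back
// online, it resolves from the cache first instead of refetching.
const toCacheFirst = (op: Operation): Operation =>
  makeOperation(op.kind, op, {
    ...op.context,
    requestPolicy: 'cache-first',
  });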

@frederikhors
Contributor

frederikhors commented May 31, 2023

Ok, I tried. It works, but:

The cached data is used on the cold load now. ✅

There are no longer multiple requests for the same call. ✅

But I realized that now, on cold start (page reload), the loading indicator I had before is gone. I was using the following custom exchange:

import { pipe, tap } from 'wonka';
import type { Exchange, Operation, OperationResult } from '@urql/core';

// addPendingRequest / deletePendingRequest are defined elsewhere in the app.
const pendingRequestsExchange: Exchange = ({ forward }) => {
  // Register a pending request for incoming queries/mutations and clear it
  // on teardown; subscriptions are ignored.
  function before(op: Operation) {
    if (op.kind === 'subscription') return;
    if (op.kind === 'teardown') {
      console.log('call deletePendingRequest');
      deletePendingRequest(op.key);
    } else {
      console.log('call addPendingRequest');
      addPendingRequest(op.key);
    }
  }
  // Clear the pending request once a result comes back.
  function after(op: OperationResult) {
    deletePendingRequest(op.operation.key);
  }
  return (ops$) => {
    const forward$ = pipe(ops$, tap(before));
    return pipe(forward(forward$), tap(after));
  };
};

This happens because addPendingRequest and deletePendingRequest are now both called immediately, without waiting for the pending request to complete.

This is not what I expect during a cold start with the cached data already there.

@frederikhors
Contributor

Added reproduction for the latest issue here: #3093 (comment).

@kitten
Member Author

kitten commented May 31, 2023

@frederikhors: I can't debug your exchange for you, but you will have to adjust your after call to be more precise. If this is just to keep track of a global loading state, I'd suggest being more precise about cache-only operations in after, and/or maybe adding a delay or debounce here.

Edit: For now, I'll consider this solved, since I'm assuming your after sees an intermediary result.
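
A small sketch of that suggestion, reusing the after tap from the exchange above; deletePendingRequest is the same user-defined helper, declared here only to keep the snippet self-contained:

import type { OperationResult } from '@urql/core';

declare function deletePendingRequest(key: number): void;

// Only clear the pending request once a final result arrives. A result with
// stale: true is an intermediary cache result that will be followed by a
// network result, so it is ignored here.
function after(result: OperationResult) {
  if (result.stale) return;
  deletePendingRequest(result.operation.key);
}

Adding a short delay or debounce around addPendingRequest/deletePendingRequest would have a similar effect for very fast cache hits.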

@kitten kitten merged commit 89b5f8c into main May 31, 2023
@kitten kitten deleted the fix/offline-storage-timing-2 branch May 31, 2023 09:56
@github-actions github-actions bot mentioned this pull request May 31, 2023
@frederikhors
Contributor

frederikhors commented May 31, 2023

Thank you for the answer, @kitten.

you will have to adjust your after call to be more precise

The thing is, the after() function is NEVER called on the first load. LOL.

First, the else branch of before is called with op:

{"key": 6065549779, "kind": "query", ...}

and then IMMEDIATELY the if (op.kind === 'teardown') branch of before is called with op:

{"key": 6065549779, "kind": "teardown", ...}

Can you help me understand whether this is new behavior or a problem in my exchange?

I do not understand why the teardown operation is issued. If this is a cache-and-network call, it should wait for the server response.

This only happens when the page is reloaded, on what I call a "cold start".

This new behavior is reproduced here: https://codesandbox.io/p/sandbox/issue-urql-6-duplicated-requests-forked-0ybcue. You can see the after call is never issued.

Development

Successfully merging this pull request may close these issues.

Using [email protected] or 6 the cache in the default storage is not loaded anymore on the first page load