Add retryExchange to core #481

Closed · wants to merge 1 commit
10 changes: 10 additions & 0 deletions docs/api.md
@@ -491,6 +491,16 @@ a corresponding `OperationResult` yet. Any duplicate `Operation` that it
receives is filtered out if the same `Operation` has already been received
and is still waiting for a result.

### retryExchange (Exchange factory)

The `retryExchange` is of type `Options => Exchange`. It retries requests that
fail due to network errors. It accepts two options: `minDelayMs`, the initial
delay before a failed request is retried, and `maxDelayMs`, the maximum delay
between retries.
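
For example, a client could be configured with it as follows. This is a minimal sketch: the endpoint URL and the `500`/`10000` millisecond values are illustrative, not defaults, and placing it before the `fetchExchange` is one reasonable ordering so that retried operations are still forwarded to the fetch step.

```ts
import {
  createClient,
  dedupExchange,
  cacheExchange,
  fetchExchange,
  retryExchange,
} from 'urql';

const client = createClient({
  url: 'https://example.com/graphql', // hypothetical endpoint
  exchanges: [
    dedupExchange,
    cacheExchange,
    // Retry operations that fail with a network error, starting at a 500ms
    // delay and backing off to at most 10s between attempts
    retryExchange({ minDelayMs: 500, maxDelayMs: 10000 }),
    fetchExchange,
  ],
});
```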

The `retryExchange` will exponentially increase the delay from `minDelayMs` up
to `maxDelayMs`, with some random jitter added to avoid the [thundering herd problem](https://en.wikipedia.org/wiki/Thundering_herd_problem).
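
To make the growth concrete, here is a small sketch that mirrors the delay calculation added in `src/exchanges/retry.ts` (where the jitter factor is drawn once per module load from the range 1.5–2.5):

```ts
// Jittered backoff factor, as in src/exchanges/retry.ts
const BACKOFF_FACTOR = Math.random() + 1.5;

// Given the previous delay, compute the next one: the delay keeps being
// multiplied by the factor until another multiplication would reach
// maxDelayMs, after which it stops growing.
const nextDelay = (previousDelayMs: number, maxDelayMs: number): number =>
  previousDelayMs * BACKOFF_FACTOR < maxDelayMs
    ? previousDelayMs * BACKOFF_FACTOR
    : previousDelayMs;

// With minDelayMs = 1000, maxDelayMs = 15000 and a factor of roughly 2, the
// delays applied to successive retries are about 2000ms, 4000ms, 8000ms, and
// then stay near 8000ms, since the next doubling would exceed 15000ms.
```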

### fallbackExchangeIO (ExchangeIO)

This is an `ExchangeIO` function that the `Client` adds on after all
1 change: 1 addition & 0 deletions docs/exchanges.md
@@ -17,6 +17,7 @@ These exchanges can be imported from the `urql` package.
- `fetchExchange`: sends operations to GraphQL HTTP endpoints and resolves results
- `ssrExchange`: used to cache results during SSR and rehydrate them on the client-side
- `subscriptionExchange`: used to support GraphQL subscriptions
- `retryExchange`: retries requests that fail due to network errors

## Addons

1 change: 1 addition & 0 deletions src/exchanges/index.ts
@@ -6,6 +6,7 @@ export { dedupExchange } from './dedup';
export { fetchExchange } from './fetch';
export { fallbackExchangeIO } from './fallback';
export { composeExchanges } from './compose';
export { retryExchange } from './retry';

import { cacheExchange } from './cache';
import { dedupExchange } from './dedup';
90 changes: 90 additions & 0 deletions src/exchanges/retry.ts
@@ -0,0 +1,90 @@
import { Exchange, Operation } from "../";
import {
  makeSubject,
  share,
  pipe,
  merge,
  filter,
  tap,
  fromValue,
  delay,
  mergeMap,
  takeUntil
} from "wonka";

interface RetryExchangeOptions {
  minDelayMs?: number;
  maxDelayMs?: number;
}

// Random backoff factor in the range [1.5, 2.5), drawn once at module load,
// used to grow the retry delay and avoid the thundering herd problem
const BACKOFF_FACTOR = Math.random() + 1.5;

export const retryExchange = (options: RetryExchangeOptions): Exchange => {
  const MIN_DELAY = options.minDelayMs || 1000;
  const MAX_DELAY = options.maxDelayMs || 15000;

  return ({ forward }) => ops$ => {
    const sharedOps$ = pipe(ops$, share);
    const [retry$, nextRetryOperation] = makeSubject<Operation>();

    const retryWithBackoff$ = pipe(
      retry$,
      mergeMap(op => {
        const { key, context } = op;

        let d = context.retryDelay || MIN_DELAY;
        if (d * BACKOFF_FACTOR < MAX_DELAY) {
          d *= BACKOFF_FACTOR;
        }

        // We stop the retries if a teardown event for this operation comes in
        // But if this event comes through regularly we also stop the retries, since it's
        // basically the query retrying itself, so no backoff should be added!
        const teardown$ = pipe(
          sharedOps$,
          filter(op => {
            return (
              (op.operationName === "query" || op.operationName === "teardown") &&
              op.key === key
            );
          })
        );

        // Add new retryDelay to operation
        return pipe(
          fromValue({
            ...op,
            context: {
              ...op.context,
              retryDelay: d
            }
          }),
          // Here's the actual delay
          delay(d),
          // Stop retry if a teardown comes in
          takeUntil(teardown$)
        );
      })
    );

    const result$ = pipe(merge([sharedOps$, retryWithBackoff$]), forward, share);

    const successResult$ = pipe(
      result$,
      // We let through all non-network-failed results
      filter(res => !res.error || !res.error.networkError)
    );

    const failedResult$ = pipe(
      result$,
      filter(res => !!(res.error && res.error.networkError)),
      // Send failed responses to the retry$ subject
      tap(res => nextRetryOperation(res.operation)),
      // Only let through the first failed response
      filter(res => !res.operation.context.retryDelay)
    );

    return merge([successResult$, failedResult$]);
  }
};