
(graphcache) - Add mergeMode to simplePagination helper #1174

Merged
merged 9 commits into from
Nov 20, 2020
5 changes: 5 additions & 0 deletions .changeset/little-crabs-sell.md
@@ -0,0 +1,5 @@
---
'@urql/exchange-graphcache': minor
---

Add a `mergeMode: 'before' | 'after'` option to the `simplePagination` helper to define whether pages are merged before or after preceding ones when paginating, similar to `relayPagination`'s option.
11 changes: 6 additions & 5 deletions docs/api/graphcache.md
@@ -314,7 +314,7 @@ cache.readFragment(
`,
{ id: 1 }, // this identifies the fragment (User) entity
{ groupId: 5 } // any additional field variables
);
```

[Read more about using `readFragment` on the ["Computed Queries"
@@ -473,10 +473,11 @@ on the "Computed Queries" page.](../graphcache/computed-queries.md#pagination)
Accepts a single object of optional options and returns a resolver that can be inserted into the
[`cacheExchange`'s](#cacheexchange) [`resolvers` configuration.](#resolvers-option)

| Argument         | Type                  | Description |
| ---------------- | --------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| `offsetArgument` | `?string`             | The field arguments' property, as passed to the resolver, that contains the current offset, i.e. the number of items to be skipped. Defaults to `'skip'`. |
| `limitArgument`  | `?string`             | The field arguments' property, as passed to the resolver, that contains the current page size limit, i.e. the number of items on each page. Defaults to `'limit'`. |
| `mergeMode`      | `'after' \| 'before'` | This option defines whether pages are merged before or after preceding ones when paginating. Defaults to `'after'`. |
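For instance, wiring these options into the `cacheExchange` might look like the sketch below. The `messages` field and its `offset`/`first` argument names are illustrative assumptions, not from the original docs; adjust them to your own schema.

```js
import { cacheExchange } from '@urql/exchange-graphcache';
import { simplePagination } from '@urql/exchange-graphcache/extras';

const cache = cacheExchange({
  resolvers: {
    Query: {
      // `messages` is a hypothetical paginated field; swap in your own.
      messages: simplePagination({
        offsetArgument: 'offset', // our schema's offset argument name
        limitArgument: 'first', // our schema's limit argument name
        mergeMode: 'before', // newest page is prepended, e.g. chat history
      }),
    },
  },
});
```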

Once set up, the resulting resolver automatically concatenates all pages of a given field.
Queries to this resolver will from then on only return the infinite, combined list.
62 changes: 43 additions & 19 deletions docs/graphcache/computed-queries.md
@@ -6,10 +6,10 @@ order: 2
# Computed Queries

When dealing with data we could have special cases where we want to transform
the data between the API and frontend logic. For example:

- alter the format of a date, perhaps from a UNIX timestamp to a `Date` object.
- if we have a list of a certain entity in the cache and then want to query a
specific entity, chances are this will already be (partially) available in the
cache's list.

@@ -60,7 +60,7 @@ Our cache methods have three arguments:

- `entity` – This can either be an object containing a `__typename` and an `id` or
`_id` field _or_ a string key leading to a cached entity.
- `field` – The field we want data for. This can be a relation or a single property.
- `arguments` – The arguments to include on the field.

To get a better grasp, let's look at a few examples.
@@ -106,9 +106,9 @@ console.log(name); // 'Bar'
```

This can help solve practical use cases like date formatting,
where we would query the date and then convert it in our resolver.

We can also link entities that come from a list. Imagine the scenario where
we have queried `todos` but now want the detailView of a single `todo`.

```js
@@ -124,11 +124,11 @@ cache resolve this to the full entity.

Note that resolving from a list to details can lead to partial data. This will result in
a network request to fetch the full data when fields are missing.
When graphcache isn't [aware of our schema](./schema-awareness.md) it won't show partial data.

### Reading a query

Another method the cache allows is to let us read a full query. This method
accepts an object of `query` and optionally `variables`.

```js
@@ -169,7 +169,7 @@ fragment.

### Simple Pagination

Given we have a schema that uses some form of `offset` and `limit` based pagination, we can use the
`simplePagination` exported from `@urql/exchange-graphcache/extras` to achieve an endless scroller.

This helper will concatenate all queries performed into one long data structure.
@@ -187,20 +187,44 @@ const cache = cacheExchange({
});
```

This form of pagination accepts an object as an argument; we can specify two
options here, `limitArgument` and `offsetArgument`, which default to `'limit'`
and `'skip'` respectively. This way we can use the keywords that are in our queries.

We may also add the `mergeMode` option, which defaults to `'after'` and can otherwise
be set to `'before'`. This determines the order in which pages are merged when paginating.
The default `'after'` mode assumes that pages that come in last should be merged
_after_ the preceding ones. The `'before'` mode assumes that pages that come in last
should be merged _before_ the preceding ones, which can be helpful in a reverse
endless scroller (e.g. a chat app).

Example series of requests:

```
// An example where mergeMode: after works better
skip: 0, limit: 3 => 1, 2, 3
skip: 3, limit: 3 => 4, 5, 6

mergeMode: after => 1, 2, 3, 4, 5, 6 ✔️
mergeMode: before => 4, 5, 6, 1, 2, 3

// An example where mergeMode: before works better
skip: 0, limit: 3 => 4, 5, 6
skip: 3, limit: 3 => 1, 2, 3

mergeMode: after => 4, 5, 6, 1, 2, 3
mergeMode: before => 1, 2, 3, 4, 5, 6 ✔️
```
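The merge behavior in the series above can be sketched as a small pure function. `mergePage` is a hypothetical helper, not a `@urql/exchange-graphcache` export; it just mimics the observable deduplicate-then-merge order.

```js
// Sketch of the observable merge order: deduplicate the incoming page
// against what we already have, then append or prepend it.
// `mergePage` is illustrative, not part of the library's API.
function mergePage(existing, incoming, mergeMode = 'after') {
  const seen = new Set(existing);
  const fresh = incoming.filter(item => !seen.has(item));
  return mergeMode === 'after'
    ? [...existing, ...fresh]
    : [...fresh, ...existing];
}

console.log(mergePage([1, 2, 3], [4, 5, 6], 'after')); // [1, 2, 3, 4, 5, 6]
console.log(mergePage([4, 5, 6], [1, 2, 3], 'before')); // [1, 2, 3, 4, 5, 6]
```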

### Relay Pagination

Given we have a [relay-compatible schema](https://facebook.github.io/relay/graphql/connections.htm)
on our backend, we can offer the possibility of endless data resolving.
This means that when we fetch the next page in our data
received in `useQuery` we'll see the previous pages as well. This is useful for
endless scrolling.

We can achieve this by importing `relayPagination` from `@urql/exchange-graphcache/extras`.

```js
import { cacheExchange } from '@urql/exchange-graphcache';
@@ -217,7 +241,7 @@ const cache = cacheExchange({

`relayPagination` accepts an object of options; for now we offer a single
option, `mergeMode`. This defaults to `inwards` and can otherwise
be set to `outwards`. This determines how pages are merged when we paginate
forwards and backwards at the same time. Outwards pagination assumes that pages
that come in last should be merged before the first pages, so that the list
grows outwards in both directions. The default inwards pagination assumes that
@@ -237,8 +261,8 @@ last: 1, before: c => node 89, startCursor: d
With inwards merging the nodes will be in this order: `[1, 2, ..., 89, 99]`
And with outwards merging: `[..., 89, 99, 1, 2, ...]`

The helper happily supports schemas that return nodes rather than
individually-cursored edges. For each paginated type, we must either
always request nodes, or always request edges -- otherwise the lists
cannot be stitched together.

134 changes: 131 additions & 3 deletions exchanges/graphcache/src/extras/simplePagination.test.ts
@@ -3,7 +3,7 @@ import { query, write } from '../operations';
import { Store } from '../store';
import { simplePagination } from './simplePagination';

it('works with forward pagination', () => {
const Pagination = gql`
query($skip: Number, $limit: Number) {
persons(skip: $skip, limit: $limit) {
@@ -73,6 +73,76 @@ it('works with simple pagination', () => {
expect(pageThreeResult.data).toEqual(null);
});

it('works with backwards pagination', () => {
const Pagination = gql`
query($skip: Number, $limit: Number) {
persons(skip: $skip, limit: $limit) {
__typename
id
name
}
}
`;

const store = new Store({
resolvers: {
Query: {
persons: simplePagination({ mergeMode: 'before' }),
},
},
});

const pageOne = {
__typename: 'Query',
persons: [
{ id: 7, name: 'Jovi', __typename: 'Person' },
{ id: 8, name: 'Phil', __typename: 'Person' },
{ id: 9, name: 'Andy', __typename: 'Person' },
],
};

const pageTwo = {
__typename: 'Query',
persons: [
{ id: 4, name: 'Kadi', __typename: 'Person' },
{ id: 5, name: 'Dom', __typename: 'Person' },
{ id: 6, name: 'Sofia', __typename: 'Person' },
],
};

write(
store,
{ query: Pagination, variables: { skip: 0, limit: 3 } },
pageOne
);
const pageOneResult = query(store, {
query: Pagination,
variables: { skip: 0, limit: 3 },
});
expect(pageOneResult.data).toEqual(pageOne);

write(
store,
{ query: Pagination, variables: { skip: 3, limit: 3 } },
pageTwo
);

const pageTwoResult = query(store, {
query: Pagination,
variables: { skip: 3, limit: 3 },
});
expect((pageTwoResult.data as any).persons).toEqual([
...pageTwo.persons,
...pageOne.persons,
]);

const pageThreeResult = query(store, {
query: Pagination,
variables: { skip: 6, limit: 3 },
});
expect(pageThreeResult.data).toEqual(null);
});

it('handles duplicates', () => {
const Pagination = gql`
query($skip: Number, $limit: Number) {
@@ -182,7 +252,7 @@ it('should not return previous result when adding a parameter', () => {
expect(res.data).toEqual({ __typename: 'Query', persons: [] });
});

it('should preserve the correct order in forward pagination', () => {
const Pagination = gql`
query($skip: Number, $limit: Number) {
persons(skip: $skip, limit: $limit) {
@@ -196,7 +266,7 @@ const store = new Store({
const store = new Store({
resolvers: {
Query: {
persons: simplePagination({ mergeMode: 'after' }),
},
},
});
@@ -240,6 +310,64 @@ });
});
});

it('should preserve the correct order in backward pagination', () => {
const Pagination = gql`
query($skip: Number, $limit: Number) {
persons(skip: $skip, limit: $limit) {
__typename
id
name
}
}
`;

const store = new Store({
resolvers: {
Query: {
persons: simplePagination({ mergeMode: 'before' }),
},
},
});

const pageOne = {
__typename: 'Query',
persons: [
{ id: 7, name: 'Jovi', __typename: 'Person' },
{ id: 8, name: 'Phil', __typename: 'Person' },
{ id: 9, name: 'Andy', __typename: 'Person' },
],
};

const pageTwo = {
__typename: 'Query',
persons: [
{ id: 4, name: 'Kadi', __typename: 'Person' },
{ id: 5, name: 'Dom', __typename: 'Person' },
{ id: 6, name: 'Sofia', __typename: 'Person' },
],
};

write(
store,
{ query: Pagination, variables: { skip: 3, limit: 3 } },
pageTwo
);
write(
store,
{ query: Pagination, variables: { skip: 0, limit: 3 } },
pageOne
);

const result = query(store, {
query: Pagination,
variables: { skip: 3, limit: 3 },
});
expect(result.data).toEqual({
__typename: 'Query',
persons: [...pageTwo.persons, ...pageOne.persons],
});
});

it('prevents overlapping of pagination on different arguments', () => {
const Pagination = gql`
query($skip: Number, $limit: Number, $filter: string) {
32 changes: 18 additions & 14 deletions exchanges/graphcache/src/extras/simplePagination.ts
@@ -1,14 +1,18 @@
import { stringifyVariables } from '@urql/core';
import { Resolver, Variables, NullArray } from '../types';

export type MergeMode = 'before' | 'after';

export interface PaginationParams {
offsetArgument?: string;
limitArgument?: string;
mergeMode?: MergeMode;
}

export const simplePagination = ({
offsetArgument = 'skip',
limitArgument = 'limit',
mergeMode = 'after',
}: PaginationParams = {}): Resolver => {
const compareArgs = (
fieldArgs: Variables,
@@ -74,21 +78,21 @@ export const simplePagination = ({
continue;
}

const tempResult: NullArray<string> = [];

for (let j = 0; j < links.length; j++) {
  const link = links[j];
  if (visited.has(link)) continue;
  tempResult.push(link);
  visited.add(link);
}

if (
  (!prevOffset || currentOffset > prevOffset) ===
  (mergeMode === 'after')
) {
  result = [...result, ...tempResult];
} else {
  result = [...tempResult, ...result];
}
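The combined condition in the resolver can be read as "append when the page's direction matches the merge mode". A hedged sketch of just that decision, with an illustrative `shouldAppend` name that is not part of the source:

```js
// A page with a higher offset than the previous one is a "forward" page;
// forward pages are appended under mergeMode 'after' and prepended under
// 'before', and vice versa for backward pages.
function shouldAppend(prevOffset, currentOffset, mergeMode) {
  const isForwardPage = !prevOffset || currentOffset > prevOffset;
  return isForwardPage === (mergeMode === 'after');
}

console.log(shouldAppend(undefined, 0, 'after')); // true – first page appends
console.log(shouldAppend(3, 6, 'after')); // true – forward page appends
console.log(shouldAppend(3, 6, 'before')); // false – forward page prepends
console.log(shouldAppend(6, 3, 'before')); // true – backward page appends
```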
