Imperative read and write methods #1310

Merged: 36 commits merged into master from feat/imperative-read-write on Mar 1, 2017
Conversation

@calebmer (Contributor) commented Feb 18, 2017

This PR adds imperative read and write methods to ApolloClient. It is based on #1280, which was closed when I renamed the branch (initially this PR was only going to have a read implementation).

This PR contains the read method of #1280 and also adds write. Some changes to read from #1280 include a move to positional arguments over named arguments (it just looks nicer), and removing the returnPartialData option since we plan to remove that soonish anyway.

The PR adds 6 methods which are all variations of imperative reads and writes: readQuery, readFragment, writeQuery, writeFragment, writeQueryOptimistically, and writeFragmentOptimistically. All of these methods have different signatures, which warrants keeping them as separate methods instead of combining everything into two read and write methods. If we wanted to combine everything into two methods there would be a lot of ugly duck typing which could lead to bugs. The multiple-methods approach makes it very explicit what each function is going to do with regard to the cache, and is easier to remember. Not to mention that VS Code type-ahead is much more useful! Also, if we wanted to combine write(Query|Fragment) with write(Query|Fragment)Optimistically then we would need to decide what to return, which isn't super easy.
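For reference, a rough sketch of the general shape of those six methods as proposed here (positional arguments; later in the thread the API moves to named arguments and the two Optimistically variants are dropped). It assumes an ApolloClient instance named client and a cached object whose data id is 'Todo42'; the optimistic variants presumably return something that can later be used to roll the write back, which is exactly the return-value question mentioned above:

const listQuery = gql`{ todos { text completed } }`;
const todoFragment = gql`fragment todo on Todo { text completed }`;

// Reads start either at the root query or at a specific cached id.
const list = client.readQuery(listQuery);
const todo = client.readFragment('Todo42', todoFragment);

// Writes mirror the reads, with the data to write as the first argument.
client.writeQuery({ todos: [{ text: 'Clean up', completed: false }] }, listQuery);
client.writeFragment({ text: 'Clean up', completed: false }, 'Todo42', todoFragment);

// The optimistic variants take the same arguments but return a value intended
// for rolling the write back later (exact return shape not pinned down here).
client.writeQueryOptimistically({ todos: [] }, listQuery);
client.writeFragmentOptimistically({ text: 'Clean up', completed: true }, 'Todo42', todoFragment);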

We decided on an imperative store read/write design in #1224.

  • If this PR is a new feature, reference an issue where a consensus about the design was reached (not necessary for small changes)
  • Make sure all of the significant new logic is covered by tests
  • Rebase your changes on master so that they can be merged easily
  • Make sure all tests and linter rules pass
  • Update CHANGELOG.md with your change
  • If this was a change that affects the external API, update the docs and post a link to the PR in the discussion

@helfer (Contributor) left a comment

This is really great @calebmer! I wrote a lot of comments, but that's only because you wrote a ton of tests, which is exactly what we want.
However, if we had some lower-level unit tests for new functions (like getFragmentQuery and rollbackOptimisticData) we could get away with writing fewer integration tests (which the current tests essentially are).

Also, this is a pretty major addition to the API, so it should go along with an addition to the docs.

I imagine this feature will be primarily used for updating the store during a mutation. One option would be to add a function argument to mutations that lets people do that in the style (read, write, <etc>) => {}, but maybe there's a simpler solution. In any case, we should document it well.

CHANGELOG.md Outdated
@@ -4,6 +4,7 @@ Expect active development and potentially significant breaking changes in the `0

### vNEXT
- Prefer stale data over partial data in cases where a user would previously get an error. [PR #1306](https://github.com/apollographql/apollo-client/pull/1306)
- Add imperative read and write methods to provide the user the power to interact directly with the GraphQL data cache. [PR #1310](https://github.com/apollographql/apollo-client/pull/1310)
Contributor

I realized that we could justifiably also call this declarative rather than imperative, because we let people operate at the higher level of abstraction (denormalized data vs. normalized data).

Contributor Author

Or maybe just use plain language like “direct cache manipulation” instead of fighting the declarative vs. imperative fight 😉

* query.
*
* You must pass in a GraphQL document with a single fragment or a document
* with multiple fragments that represent what you are writing. If you pass
Contributor

Should be "reading", not "writing"

Contributor Author

👍

}

/**
* Writes some data to the store without that data being the result of a
Contributor

I don't think the "result of a network request" is important.

/**
* Writes some data to the store without that data being the result of a
* network request. This method will start at the root query. To start at a a
* specific id returned by `dataIdFromObject` then use `writeFragment`.
Contributor

There's an extra "a" and "then".

/**
* Writes some data to the store without that data being the result of a
* network request. This method will write to a GraphQL fragment from any
* arbitrary id that is currently cached. Unlike `writeQuery` which will only
Contributor

Make that a comma instead of a full stop.

}, 'Found 3 fragments. `fragmentName` must be provided when there are more then 1 fragments.');
});

it('will read some data from state with variables', () => {
Contributor

This test seems to have been duplicated (see line 77)

Contributor Author

You have to look closely, but they are in different suites 😊

One is for readQuery and the other is for readFragment. I did basically just copy/paste the tests and swap out readQuery for readFragment. Same with the write tests.

assert.deepEqual(getOptimisticData(client), []);
});

it('will throw an error when there is no fragment', () => {
Contributor

Again, let's move these to the front.

* fragment specified by the provided `fragmentName`. If there is more then one
* fragment, but a `fragmentName` was not defined then an error will be thrown.
*/
export function getFragmentQuery(document: DocumentNode, fragmentName?: string): DocumentNode {
Contributor

This function should make sure that there isn't already an operation definition in the document and throw an error if there is.

Contributor

Also, I think there are no tests for this function currently? If that's true, we should add some.

Contributor Author

Correct. I didn’t add the operation check because an error will get thrown later on in other functions, but it does make sense to add one. I’ll also add tests.
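For illustration, a minimal sketch of the kind of check being discussed (hypothetical helper name, not necessarily what will land in the PR):

import { DocumentNode } from 'graphql';

// `getFragmentQuery` expects a document made up only of fragments, so throw
// early if an operation definition snuck in.
function assertNoOperationDefinition(document: DocumentNode): void {
  document.definitions.forEach(definition => {
    if (definition.kind === 'OperationDefinition') {
      throw new Error(
        `Found a ${definition.operation} operation in a document that should only contain fragments.`,
      );
    }
  });
}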

* in a document with multiple fragments then you must also specify a
* `fragmentName`.
*/
public readFragment<FragmentType>(
Contributor

Just out of curiosity: what error gets thrown if you provide an id that doesn't exist in the store? I have a hunch that we might want to improve that error message for this function.

Contributor Author

Good catch. It would return an empty object because the fragment pattern matching wouldn’t match anything. I made it return null instead and added a test 👍
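For example (positional-argument form as used at this point in the thread; 'Todo999' is a made-up id that is not in the store):

const missing = client.readFragment(
  'Todo999',
  gql`fragment todo on Todo { text completed }`,
);
// missing === null, rather than the empty object it would have been before.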

* The filter function should return true for all items that we want to
* rollback.
*/
function rollbackOptimisticData (
Contributor

We should write some unit tests for this function as well.

Contributor Author

I don’t think we have any tests against our reducers 😣. Correct me if I’m wrong.

I personally think integration tests are more than enough in this case.

Contributor

Yeah, we don't currently have tests for reducers, but we really should have them.

Contributor

We don't have unit tests for all of the reducers, but we do have unit tests for the existing read/write methods.

@calebmer (Contributor Author)
Made the changes you requested, including adding parameter documentation to the methods on ApolloClient. I recommend you read that as it will generate documentation on the website.

You mentioned a couple of times that you thought you saw duplicate tests. We do pretty much have the same 3 tests duplicated 6 times; the key is the suite. For example, we want to make sure that the same tests that pass against readQuery also pass against readFragment. That’s why you may start seeing some duplicate logic. If we were using Jest and we could write tests that assert certain mocked functions get called, I’d be more comfortable only writing a few tests per method, but because that’s not the case, integration tests against a few core functionalities are more than sufficient for each method. I’d rather have too many tests than not enough 😉

We probably should add that function to mutations because we can also provide an abstraction over optimistic mutations where changes get automatically rolled back after the real result comes in.

I’ll update the docs and add a link to that PR here when it is ready.

@calebmer (Contributor Author)
Here’s a proposal for how we can integrate this feature nicely with our mutations API. First, we should add writeTransaction and writeTransactionOptimistically methods. They would be used like so:

client.writeTransaction(transaction => {
  const query = transaction.readQuery(gql`{ ... }`);
  transaction.writeFragment(data, id, gql`{ ... }`);
});

We would expect all of the transaction methods to be executed synchronously so that we could batch them all into one action to the store. client.write* would then just become a transaction of one modification.

Then we could add an option to mutations, update, which will use client.writeTransaction when the mutation resolves and client.writeTransactionOptimistically for the optimistic response. The API would then look like:

client.mutate({
  ...,
  update: ({ data, variables }, transaction) => {
    const query = transaction.readQuery(gql`{ ... }`);
    transaction.writeFragment(data, id, gql`{ ... }`);
  },
});

Of course we could recommend users call transaction something different like tx, store, or proxy (the last of which is taken from Relay).

Thoughts?

@stubailo (Contributor)
The transaction thing seems pretty sweet!

@helfer (Contributor) commented Feb 23, 2017

I think that's a great proposal and you should go ahead and implement it!

I was thinking about automatically mapping the calls to read and write inside update to a transaction and making them optimistic or not depending on whether this is an optimistic result, but I think making the transaction explicit as you proposed is nicer. We still have to implement the proper mechanics around making the transaction optimistic first, then rolling that back and applying the real transaction, but once you have writeTransaction and writeTransactionOptimistically that should be very easy to do.

I believe that currently undoing an optimistic result and applying the real result is one action, so the observer only triggers once. We should aim to provide the same behavior with transactions to avoid unnecessary re-rendering or worse (flashing UI elements).

@helfer (Contributor) commented Feb 25, 2017

@calebmer let's introduce transactions in a different PR, but I'll wait until the docs are updated before merging this one here.

@calebmer (Contributor Author) commented Feb 27, 2017

Just pushed the code which adds the API for:

client.mutate({
  mutation: ...,
  update: (proxy, result) => { ... },
});

It uses a transaction under the hood, but I ended up not exposing transactions through a client.writeTransaction method as I originally intended because I didn’t want to write tests, and I couldn’t think of a use case 😊

The code is a lot different now as the implementation was mostly moved out of ApolloClient.ts into data/proxy.ts, so even if you reviewed before I recommend you take another look.

The name proxy is directly stolen from Relay 2, and the imperative API on mutations turned out to look very similar to what the Relay team has proposed in the past. I believe that our API feels more natural than what Relay 2 has because the user can use GraphQL queries and fragments directly. However, our implementation is much slower than Relay 2’s.

@stubailo (Contributor) left a comment

Wow, this looks like an awesome change. Really great stuff. Just had a few questions, also about the API.

CHANGELOG.md Outdated
@@ -4,6 +4,7 @@ Expect active development and potentially significant breaking changes in the `0

### vNEXT
- Prefer stale data over partial data in cases where a user would previously get an error. [PR #1306](https://github.com/apollographql/apollo-client/pull/1306)
- Add direct cache manipulation read and write methods to provide the user the power to interact with Apollo’s GraphQL data representation outside of mutations. [PR #1310](https://github.com/apollographql/apollo-client/pull/1310)
Contributor

Need to rebase the changelog

*
* @param variables Any variables that your GraphQL fragments depend on.
*/
public writeFragment(
Contributor

How do we feel about making this take keyword arguments instead of 5 positional arguments? Long lists of positional arguments are fine in TS/Flow but I find them very hard to use in JS.

@calebmer (Contributor Author) commented Feb 27, 2017

They were named at first, I just really like the look of:

client.readQuery(gql`{ ... }`);
client.writeQuery({ ... }, gql`{ ... }`);

vs.

client.readQuery({
  query: gql`{ ... }`,
});
client.writeQuery({
  data: { ... },
  query: gql`{ ... }`,
});

and so I decided to make everything positional instead of named. I don’t actually think it is that bad, with writeFragment being the longest. Below are the most common case and the longest case for both readFragment and writeFragment:

// Common

client.readFragment(
  'Todo42',
  gql`fragment todo on Todo { text, completed }`,
);

client.writeFragment(
  { text: 'Clean up', completed: false },
  'Todo42',
  gql`fragment todo on Todo { text, completed }`,
);

// Uncommon

client.readFragment(
  'Todo42',
  gql`
    fragment todo1 on Todo { ...todo2 @include(if: $isTrue) }
    fragment todo2 on Todo { text, completed }
  `,
  'todo1',
  { isTrue: true },
);

client.writeFragment(
  { text: 'Clean up', completed: false },
  'Todo42',
  gql`
    fragment todo1 on Todo { ...todo2 @include(if: $isTrue) }
    fragment todo2 on Todo { text, completed }
  `,
  'todo1',
  { isTrue: true },
);

Most of the time you will only have one fragment and therefore won’t need the fragmentName, and also you would rarely have variables.

I understand the desire for named arguments. I just really like how the 80% case looks with positional arguments 😊. Let me know if you still think named arguments are worth it.

Contributor

We can always change it later.

However, looking at that one example, can we reorder the args to always have the same order?

  1. ID (required)
  2. Fragment (required)
  3. Data (required for write)
  4. Variables (used sometimes)
  5. Fragment name (used very rarely IMO)

I think there are a few heuristics for ordering:

  1. How commonly will it be used? Required args should go first.
  2. How long is the argument? In JS I think it looks nicer to have a small argument, like an ID, be in the first position.

However in some sense the fact that we have to think about ordering is a bit unfortunate. At the end of the day it doesn't matter that much, but this seems like the kind of API I would need to look up in the docs frequently to remember how to call it.

Contributor

I prefer named arguments even if it makes the 80% use-case more verbose, because it makes the code easier to read. Rather than having to guess what each arg represents, you know right away because it's named.

} {
const optimisticWriteId = (this.optimisticWriteId++).toString();
this.initStore();
this.store.dispatch({
Contributor

Why are these implemented via a direct dispatch, while the others use the proxy?

Contributor Author

Two reasons:

  1. write(Query|Fragment)Optimistically are not included in the proxy. This is intentional because a proxy may implicitly write optimistically depending on the context it is in.
  2. this.optimisticWriteId lives on ApolloClient to generate globally unique ids. This is something we would need to feed into a proxy.

However, now that you mention it, removing write(Query|Fragment)Optimistically would actually remove a bit of complexity from this PR. They were originally added before we had the update function on mutate; however, that now serves as a much better way to manage optimistic writes.

I don’t think anyone would miss these methods if we removed them, and it would reduce complexity. I’m going to go ahead and remove these two methods. If you think there is a case for them we can add them back 👍

Contributor

Sounds great to me! So what you're saying is, people can still use optimistic writes from the update callback in a mutation, but there will be no way to do it in a totally standalone way. That works for me for a first pass.

The only reason you would need optimistic writes otherwise is if you were using REST to do mutations and wanted to do an optimistic update to your store before refetching or something, but that is a somewhat niche thing and can be added later as a feature.
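To make that concrete, a hedged sketch of the update-callback pattern described above (names like 'TodoList5' and createTodo are illustrative; positional arguments as used at this point in the thread, and result is assumed to carry the mutation result under data):

const todoListFragment = gql`
  fragment todoList on TodoList { todos { id text completed __typename } }
`;

client.mutate({
  mutation: gql`
    mutation CreateTodo($text: String!) {
      createTodo(text: $text) { id text completed __typename }
    }
  `,
  variables: { text: 'Clean up' },
  // With an optimisticResponse, the update callback can run for the optimistic
  // data and again for the real result, so no standalone optimistic write
  // method is needed.
  optimisticResponse: {
    createTodo: { id: 'temp-id', text: 'Clean up', completed: false, __typename: 'Todo' },
  },
  update: (proxy, result) => {
    const list = proxy.readFragment('TodoList5', todoListFragment);
    proxy.writeFragment(
      { ...list, todos: [result.data.createTodo, ...list.todos] },
      'TodoList5',
      todoListFragment,
    );
  },
});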

* transaction. The actions are not ready for dispatch in Redux, however. The
* `type` must be added before that.
*/
public finish(): Array<DataWrite> {
Contributor

Should this be called commit?

Contributor Author

This is private API, so hopefully the user will never actually call it.

I think finish is a better name, however, as this method doesn’t actually do any mutation. If you call finish it doesn’t write anything to the store. It just returns the writes that were collected during the transaction’s lifetime so that the consumer may actually commit the transaction.

The name finish is also derived from the isFinished property which is used to assert in every method that the transaction has not yet finished when the method is called.

Contributor

SGTM


gql`fragment todoList on TodoList { todos { id text completed __typename } }`,
);

proxy.writeFragment(
Contributor

This kind of makes me want an updateFragment function:

proxy.updateFragment('TodoList5', gql`...`,
    (data) => ({ ...data, todos: [mResult.data.createTodo, ...data.todos] }));

Either way this test should share the fragment with a variable rather than reproducing it twice.

Contributor Author

I’m down for that. The one thing I don’t want, however, is for users to start putting complex logic in the updateFragment function. For instance they may read other results in the updater function:

proxy.updateFragment('TodoList5', gql`...`, data => {
  const data2 = proxy.readFragment('TodoList6', gql`...`);
  return { data2, data, mResult };
});

Another concern is typing. Technically that method would need to take two other arguments: fragmentName and variables. That can be worked around; however, it would just complicate the typings.

I think these concerns are probably not blockers. Let me know if you want this added 👍

We may also consider waiting until a user asks for this feature (I doubt it will take long).

Contributor

It just seems like the primary use cases for readFragment and writeFragment basically require using them together.

Does TypeScript/Flow support the concept of func(arg1, [arg2], callback)? Like can you say "The last argument is always a function, regardless of how many one passes in the middle"?

Contributor Author

No, but we could define each variant of the function and achieve type safety that way. However, it gets tricky to implement 😉

Since we may be moving to named arguments this may be a non-issue.

Contributor

I don't think we really need an updateFragment function. read and write fit very nicely with the "imperative" keyword. If after a while it turns out we still want updateFragment we can easily implement it on top of that imho.
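If it ever becomes worthwhile, a sketch of how such a helper could be layered on top of read and write (hypothetical function, not part of this PR; positional argument order as used at this point in the thread, and todoListFragment/mResult refer to the fragment and mutation result from the test discussed above):

// Read the current fragment data, apply an updater function, and write the
// result back under the same id and fragment.
function updateFragment(proxy, id, fragment, updater) {
  const current = proxy.readFragment(id, fragment);
  proxy.writeFragment(updater(current), id, fragment);
}

updateFragment(proxy, 'TodoList5', todoListFragment, data => ({
  ...data,
  todos: [mResult.data.createTodo, ...data.todos],
}));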

@calebmer (Contributor Author)
Open questions that need to get answered to merge this PR:

  • Should we use named arguments instead of positional arguments?
  • Should we add an updateQuery/updateFragment function?

We should also probably ask the question:

  • What happens if we can’t resolve readQuery/readFragment synchronously? For example, we have a shared store with iOS, or we are progressively reading from a cache on disk.

Should have probably cc’d @martijnwalraven sooner, especially since this is an open question we should address.

@martijnwalraven (Contributor)
@calebmer: Sounds like we should really sync up on this! I've only looked at the comments so far, but it looks great.

We need to think about this more, but my first thought is that read methods should be async to avoid blocking when reading from the native cache. I realize that makes the API harder to use however. Can we assume we're running in an environment that supports promises?

@stubailo (Contributor)
> read methods should be async to avoid blocking when reading from the native cache

I don't think we should do this as part of the 1.0 release - at some point we'll want to make all of the read APIs async but I don't think now is the best time to do it.

> Should we use named arguments instead of positional arguments?

I'll leave it to @helfer to make a final decision.

> Should we add an updateQuery/updateFragment function?

Let's go with "no" for now, since we can always add it later very easily. But we should make sure to clearly document that reading and writing with the same fragment is a common way to use it.

@helfer (Contributor) left a comment

This is really nice, I think it provides a great API for imperative reads / writes.

As for the open questions:

  • we should use named arguments
  • we shouldn't implement updateQuery for now.
  • we should keep async reads + writes in mind, but this is not the PR to implement it, because it will probably require lots of code changes and would break many of our current tests.

The main thing I think needs fixing with this PR (unless I'm overruled) is that currently transactions don't seem to let you read your own writes during a transaction. This is not what you'd get in SQL transactions, it's also not what most users would expect, and it's not even consistent with how the APOLLO_WRITE action is applied to the store (even though that doesn't matter functionally). I think the simplest way of solving this is to write to a separate clone of the store during the transaction, and then read from that.
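Concretely, the read-your-own-writes behavior being asked for would look roughly like this (counterQuery and incrementMutation are illustrative placeholders, and the positional argument order from this point in the thread is used):

const counterQuery = gql`{ counter }`;

client.mutate({
  mutation: incrementMutation,
  update: proxy => {
    proxy.writeQuery({ counter: 1 }, counterQuery);
    const { counter } = proxy.readQuery(counterQuery);
    // counter === 1: the read sees the write made earlier in the same
    // transaction, while the underlying store is only updated once the
    // transaction finishes.
  },
});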

}

/**
* Creates an action to be consumed after `finish` is called that writes
Contributor

Strictly speaking this doesn't return an action, at least not a Redux action.

Contributor Author

In one uncommitted iteration it did. I’ll change the phrasing.


@@ -183,6 +187,23 @@ export function data(
});
}

// If the mutation has some writes associated with it then we need to
// apply those writes to the store by running this reducer again with a
// write action.
Contributor

It might feel less recursive if we split the writeAction reducer code out into a separate function, but it's not critical.

Contributor Author

This is a pattern that is also used by optimistic data.

@calebmer (Contributor Author) commented Feb 28, 2017

I agree with both of your changes @helfer: named arguments, and making sure users can read their own writes. I’ll work on those now 👍

@calebmer (Contributor Author)
@helfer arguments are now named instead of positional, and writes are applied locally inside of the transaction as effectively a “preview” of what the data currently looks like. One thought is that we might want to apply those writes lazily, or in the future optimize so that we only “write” when those specific keys are read (something we could do with more advanced dependency tracking).

Let me know if you think this PR needs anything else (besides docs which I should have a PR for tomorrow) 👍

@stubailo (Contributor)
Wow this looks awesome.

@helfer (Contributor) commented Mar 1, 2017

This looks great! I'll add two tests and then merge it:

1) A test that checks that reads during a transaction see the writes that happened before them
2) A test that makes sure the original store isn't modified during the transaction (now that we're writing to the clone)

edit: never mind, I see there's already a test that does that; it seems I missed it in the diff earlier.

@helfer (Contributor) commented Mar 1, 2017

Great stuff @calebmer !

@helfer merged commit 9061b62 into master on Mar 1, 2017
@helfer deleted the feat/imperative-read-write branch on Mar 1, 2017 04:59
@helfer removed the in progress label on Mar 1, 2017
@github-actions bot locked as resolved and limited conversation to collaborators on Feb 2, 2023