Placeholder issue for feedback on @defer & @stream proposal (#2848)

Comments
@IvanGoncharov It would be helpful if we could cut a new

@danielrearden Planning to have bug fix release for
@danielrearden pointed out this thread after I added feedback on the PR. My question was around whether the
Commenting on @danielrearden's followup:
I don't think it'd necessarily need to be an ahead-of-time validation (though that could be a nice feature). I was thinking it'd be similar to lines 155 to 161 in cd273ad, except in this case it'd be adding the check to the execute function:

```ts
export function execute(args: ExecutionArgs): PromiseOrValue<ExecutionResult> {
  const result = executeImpl(args);
  // Assert that the execution wasn't an async iterable.
  if (isAsyncIterable(result)) {
    result.return();
    throw new Error('Streaming GraphQL must be called with executeStream.');
  }
  return result;
}
```

Mostly came up thinking of situations where I've used the
@tgriesser we did discuss a few similar ideas, and at one point we did implement separate experimental entry points. I think long term, most server implementations would want to support AsyncIterable results. Adding a separate function with the current return type (`PromiseOrValue<ExecutionResult>`) is one option. A new function that always returns an AsyncIterable (with just one result when defer/stream is not used) is an interesting idea that was brought up at one of the working group meetings. I'm not opposed to this approach if this is the direction the library maintainers want to take. I think it does add some overhead, as servers will either have to hold on to the first response to see if the iterable closes, or send every response over a chunked-encoded connection.
I implemented @defer & @stream a while ago and wanted to share my experience.

**list field directive location**

I think it would be beneficial if we added a new directive location "LIST_FIELD" to the spec.

**content negotiation**

First, I think it would make sense to add custom vendor headers for both client and server to indicate whether they support defer & stream. Possible implementations:

- The client indicates it is capable of version 2.
- The client supports streaming responses.
- Server response header:

Why do I think this is important? GraphQL operations might be re-used by different clients or stored. Therefore it's not always the case that the client who initially wrote the operation will also send it. By using content negotiation, clients and servers can decide whether or not to "upgrade" the request to a defer/stream request. Another use case is proxies & API gateways: it might not be desirable for an intermediary to ask for a deferred response. With content negotiation, it could still use the same operation but disable defer+stream via a header. If the client & server don't use the specified headers, we can assume they predate defer/stream and disable it by default, making the change backwards compatible.

**response format**

The current RFC proposes a custom format with "path", "data" and "hasNext". I propose instead: the first response should be an ordinary GraphQL response object. Subsequent streaming responses will be JSON Patch objects, delimited by two newline characters. The final response will be followed by three newline characters, indicating the end of the stream. Using newline characters instead of "hasNext" makes it easy for clients to understand the state of the stream without looking into the payload directly.
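The newline-delimited framing described above can be sketched with a small parser. This is an illustrative sketch of the proposal, not an implementation from any library; the function name and the buffered-string input are assumptions.

```typescript
type ParsedStream = { initial: unknown; patches: unknown[]; done: boolean };

// Parse a buffered stream in the proposed format: payloads separated by two
// newlines, with three trailing newlines marking the end of the stream.
function parseDelimitedStream(buffer: string): ParsedStream {
  const done = buffer.endsWith("\n\n\n");
  const body = done ? buffer.slice(0, -3) : buffer;
  const parts = body.split("\n\n").filter((p) => p.length > 0);
  if (parts.length === 0) {
    throw new Error("empty stream");
  }
  return {
    initial: JSON.parse(parts[0]), // ordinary GraphQL response object
    patches: parts.slice(1).map((p) => JSON.parse(p)), // JSON Patch documents
    done,
  };
}
```

A real client would of course parse incrementally as chunks arrive rather than buffering the whole response first.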
Tbh I'm not yet sure what is the best way to represent this.
Would we ever need any other JSON Patch operation than "add"? If not, we could just:

Then it would be really easy to generate a JSON Patch from the GQL format:

```ts
const oldGqlFormat = {
  data: data,
  path: gqlPath,
  label,
  hasNext,
  errors,
  extensions,
};

const newGqlFormat = {
  value: data,
  path: jsonPointer,
  label,
  hasNext,
  errors,
  extensions,
};

const jsonPatch = [
  {
    op: 'add',
    // We would also be spreading a bunch of other stuff here,
    // but JSON Patch spec says they will be ignored so we should be ok
    ...newGqlFormat,
  },
];
```

Going full JSON Patch also sounds interesting. Maybe we could also use the "replace" operation for updating existing data in some other proposal.
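The snippet above renames `path: gqlPath` to `path: jsonPointer` without showing the conversion. A GraphQL response path (an array of field names and list indices) can be turned into an RFC 6901 JSON Pointer roughly like this; the helper name is hypothetical, not part of the proposal:

```typescript
// Convert a GraphQL response path to a JSON Pointer string (RFC 6901).
function pathToJsonPointer(path: ReadonlyArray<string | number>): string {
  if (path.length === 0) {
    return ""; // the empty pointer addresses the whole document
  }
  return (
    "/" +
    path
      // RFC 6901 requires "~" and "/" inside a segment to be escaped.
      .map((segment) => String(segment).replace(/~/g, "~0").replace(/\//g, "~1"))
      .join("/")
  );
}
```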
@AlicanC I'm already doing exactly what you're proposing. This also works well for adding errors while resolving deferred/streamed responses. Here's a snippet of how a client looks:

```ts
let hasMessages = false;
let lastMessage: any;
const source = new EventSource("example.com/graphql");
source.onmessage = (e) => {
  const nextMessage = JSON.parse(e.data);
  if (lastMessage !== undefined && nextMessage instanceof Array) {
    lastMessage = applyPatch(lastMessage, nextMessage).newDocument;
    opts.next(lastMessage);
  } else {
    opts.next(nextMessage);
    lastMessage = nextMessage;
  }
  hasMessages = true;
};
```

It's based on EventSource, but EventSource is just an implementation detail. Checking whether nextMessage is an Array is sufficient in this case, because JSON Patch documents are always arrays, whereas the initial GraphQL response is a JSON object. applyPatch is from the npm package "fast-json-patch". I didn't intentionally choose this approach.
There's one more topic I'd like to talk about, and that's client performance. While what you propose looks like a working solution, it doesn't scale under high load. What happens is that the JavaScript client gets completely overwhelmed by the number of patches to apply. The solution was rather simple: I introduced batching with a default flush interval. If you want to get a bit more context, I've written about it here: I believe that if you don't address this issue, you'll run into problems once you scale streams up.
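The batching idea described here can be sketched as follows. The class name and the 16 ms default flush interval are illustrative assumptions, not details from the post:

```typescript
// Buffer incoming patches and apply them in batches on a flush interval,
// so a flood of stream payloads triggers few expensive apply/render passes.
class PatchBatcher<T> {
  private buffer: T[] = [];
  private timer: ReturnType<typeof setInterval> | null = null;

  constructor(
    private onFlush: (patches: T[]) => void,
    private flushIntervalMs = 16,
  ) {}

  push(patch: T): void {
    this.buffer.push(patch);
    if (this.timer === null) {
      this.timer = setInterval(() => this.flush(), this.flushIntervalMs);
    }
  }

  flush(): void {
    if (this.buffer.length > 0) {
      const batch = this.buffer;
      this.buffer = [];
      this.onFlush(batch); // apply all buffered patches in one pass
    }
  }

  stop(): void {
    if (this.timer !== null) {
      clearInterval(this.timer);
      this.timer = null;
    }
    this.flush(); // drain anything still buffered
  }
}
```

`onFlush` would apply the whole batch (e.g. via fast-json-patch) and trigger a single re-render.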
@jensneuse thanks for the feedback! Here are our thoughts on these topics.

**content-negotiation**

In this case, could the clients make use of the `if` argument?

```graphql
query SharedQuery($shouldStream: Boolean) {
  myListField @stream(if: $shouldStream, initialCount: 0) {
    name
  }
}
```

I think this would solve this issue without relying on out-of-band information being passed?

**Response format**

**hasNext**

I think there is benefit from `hasNext`. Since many GraphQL clients are transport agnostic, I think it's convenient that there is a standard way to determine whether a request is still in flight, while keeping the transport code fully encapsulated.

**JSON Patch**

As @AlicanC said, the only JSON Patch operation that's needed is `add`. For example, this payload:

```json
{
  "data": {
    "firstName": "Rob",
    "lastName": "Richard",
    "username": "robrichard"
  },
  "path": ["user"],
  "hasNext": false
}
```

would be equivalent to this JSON Patch:

```json
[
  { "op": "add", "path": "/user/firstName", "value": "Rob" },
  { "op": "add", "path": "/user/lastName", "value": "Richard" },
  { "op": "add", "path": "/user/username", "value": "robrichard" }
]
```

Plus we need the additional fields for label, extensions, and errors. While JSON Patch libraries make it easy to apply patches to the response, many GraphQL clients like Relay and Apollo use a normalized store for data. So incoming patch results would not actually be applied to the previous payload, but rather normalized and inserted into the store. The GraphQL spec already has a definition for referencing response paths, currently used for errors. We are reusing this definition for incremental delivery, so I'd be hesitant to also introduce JSON Pointers into the spec.

**Client Performance**

Agreed that a flood of payloads can easily overwhelm clients. I think it could be handled client-side with debouncing if a large response is potentially expected. For example, Relay supports returning an array of payloads from the network layer. It will process all of the payloads into the store and only trigger one re-render of the React components.
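For clients that do apply payloads directly to the previous result rather than to a normalized store, merging a `{ data, path }` payload using the GraphQL response path might look like this minimal sketch (type and function names are assumptions, not graphql-js APIs):

```typescript
type IncrementalPayload = {
  data: unknown;
  path: ReadonlyArray<string | number>;
  hasNext: boolean;
};

// Merge an incremental payload into a previously received result object,
// addressing the target location by GraphQL response path (not JSON Pointer).
function mergePayload(target: any, payload: IncrementalPayload): void {
  const { path, data } = payload;
  if (path.length === 0) {
    Object.assign(target, data); // root-level deferred fragment
    return;
  }
  // Walk to the parent of the addressed position.
  let node = target;
  for (const key of path.slice(0, -1)) {
    node = node[key];
  }
  const last = path[path.length - 1] as string | number;
  const existing = node[last];
  if (existing !== null && typeof existing === "object") {
    Object.assign(existing, data); // deferred fragment on an existing object
  } else {
    node[last] = data; // new stream item
  }
}
```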
First, thanks for having this conversation and being open to feedback. I understand your reasoning behind having hasNext; making it transport agnostic makes sense. Your arguments on JSON Patch also make a lot of sense.

On client-side debouncing:

On content negotiation:
On debouncing:

On content negotiation:
Both answers sound reasonable to me. What's your opinion on validation and directive locations? Will there be changes to the spec to make it explicit that @stream is only allowed on list fields?
@jensneuse I'm not sure what's involved in adding a new directive location. I'll look into it to see if it's feasible. If not, I'll definitely make sure the spec notes an explicit validation rule that forbids `@stream` on non-list fields.
I've put some more thought into the topic. Referring to this: https://github.com/graphql/graphql-spec/blob/master/rfcs/DeferStream.md#type-generation

Let's have a look at this example query:

```graphql
{
  viewer {
    id
    friends(first: 2) @stream(initialCount: 1, label: "friendStream") {
      id
    }
  }
  ...GroupAdminFragment @defer(label: "groupAdminDefer")
}

fragment GroupAdminFragment on Query {
  managed_groups {
    id
  }
}
```

The label is used for the client to figure out what parts of the query are still loading.

Second, I'm thinking about a good user experience when generating client code:

```ts
interface Response {
  viewer: {
    id: number;
    friends: {
      id: number;
    }[];
  };
  managed_groups?: {
    id: number;
  };
}
```

From this model, it's not possible to distinguish between null, undefined, or loading. I'm thinking of typesafe languages like Java, Go, etc. It feels like you'd have to make a choice: either the resulting model is confusing, or you'd have to abandon existing tooling for JSON parsing and build custom models. You could then partially parse the JSON and build the model. Depending on the labels, you'd be able to set loading state for parts of the model. Once more data arrives, you can continue updating the model and replacing the "loading" objects.

Wouldn't it be simpler if the JSON could just be parsed into a model which supports something similar to "loading-unions"? One example of how this could look in TypeScript:

```ts
interface Response {
  viewer: {
    id: number;
    friends: {
      id: number;
    }[];
  };
  managed_groups?: {
    __state: 'done';
    id: number;
  } | { __state: 'defer' };
}
```

I'm not yet sure if this is a good idea. This JSON can be parsed into the model with a standard JSON parser. TypeScript supports switching on __state via union types, which makes the dev experience pretty good. What do people think about this?
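As a sketch of how the loading-union above could be consumed in TypeScript; `renderAdminPanel` is an illustrative name, not generated code:

```typescript
// Mirrors the managed_groups member of the Response interface above.
type ManagedGroups =
  | { __state: "done"; id: number }
  | { __state: "defer" };

// Switching on the __state discriminant narrows the union, so fields of the
// "done" variant are only accessible once the data has actually arrived.
function renderAdminPanel(groups: ManagedGroups | undefined): string {
  if (groups === undefined) {
    return "not an admin";
  }
  switch (groups.__state) {
    case "defer":
      return "loading"; // deferred data has not arrived yet
    case "done":
      return `group ${groups.id}`;
  }
}
```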
@jensneuse You are correct that a fragment name plus the path could be used to uniquely identify payloads for deferred fragment spreads. However, there is the possibility for ambiguity with inline fragments:

```graphql
{
  viewer {
    id
    friends(first: 2) @stream(initialCount: 0, label: "friendStream") {
      id
      ... @defer(label: "friendNameDefer") {
        name
      }
    }
  }
}
```

In this case there will be two payloads with a path of `["viewer", "friends", 0]`.

In regards to loading unions, I think different clients will want to represent the loading states in different ways. In Relay, fragments are tied to UI components and separate types are generated for each fragment. Deferred fragments cause the associated component to handle loading states using React's Suspense feature, so a type-defined loading union is not necessary. Another example is the initial implementation of defer in Apollo, which passed metadata to components to indicate in-flight deferred data.

Type generation is definitely something that gets more complex with @defer. Ideally, separate types should be generated for the different deferred fragments. These types should convey the guarantee that multiple fields will arrive together when they are in the same deferred fragment. Clients can then use the label field to determine which fragments have been fulfilled, to ensure the application is using the correct types.

Maybe a client that did want to use this type of union to indicate loading status could do so at runtime? My biggest concern with baking it into the spec is that it too specifically encourages this pattern, rather than allowing clients to take advantage of the loading pattern that is most natural for their platform.
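One way a client could disambiguate payloads that share a path, as discussed above, is to key them on label and path together. A hypothetical helper, not from graphql-js:

```typescript
// Build a unique key for an incremental payload from its label and path.
// Two payloads with the same path but different labels get distinct keys.
function payloadKey(
  label: string | undefined,
  path: ReadonlyArray<string | number>,
): string {
  return `${label ?? ""}|${path.join(".")}`;
}
```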
I have a similar question to @tgriesser. If I understand correctly, if you call But what if you're executing GraphQL in a context where you don't have a transport that supports incremental delivery? It would seem like we need some way to tell graphql-js "I'm only able to handle a single result". Whether that's a new entry point like Alternatively, perhaps I'm thinking, for example, of how we'll support this in Apollo Server, which ships with integrations for a bunch of different HTTP libraries. It's quite likely that we won't be able to implement incremental delivery for every single supported HTTP library all at once; it would be great to be able to allow folks to use My main request is that there be some easy way to get a single response from what you get from
@glasser - one clarification on this. We noted in the spec that "GraphQL implementations that do not support these directives must not make them available via introspection.". Therefore, any query that contains @defer/@stream sent to a server that does not support these directives should result in an error. Quoting another comment from above on the reasoning:
That said, I do think that there should be an option to opt in or out of @defer & @stream in graphql-js. We did support that via flags on the Schema constructor in a previous version of the PR. I think it requires some more changes to graphql-js to support specified directives that are not always included in the schema.
@robrichard Ah, I missed that! If it's possible for a server implementation to introspect the schema and fail fast if the schema supports defer and stream, that seems sufficient. What this would mean in practice (assuming defer and stream are part of schemas in graphql@16 by default) is that anybody trying to combine graphql@16 with an Apollo Server transport that doesn't support incremental delivery would get a startup error, and they'd have to tell graphql@16 to disable defer and stream. This might be a bit of a harsh experience for upgrades, but at least it would communicate clearly to folks that a cool new feature is available but their chosen transport doesn't support it. I might suggest that it would be easiest for the broader ecosystem if
EDITED... Hi! Thanks again @robrichard, @lilianammmatos, @IvanGoncharov, et al., for all their work on defer, stream, and graphql generally. I am trying to upgrade batch execution within

I do not believe this is desirable behavior, although I am not 100% sure. See the test below, which passes, which was unexpected, at least to me.

```ts
describe('batch execution', () => {
  it('should work natively', async () => {
    const schema = makeExecutableSchema({
      typeDefs: `
        type Query {
          field1: String
          field2: String
        }
      `,
      resolvers: {
        Query: {
          field1: () => 'test1',
          field2: () => 'test2',
        },
      },
    });
    const result = await graphql(schema, `{
      ... on Query @defer {
        field1
      }
      ... on Query @defer {
        field2
      }
    }`, undefined, {});
    const initialResult = await (result as AsyncIterableIterator<AsyncExecutionResult>).next();
    expect(initialResult).toEqual({
      done: false,
      value: {
        data: {},
        hasNext: true,
      },
    });
    let results = await Promise.all([
      (result as AsyncIterableIterator<AsyncExecutionResult>).next(),
      (result as AsyncIterableIterator<AsyncExecutionResult>).next(),
    ]);
    expect(results[0]).toEqual({
      done: false,
      value: {
        data: {
          field1: 'test1',
        },
        hasNext: false,
        path: [],
      },
    });
    expect(results[1]).toEqual({
      done: false,
      value: {
        data: {
          field1: 'test1',
        },
        hasNext: false,
        path: [],
      },
    });
    results = await Promise.all([
      (result as AsyncIterableIterator<AsyncExecutionResult>).next(),
      (result as AsyncIterableIterator<AsyncExecutionResult>).next(),
    ]);
    expect(results[0]).toEqual({
      done: true,
    });
    expect(results[1]).toEqual({
      done: true,
    });
  });
});
```
The results I expected were the following:

```ts
describe('batch execution', () => {
  it('should work natively', async () => {
    const schema = makeExecutableSchema({
      typeDefs: `
        type Query {
          field1: String
          field2: String
        }
      `,
      resolvers: {
        Query: {
          field1: () => 'test1',
          field2: () => 'test2',
        },
      },
    });
    const result = await graphql(schema, `{
      ... on Query @defer {
        field1
      }
      ... on Query @defer {
        field2
      }
    }`, undefined, {});
    const initialResult = await (result as AsyncIterableIterator<AsyncExecutionResult>).next();
    expect(initialResult).toEqual({
      done: false,
      value: {
        data: {},
        hasNext: true,
      },
    });
    let results = await Promise.all([
      (result as AsyncIterableIterator<AsyncExecutionResult>).next(),
      (result as AsyncIterableIterator<AsyncExecutionResult>).next(),
    ]);
    expect(results[0]).toEqual({
      done: false,
      value: {
        data: {
          field1: 'test1',
        },
        hasNext: true,
        path: [],
      },
    });
    expect(results[1]).toEqual({
      done: false,
      value: {
        data: {
          field2: 'test2',
        },
        hasNext: false,
        path: [],
      },
    });
    results = await Promise.all([
      (result as AsyncIterableIterator<AsyncExecutionResult>).next(),
      (result as AsyncIterableIterator<AsyncExecutionResult>).next(),
    ]);
    expect(results[0]).toEqual({
      done: true,
    });
    expect(results[1]).toEqual(undefined);
  });
});
```
Prior art for converting a set of promises to an AsyncIterableIterator in promise settle order: https://github.com/repeaterjs/repeater/blob/219a0c8faf2c2768d234ecfe8dd21d455a4a98fe/packages/repeater/src/repeater.ts#L687 @brainkim
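For illustration, here is a standalone sketch of yielding promises in settlement order using a race-and-remove loop. This is not the Repeater implementation linked above, just a minimal version of the same idea; error handling is omitted.

```typescript
// Yield the results of a fixed set of promises in the order they settle,
// not the order they were supplied.
async function* inSettleOrder<T>(
  promises: ReadonlyArray<Promise<T>>,
): AsyncGenerator<T> {
  // Tag each promise with its index so the race winner can be removed.
  const pending = new Map<number, Promise<{ i: number; value: T }>>();
  promises.forEach((p, i) => {
    pending.set(i, p.then((value) => ({ i, value })));
  });
  while (pending.size > 0) {
    const { i, value } = await Promise.race(pending.values());
    pending.delete(i);
    yield value;
  }
}
```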
@yaacovCR
Updated pings! Although I guess unnecessary at this point, as you found this anyway. The fix in #2975 does not use repeaters, but as I comment over there, I feel like repeaters might possibly be safer than what graphql-js is doing. I defer to the experts, though; I am very new to all this async stuff. In general, I am a big Repeater fan and wish it were part of the language. Awaiting version 4, and congrats on the new position!
Hey folks! @robrichard @IvanGoncharov any chance these PRs can be merged? They are a blocker for stitching support of defer/stream.

We could use a new experimental release after the merge as well :)
Codegen-related question/proposal about giving clients some information about what fragments are included or not in the initial response. The RFC makes

If I understand correctly, given the following query:

```graphql
{
  hero {
    id
    ...humanDetails @defer
  }
}

fragment humanDetails on Human {
  name
}
```

The server can either respond with:

```json
{
  "hero": {
    "__typename": "Human",
    "id": "1001"
  }
}
```

or:

```json
{
  "hero": {
    "__typename": "Human",
    "id": "1001",
    "name": "Luke"
  }
}
```

Both are valid responses. In that case, it's not possible to know in advance the shape of the models to use on the client side. The parser will have to try to map the response to either

This example is relatively simple, but it can get more complicated if multiple type conditions and/or object types come into play. The parser may have to parse each fragment and, for each fragment, potentially recurse into any nested objects. If that fails, rewind and try again for the next fragment. This makes generating such code more difficult and also certainly impacts runtime parsing performance. Would it be possible to inform the client about what to expect in the initial response? That could be done with an extra

And if the fragment is already included:

If a server doesn't support
Sorry if this question is terrible. I am confused about the benefit / necessity of the hasNext field. Why must the result stream embed, within each result, knowledge about the stream? That seems to unnecessarily intertwine the result delivery protocol with the result. Can't each language simply define what it means for a stream to end?

For instance, each IteratorResult payload in TypeScript/JavaScript where done is false should carry a value with a hasNext field of true. What additional value is hasNext giving us that done does not provide?
@yaacovCR it gives clients a standardized & transport independent way to understand when the stream ends. It's possible defer & stream may be used over a transport where there may not be connection close events or
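A sketch of the transport-agnostic consumption this enables: the client stops on `hasNext: false` without any knowledge of connection close events. The type and function names here are illustrative, not graphql-js APIs:

```typescript
// Simplified stand-in for an incremental delivery payload.
type PatchResult = {
  data?: unknown;
  path?: Array<string | number>;
  hasNext: boolean;
};

// Collect payloads until the stream signals completion in-band via hasNext,
// regardless of how (or whether) the underlying transport signals close.
async function collectUntilDone(
  payloads: AsyncIterable<PatchResult>,
): Promise<PatchResult[]> {
  const received: PatchResult[] = [];
  for await (const payload of payloads) {
    received.push(payload);
    if (!payload.hasNext) {
      break; // stream is complete
    }
  }
  return received;
}
```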
In terms of the latter, you can use the path for that instead of hasNext, as the initial result will not have path set and there is a requirement for the initial result to be sent first. In terms of the former, I think a termination/separator payload would make more sense instead of the constant hasNexting. I also have difficulty understanding why the transport should not bear the burden of figuring that out. We are preserving flexibility, I guess, but is it really realistic that the transport could not come up with its own separator?
Also, hasNext is not reliable. From the spec:

> The GraphQL server may determine there are no more values in the event stream after a previous value with hasNext equal to true has been emitted. In this case the last value in the event stream should be a map without data, label, and path entries, and a hasNext entry with a value of false.
I know this just came up in the WG, but I am afraid I still don't get it, so sad. Why is hasNext: true necessary for incremental delivery of queries, but not for subscriptions, which each have multiple payloads being delivered over HTTP? @robrichard @mjmahone @michaelstaib, can't we define this at the transport level? Wouldn't that be a better equivalence between defer with queries and subscriptions?
To store some gained knowledge: when embedding incremental delivery of results within a subscription stream, if clients had to rely on the resetting of the path to separate results, they would not be informed that a result had ended until the next result begins. So, at least in terms of subscriptions, you need a separator or marker. This, in theory, is a subscriptions-only issue, but there are existing clients that have transports possibly without separators (looking at you, Relay, which does not rely on a specified transport), and so to maintain a kind of backwards compatibility, even though we are in early days, it makes sense to have a separator or marker. I think instead of the constant hasNexting, we should switch to an

My takeaway is that in a future world where all GraphQL servers are incremental-delivery capable, we should switch back to

Many thanks to @robrichard @mjmahone @michaelstaib @brainkim, especially @robrichard for all the cognitive help on this.
@brainkim @robrichard @michaelstaib I believe this is relevant to the discussion at the last working group; see #3165, which demonstrates that stream items may be sent "out of order", at least in terms of list index, and a naive way of fixing this. I took another look at the spec edits and, working through it, it seems that although for

Although we could change the implementation to hold onto later (in terms of index) stream records until the earlier ones have been sent, as the above linked PR does: (a) the naive implementation from the above PR holds off on starting completion of these later stream records until the earlier ones finish completion, which may not be what we want, and (b) we would have to, at least I think, update the algorithm in the spec edits, or at least make the behavior explicit. Another option would be to do the reverse: explicitly modify the spec to drop the expectation that stream records will be sent in order, and leave it to the client to decide what to do. I don't think that makes sense.
The final option (of course?) is to work on a less naive implementation that completes stream records as soon as possible, but holds off on sending them until the earlier ones (by index) have been completed. This would probably be best, in my opinion. I am thinking about a separate map of
Actually, I changed my mind. I think the existing, sometimes unintuitive, behavior of letting the client decide what to do with "out of order" stream items is the best option. If the client receives indexed item n, it can rely on receiving well-defined execution results for items 1..n-1. They may contain errors only, but they should arrive, as the underlying iterator has returned successfully and completion has begun. Thoughts?
Any thoughts on this today from anyone else? I hope we can discuss this asynchronously rather than waiting until the next working group.
Closing this issue now, using github discussions on this repo going forward https://github.com/robrichard/defer-stream-wg/discussions |
For each release of GraphQL.js, we are now publishing an accompanying release containing an experimental implementation of the @defer and @stream directives. We are hoping to get community feedback on these releases before the proposal is accepted into the GraphQL specification.

You can use this experimental release of GraphQL.js by adding the following to your project's package.json file.

A similar release of express-graphql will be coming soon.

I am creating this issue as a placeholder for anyone who has tried out these experimental packages to leave feedback. Please let us know if everything is working as expected or if you run into any issues. Any suggestions for improvements are also welcome.
References:

Feedback received so far:

- Call `return` on iterable when connection closes (graphql-js, express-graphql, [email protected])
- `content-length` in payload should not be required; parsers should use boundary instead (graphql-over-http spec, fetch-multipart-graphql, express-graphql)