OK, that's a lot of questions, so for now I'll just try to address them one by one; since you don't quite say what you're looking to build, I can't jump straight into a code sample 😅
No, each exchange receives a continuous stream of operations and returns a continuous stream of results. If you ended this stream you'd end all results (which you can't do anyway, because you're typically not the one subscribing, only transforming / mapping the streams).
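For illustration, a minimal sketch of that shape, assuming `@urql/core` and wonka; the logging is just there to show where the operation and result streams pass through:

```ts
import { pipe, onPush } from 'wonka';
import type { Exchange } from '@urql/core';

// Every exchange has this shape: it takes the client's single, long-lived
// stream of operations and must return a single, long-lived stream of results.
const loggingExchange: Exchange = ({ forward }) => ops$ =>
  pipe(
    ops$,                                            // all operations, for the client's whole lifetime
    onPush(op => console.log('operation', op.key)),  // observe without consuming the stream
    forward,                                         // hand the stream on to the next exchange
    onPush(result => console.log('result for', result.operation.key))
  );
```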
I'm not quite sure what this specific question is asking, because it depends on what you mean by "only for that one operation" 😅 This principle is also shown here: https://formidable.com/open-source/urql/docs/concepts/exchanges/#how-to-avoid-accidentally-dropping-operations
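A rough sketch of the pattern that docs section describes: split the stream, handle the operations you care about, forward everything else untouched, and merge the result streams back together. Here `isLiveOperation` and the placeholder `map` are stand-ins for whatever your exchange actually does (and older urql versions use `operationName` rather than `kind`):

```ts
import { pipe, filter, merge, map } from 'wonka';
import type { Exchange, Operation, OperationResult } from '@urql/core';

// The stream must never be "narrowed": operations you don't handle have to be
// forwarded, and their results merged back into the stream you return.
const customExchange: Exchange = ({ forward }) => ops$ => {
  // Hypothetical predicate: whatever marks an operation as "handled here".
  const isLiveOperation = (op: Operation) => op.kind === 'query';

  const handledResults$ = pipe(
    ops$,
    filter(isLiveOperation),
    // Placeholder: replace with however this exchange actually produces results.
    map(op => ({ operation: op, data: undefined, error: undefined } as OperationResult))
  );

  const forwardedResults$ = pipe(
    ops$,
    filter(op => !isLiveOperation(op)),
    forward
  );

  return merge([handledResults$, forwardedResults$]);
};
```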
No, as you describe yourself, we use Observable-like streams. In fact, the subscription exchange takes care of a lot of this abstraction. It exposes an API you can pass a function that receives the operation input and is supposed to return an Observable-like object.
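A sketch of one possible wiring, assuming every operation should travel over a hypothetical `mySocket` transport (which is why there's no `fetchExchange` here); the only contract `forwardSubscription` has to satisfy is returning an object whose `subscribe(observer)` returns `{ unsubscribe }`:

```ts
import { createClient, dedupExchange, cacheExchange, subscriptionExchange } from 'urql';

// Hypothetical websocket transport, for illustration only: request() starts a live
// operation, pushes every result into the sink, and returns a stop function.
declare const mySocket: {
  request(
    body: { query?: string; variables?: object },
    sink: { next(result: { data?: any; errors?: any[] }): void; error(err: any): void; complete(): void }
  ): () => void;
};

const client = createClient({
  url: '/graphql',
  exchanges: [
    dedupExchange,
    cacheExchange,
    subscriptionExchange({
      enableAllOperations: true, // let queries flow into this exchange too
      forwardSubscription: operation => ({
        // The return value only has to be Observable-like.
        subscribe: observer => {
          const stop = mySocket.request(
            { query: operation.query, variables: operation.variables },
            {
              next: result => observer.next(result),
              error: err => observer.error(err),
              complete: () => observer.complete(),
            }
          );
          return { unsubscribe: () => stop() };
        },
      }),
    }),
  ],
});
```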
Hello :)
I'm trying to write a GraphQL websocket API with live query capabilities, and then write an Urql exchange to hook into that API. On the client side, my problems are twofold:
Heavily inspired by this package, the general idea for the client part would be to receive operations from some GraphQL client (Apollo or Urql) and give back a stream of results: one result stream per operation, one result per "live update" on that operation. The server would be stateful and would know exactly which client is subscribed to what. One would then have an `invalidate` function (with roughly the same semantics as what you have on the graphcache), and that function would typically be called from mutations or from database event listeners. I want to do it both as an exercise and because I might need it in the future.
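Concretely, the shape I have in mind for the client part is roughly this (the `liveSocket` API is hypothetical; it's only meant to show one result stream per operation):

```ts
import { make } from 'wonka';
import type { Source } from 'wonka';

// Assumed transport (hypothetical, not part of urql or wonka): execute() starts a
// live operation on the server, calls onResult again on every invalidation, and
// returns a dispose function that tells the server to drop the subscription.
type LiveResult = { data?: unknown; errors?: unknown[] };
declare const liveSocket: {
  execute(
    body: { query: string; variables?: object },
    onResult: (result: LiveResult) => void
  ): () => void;
};

// One operation in, one stream of results out; a new result is pushed for every
// "live update" until the consumer unsubscribes.
const executeLive = (query: string, variables?: object): Source<LiveResult> =>
  make(observer => {
    const dispose = liveSocket.execute({ query, variables }, result =>
      observer.next(result)
    );
    return () => dispose(); // runs when the consumer unsubscribes
  });
```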
So, I spent the whole day looking at how your exchanges are written, reading the wonka docs and some parts of the Urql docs. I've read the part on exchanges multiple times over the last few days, and I have a hard time really getting to the bottom of it. It all seems logical and simple, but something just doesn't want to click in my head when it comes to streams, observables, observers, forward, etc. It's as if there were something very simple to be understood right there in front of me, and I just can't reach it.
More specifically:

- How does the client know which result in the result stream corresponds to which operation sent down the operation stream?
- I understand that `teardown` ops are designed to signal to the exchange chain that there's no more interest in the results of an operation, but I don't understand why they're necessary. Wouldn't unsubscribing from the result stream and not sending any more operations (of the kind you're no longer interested in) down the pipe be enough? On which exchange does `teardown` act, and how?
- Back to my project: what would be the best way to take one operation sent down the exchange pipe and return a stream of results only for that one operation? (The sketch at the end of this post shows roughly what I have in mind.)
- The package I mentioned earlier contains an Urql demo, and the author uses the `subscriptionExchange` with `enableAllOperations` set to `true` (here is the core implementation of the client part; it uses async iterators and then some hack to turn them into push streams). Is this the only way to have the Urql client subscribe to multiple results for one operation? Whether the operation comes from `useQuery` or `useSubscription`, is there no other choice than to create a whole new Observable in `forwardSubscription`? Does `teardown` have any role to play here?

Any other advice or insight from you would be very welcome, as I'm very much a beginner when it comes to async streams.
I'm sorry if all this seems very obvious, but I fail to make it all click. So if you have some time, I would be pleased to hear from you. Thank you in advance.
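To make the per-operation question above concrete, here is roughly the pattern I imagine, with a hypothetical `resultsFor` helper standing in for my live-query transport. I'm not sure checking `op.kind === 'teardown'` is the right way to react to teardowns, and I assume a real exchange would also forward whatever it doesn't handle:

```ts
import { pipe, filter, mergeMap, takeUntil } from 'wonka';
import type { Source } from 'wonka';
import type { Exchange, Operation, OperationResult } from '@urql/core';

// Hypothetical helper: turns one operation into a stream of live results.
declare const resultsFor: (operation: Operation) => Source<OperationResult>;

const liveExchange: Exchange = () => ops$ =>
  pipe(
    ops$,
    filter(op => op.kind !== 'teardown'), // teardowns aren't executed themselves
    mergeMap(operation =>
      pipe(
        // I assume results carry their operation, so the client can match them
        // back to operations by comparing result.operation.key.
        resultsFor(operation),
        // Stop producing results for this operation as soon as a teardown with
        // the same key comes down the operation stream.
        takeUntil(
          pipe(
            ops$,
            filter(op => op.kind === 'teardown' && op.key === operation.key)
          )
        )
      )
    )
  );
```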