The way go-jsonrpc works is that if a method has two return values, result & error, it’ll deliver that over HTTP or WebSockets. If a method returns a channel and an error, it’ll only deliver that over WebSockets, but you get to stream data to the user. We use channels for "subscribe" and "notify" APIs rather than for their streaming nature.
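For illustration, a minimal sketch of the two method shapes as described above; `Filter` and `Thing` are placeholder types, not real Lotus or go-jsonrpc names:

```go
package api

import "context"

// Filter and Thing are illustrative placeholders, not real Lotus types.
type Filter struct{}
type Thing struct{}

type Example interface {
	// Result + error: go-jsonrpc marshals the whole result and delivers it in
	// one response, over either HTTP or WebSockets.
	GetThings(ctx context.Context, f Filter) ([]Thing, error)

	// Channel + error: WebSockets only, but each value sent on the channel is
	// pushed to the client as it's produced.
	SubscribeThings(ctx context.Context, f Filter) (<-chan Thing, error)
}
```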
What I want is the ability to stream data to the user over both WebSockets and HTTP. I’d like either a writer or a channel that I can spew into rather than accumulating everything and then dumping it all in one go.
The actor events / eth logs API is a good example of the problem here. We have a default max of 10,000 events that we’ll deliver to you, as a `[]*types.ActorEvent` for the raw actor events or a `type EthFilterResult struct { Results []interface{} }` for the Eth APIs. We have to go query the database, transform and accumulate those, and then eventually return them. It’s a terrible waste of resources, but Go makes this so easy to do that you forget how bad it is, for both TTFB and memory consumption, and obviously DoS potential. So right now we have something like a "batch ETL" pattern for data-heavy APIs, but what I'd really like is a "streaming ETL" pattern. A `Writer` would work, but then you have to deal with serialisation yourself, which is what an RPC library should help you with. So a channel would be better.
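To make the contrast concrete, here's a rough sketch of the two patterns; the function names and the stand-in event types are hypothetical, not the actual Lotus API:

```go
package api

import "context"

// ActorEvent and ActorEventFilter stand in for the real types in
// github.com/filecoin-project/lotus/chain/types.
type ActorEvent struct{}
type ActorEventFilter struct{}

// "Batch ETL": query, transform and accumulate up to the 10,000-event cap in
// memory, then marshal and return the whole slice at once.
func GetActorEventsBatch(ctx context.Context, f *ActorEventFilter) ([]*ActorEvent, error) {
	var out []*ActorEvent
	// ... query the events index, decode each row, append to out ...
	return out, nil
}

// "Streaming ETL" (hypothetical): each event is sent as soon as it's decoded,
// so nothing accumulates server-side; the RPC layer would need to be able to
// deliver this over HTTP as well as WebSockets.
func GetActorEventsStream(ctx context.Context, f *ActorEventFilter) (<-chan *ActorEvent, error) {
	out := make(chan *ActorEvent)
	go func() {
		defer close(out)
		// ... query the events index and, for each decoded event ev:
		// select {
		// case out <- ev:
		// case <-ctx.Done():
		// 	return
		// }
	}()
	return out, nil
}
```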
I think we should either evolve go-jsonrpc to allow us to be much more efficient with delivering data, or switch it out for another library that already gives us these tools.