Support missing for contextual deserializer/serializer #357
I've added a PR to the core that (if accepted) will fix "1" above; to fix "2" requires taking the experimental API.
Is there overhead in using the emulation?
No, basically; it shapes the API a little differently, but the emulation layer hides that. The writer API hasn't really been touched yet; the reader API only has two methods:
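For reference, those two reader methods on DeserializationContext look roughly like this (simplified from Grpc.Core.Api; the real type also exposes the payload length):

```csharp
using System;
using System.Buffers;

namespace Grpc.Core
{
    // Simplified excerpt, shown for reference only.
    public abstract class DeserializationContext
    {
        // Copies the incoming payload into a fresh byte[] (one allocation per message).
        public virtual byte[] PayloadAsNewBuffer() => throw new NotImplementedException();

        // Exposes the payload as a ReadOnlySequence<byte> over the transport's own
        // buffers, allowing deserialization without an intermediate copy.
        public virtual ReadOnlySequence<byte> PayloadAsReadOnlySequence() => throw new NotImplementedException();
    }
}
```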
In either case, the emulated layer just stashes the payload. For context: the Google implementation almost always uses emulation in the other direction (i.e. protobuf subscribes to the legacy API, and the implementation consumes the emulated contextual API). A side effect of this is that, if the PR above gets accepted into the core, then leaving your code alone means regular protobuf will use the direct non-emulated route, and only my custom bindings would be using the emulated API.
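A minimal sketch of what that emulation layer amounts to, assuming the Grpc.Core.Api types; the helper names here are hypothetical (the real shims live inside Marshaller&lt;T&gt;):

```csharp
using System;
using Grpc.Core;

static class EmulationSketch
{
    // Writer side: run the legacy Func<T, byte[]>, then hand the finished
    // buffer to the SerializationContext.
    public static Action<T, SerializationContext> WrapSerializer<T>(Func<T, byte[]> legacy)
        => (value, context) => context.Complete(legacy(value));

    // Reader side: "stash the payload" as a byte[] via PayloadAsNewBuffer()
    // and feed it to the legacy Func<byte[], T>.
    public static Func<DeserializationContext, T> WrapDeserializer<T>(Func<byte[], T> legacy)
        => context => legacy(context.PayloadAsNewBuffer());
}
```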
Based on feedback on the PR, it seems unlikely to be merged.
grpc-dotnet should switch to using the contextual API.
Ok! I think caching will probably happen at a higher level (the entire call context), but that will be in the future. An extra allocation per message isn't the end of the world.
You and I have very different views on allocations :) Right now I'm on a treasure-hunt for avoidable allocations in the Grpc.Core/Grpc.Core.Api path (the latter of which you should benefit from indirectly). I would do the same for grpc-dotnet, but it is kinda awkward to work with right now unless I keep separate machines/VMs for preview6 and preview7; even then, it isn't the easiest build, since the dependency tree is fast-moving. But in all seriousness: if you can avoid an allocation per call without bending the code too far out of shape, IMO you should avoid it; they add up surprisingly quickly, and can quickly swamp the necessary allocations (i.e. the actual data payloads). On a high-throughput server, allocations and GC can quickly dominate, or at least make a server more expensive than it needs to be (whether in terms of overhead on a box, or actual hosting costs for cloud/hosted services), even if performance is still acceptable.
I have plans to remove almost all non-message allocations on the server. I don't want to micro-optimize the serialization context if the code will be thrown away post-3.0.
I created an issue for reusing the ServerCallContext. If you're interested in improving performance then you're welcome to take a look 😋 |
Right now, the code directly hooks into the legacy `.Serializer`/`.Deserializer`, which provides access to the `Func<T, byte[]>` etc.; however, IMO this violates the intent of the API. The primary API is now the contextual serializer, by which I mean:
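For reference, the contextual members on Grpc.Core.Api's `Marshaller<T>` are:

```csharp
// Excerpt of Marshaller<T> (Grpc.Core.Api), shown for reference:
public class Marshaller<T>
{
    public Action<T, SerializationContext> ContextualSerializer { get; }
    public Func<DeserializationContext, T> ContextualDeserializer { get; }
    // (the legacy Serializer/Deserializer delegates also still live here)
}
```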
This has two significant problems:

1. a custom `Marshaller` that only supplies the contextual serializer/deserializer will not work
2. improvements made to the inbuilt `Marshaller` implementations via the contextual API will not be automatically exposed

For context on 2, see this PR, which does an in-place upgrade on how regular protoc-style protobuf deserializes, using the contextual API to significantly reduce buffer allocations. If grpc-dotnet does not use the contextual API, it will not benefit if/when those changes get merged.
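As a sketch of what consuming the contextual API can look like for a custom marshaller, assuming the contextual overload of `Marshallers.Create` from Grpc.Core.Api (`MyMessage`, its `ToByteArray()`, and a `ParseFrom(ReadOnlySequence<byte>)` overload are hypothetical stand-ins):

```csharp
using Grpc.Core;

// Hypothetical: a marshaller built directly on the contextual API.
var marshaller = Marshallers.Create<MyMessage>(
    // writer: still materializes a byte[] today, then completes the context
    (message, context) => context.Complete(message.ToByteArray()),
    // reader: parse straight from the transport's buffers; no byte[] copy
    context => MyMessage.ParseFrom(context.PayloadAsReadOnlySequence()));
```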
I am aware of #30, but IMO there are two different things here.
You can also probably infer from this that I'm keen on using the contextual API, and generally improving this space - presumably looking into a new writer API too.
So, I guess my main point here is: grpc-dotnet should not use the legacy API - it should switch to the contextual API. I can offer some help with that if it would be welcome.
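Concretely, that switch means the framework stops pulling the byte[]-based delegates and calls the contextual ones instead; a rough before/after, written as generic helpers over Grpc.Core.Api's `Marshaller<T>`:

```csharp
using System;
using Grpc.Core;

static class ContextualUsageSketch
{
    // Before: the legacy path forces a byte[] per message in each direction.
    public static byte[] SerializeLegacy<T>(Marshaller<T> m, T value)
        => m.Serializer(value);

    public static T DeserializeLegacy<T>(Marshaller<T> m, byte[] payload)
        => m.Deserializer(payload);

    // After: the contextual path hands buffering decisions to the transport,
    // which is what unlocks pooled/sliced buffers and fewer allocations.
    public static void SerializeContextual<T>(Marshaller<T> m, T value, SerializationContext context)
        => m.ContextualSerializer(value, context);

    public static T DeserializeContextual<T>(Marshaller<T> m, DeserializationContext context)
        => m.ContextualDeserializer(context);
}
```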
As a caveat / disclosure thing: I will acknowledge the remark:
/// Note: experimental API that can change or be removed without any prior notice.
but... if the API surface changes, consumers need to update anyway. I don't personally see this as a reason not to consume this API.