Revisit azure queue serialization. #2456
Comments
Of course I support NOT having 2 different types that do the same thing. I support consolidation. Not sure how I missed the original PR that introduced this duplication. If you only need to support binary and JSON, then just add a config option to the stream provider. If more formats are needed, provide a way to override.
The initial commit was made to address a bug in the Azure queue stream provider: the provider was serializing the event token and batch container using the fallback serializer. This caused any types that implement the Orleans serialization methods and/or are serialized using external serializers to throw exceptions if those types were not also decorated with the SerializableAttribute. If they were decorated with the SerializableAttribute, they too ended up being serialized by the fallback serializer. So, even though the provider was using the serialization manager for serialization, the result was that either the entire payload went through the fallback serializer or serialization exceptions were thrown for every message. The V2 types were not in the initial commit; they were added by request in order to maintain backward compatibility. If all of the tests use the V2 classes, they will function normally. There are a lot of tests around the V2 classes deserializing the original ones. The original tokens shouldn't be used in the V2 class because it would then use the fallback serializer.
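The failure mode described here can be illustrated with a minimal .NET sketch. This assumes the fallback serializer behaves like `BinaryFormatter`, which rejects any type lacking `[Serializable]`; `EventPayload` is a hypothetical type standing in for a stream event:

```csharp
using System;
using System.IO;
using System.Runtime.Serialization;
using System.Runtime.Serialization.Formatters.Binary;

// Hypothetical stream event type: it might have custom (e.g. Orleans-generated)
// serialization support, but it is NOT marked [Serializable].
class EventPayload
{
    public string Data = "hello";
}

class Demo
{
    static void Main()
    {
        var formatter = new BinaryFormatter();
        try
        {
            // A fallback binary serializer cannot handle a type that
            // lacks [Serializable] and throws instead.
            formatter.Serialize(new MemoryStream(), new EventPayload());
        }
        catch (SerializationException e)
        {
            Console.WriteLine("SerializationException: " + e.Message);
        }
    }
}
```

Adding `[Serializable]` silences the exception, but then the type is handled entirely by the fallback serializer, bypassing any custom or external serializer, which is the other half of the bug described above.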
@Carlm-MS, the V2 classes have not yet been included with an Orleans drop, meaning it should be safe for me to refactor this in a breaking way, assuming that the original versions still work. However, I wanted to check with you to see if you were using this yet, as I don't want to cause your services further churn without discussion.
Apologies if this isn't entirely on-topic, but it would be handy for me if I could avoid the need to add the hook.
@ReubenBond |
@jason-bragg We aren't using the V2 classes yet so refactoring will not be an issue.
@Carlm-MS, thanks
@ReubenBond, is the non-static serialization manager slated for 1.4? If it is, I'll wait for that to go in before addressing this; if not, I'll follow a similar pattern as we have for DefaultMemoryMessageBodySerializer, so we can at least apply the same solution to both cases.
@jason-bragg I prefer we wait until after 1.4.0 to merge it, just because we plan to release 1.4.0 quite soon and I want to let it sit a while longer. (The other serialization change (#2611), which obsoletes
Agreed. Especially given the release cycle of dependent services, we should not be introducing risky changes in 1.4.
@jason-bragg no worries whatsoever :)
Resolved via #2658.
To support external serializers (specifically JSON), we merged PRs “Modified stream types to not use fallback serializer and allow external #2330” and “Azure queue stream provider silo to silo serialization bug fix. #2447”.
While these changes are diligent about verifying that default serialization continues to work to ensure backwards compatibility, the complexity introduced by having two versions of the types that can intermix is significant. These changes may be perfectly safe, but reasoning about them for confidence in their current state and maintaining them in the future is non-trivial.
There is also a question as to whether the backwards-compatibility pattern requires services reading from the queue to be upgraded before services writing to the queue.
I am sufficiently concerned about the above to request a reconsideration of this pattern.
In some other stream providers (MemoryStreamProvider, EventHubStreamProvider) we support application-layer overrides of the serialization, allowing developers to adapt the stream provider to their own data formats. In my opinion it is safer to introduce a similar solution to the Azure queue stream provider: keep the original data formatting as the default, and then introduce an alternative serializer that supports the custom serialization needs intended by the pull requests mentioned above. This would strongly protect existing consumers of the Azure queue stream provider, allow new consumers to opt into using the external serializer, and enable application developers to introduce their own application-specific solutions as needed.
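The override pattern being proposed can be sketched roughly as follows. The interface and class names here are hypothetical illustrations, not the actual Orleans provider API, and JSON support via Newtonsoft.Json is assumed:

```csharp
using System;
using System.Text;
using Newtonsoft.Json;

// Hypothetical application-layer serialization hook for the Azure queue
// stream provider, mirroring the override pattern used by the memory and
// EventHub providers. Names are illustrative only.
public interface IQueueMessageBodySerializer
{
    byte[] Serialize(object body);
    object Deserialize(byte[] payload, Type bodyType);
}

// Opt-in JSON implementation. A default implementation would instead keep
// the original binary wire format, so existing consumers are unaffected
// unless they explicitly switch serializers.
public sealed class JsonQueueMessageBodySerializer : IQueueMessageBodySerializer
{
    public byte[] Serialize(object body) =>
        Encoding.UTF8.GetBytes(JsonConvert.SerializeObject(body));

    public object Deserialize(byte[] payload, Type bodyType) =>
        JsonConvert.DeserializeObject(Encoding.UTF8.GetString(payload), bodyType);
}
```

With a hook like this, the provider's queue adapter would delegate payload encoding to whichever implementation is configured, so the choice of format becomes an explicit, per-deployment decision rather than a mixed V1/V2 type hierarchy.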
Tagging: @Carlm-MS, @gabikliot