
[RFC] Protobuf in OpenSearch #6844

Open
saratvemulapalli opened this issue Mar 27, 2023 · 32 comments
Labels
enhancement Enhancement or improvement to existing feature or request RFC Issues requesting major changes Roadmap:Cost/Performance/Scale Project-wide roadmap label

Comments

@saratvemulapalli
Member

saratvemulapalli commented Mar 27, 2023

Inspiration

Plugins are very tightly version-coupled with OpenSearch (#1707), and relaxing that coupling to work across patch versions is still in the works.

While working on extensions (#2447), we really wanted to support multiple versions of OpenSearch (major/minor/patch) with a single OpenSearch SDK [1].

Proposal

While exploring open-source solutions, we looked at Protobuf [2], which was built by Google, is widely adopted for serialization/de-serialization, and is used as the basis for RPC frameworks. It supports forward and backward compatibility out of the box.

From an initial experiment integrating protobuf into OpenSearch/Extensions (opensearch-project/opensearch-sdk-java#414 (comment)), we see:

  • An extension working with multiple versions of OpenSearch (backward and forward compatibility).
  • Simple, human-readable message contracts from .proto definitions.
  • Generated classes for readers and writers in any language of choice, an important factor for offering the OpenSearch SDK in different languages (see the sketch after this list).
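
To make the last two bullets concrete, here is a minimal sketch of a contract and its Java round trip. The `ExtensionIdentity` message and its fields are hypothetical (not part of any existing OpenSearch .proto); only the standard protobuf-java builder/serialization API is assumed.

```java
// Hypothetical contract, e.g. extension_identity.proto:
//   message ExtensionIdentity {
//     string name = 1;
//     string opensearch_version = 2;  // new fields can be added later without breaking old readers
//   }
// protoc generates ExtensionIdentity with a builder, toByteArray(), and parseFrom().

import com.google.protobuf.InvalidProtocolBufferException;

public final class IdentityRoundTrip {
    public static void main(String[] args) throws InvalidProtocolBufferException {
        ExtensionIdentity request = ExtensionIdentity.newBuilder()
            .setName("hello-world-extension")
            .setOpensearchVersion("3.0.0")
            .build();

        byte[] wireBytes = request.toByteArray();            // compact binary form sent over the wire

        ExtensionIdentity decoded = ExtensionIdentity.parseFrom(wireBytes);
        // Fields unknown to this reader (written by a newer peer) are preserved instead of failing,
        // which is the source of the forward/backward compatibility mentioned above.
        System.out.println(decoded.getName() + " @ " + decoded.getOpensearchVersion());
    }
}
```

The same .proto file can be fed through protoc for Python, Go, and other languages, which is what makes multi-language SDKs feasible.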

For extensions, protobuf solves a lot of problems, with only a small serialization/de-serialization overhead compared to OpenSearch's existing StreamInput/StreamOutput.

Next Steps

Given the learnings from SDK/Extensions, there is more potential for Protobuf integration in OpenSearch, and we would like to propose offering Protobuf as a new type:

  • Transport Layer: Implement StreamInput, StreamOutput with protobuf serializers/de-serializers. This will offer another type within the transport ecosystem, similar to ByteBufferStreamInput[3] etc.
    This would plug seamlessly into the Writeable[4] interface, which is used across the repo for transporting custom messages (a minimal sketch follows below).

Adding this at the transport layer will give communication between OpenSearch nodes significant benefits in performance and seamless versioning compatibility.
@nknize has already started making changes to enable this by restructuring XContent #6470
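
As a rough illustration of the Writeable plug-in point above, a transport message could carry a protobuf payload inside the existing stream abstractions. This is only a sketch: `NodesStatsProto` is a hypothetical protoc-generated class, and it assumes the existing `StreamOutput#writeByteArray` / `StreamInput#readByteArray` helpers rather than any new protobuf-specific stream types.

```java
import java.io.IOException;

import org.opensearch.common.io.stream.StreamInput;
import org.opensearch.common.io.stream.StreamOutput;
import org.opensearch.common.io.stream.Writeable;

/** Sketch of a transport message whose body is a protobuf payload (NodesStatsProto is hypothetical). */
public class ProtobufNodesStatsRequest implements Writeable {

    private final NodesStatsProto payload;

    public ProtobufNodesStatsRequest(NodesStatsProto payload) {
        this.payload = payload;
    }

    /** Deserialize from the existing transport stream: length-prefixed bytes -> protobuf message. */
    public ProtobufNodesStatsRequest(StreamInput in) throws IOException {
        this.payload = NodesStatsProto.parseFrom(in.readByteArray());
    }

    /** Serialize to the existing transport stream: protobuf message -> length-prefixed bytes. */
    @Override
    public void writeTo(StreamOutput out) throws IOException {
        out.writeByteArray(payload.toByteArray());
    }
}
```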

  • Rest Layer: Implement a new XContent.Type to add protobuf as an option. Historically, converting JSON <-> Protobuf has performance implications, but clients, OpenSearch Dashboards, and ingestion tools talking to the REST layer over a binary format might still benefit. (Yet to experiment; see the sketch below.)

Additionally, having protobuf at the REST layer will unblock OpenSearch to support gRPC (if we choose that path).
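
On the JSON <-> Protobuf conversion concern, the protobuf-java-util `JsonFormat` class is one way to translate at the REST boundary. The sketch below reuses the hypothetical `ExtensionIdentity` message from the earlier example and is not wired into any existing XContent type; it only illustrates where the translation cost would sit.

```java
import com.google.protobuf.InvalidProtocolBufferException;
import com.google.protobuf.util.JsonFormat;

public final class RestBoundaryTranslation {
    public static void main(String[] args) throws InvalidProtocolBufferException {
        // A client sent JSON; translate it into the binary form used internally.
        String json = "{\"name\":\"hello-world-extension\",\"opensearchVersion\":\"3.0.0\"}";
        ExtensionIdentity.Builder builder = ExtensionIdentity.newBuilder();
        JsonFormat.parser().merge(json, builder);
        byte[] binary = builder.build().toByteArray();

        // And back: render a binary protobuf response as JSON for clients that ask for it.
        String rendered = JsonFormat.printer().print(ExtensionIdentity.parseFrom(binary));
        System.out.println(rendered);
    }
}
```

This translation step is exactly the overhead flagged in the bullet above, which is why it still needs to be benchmarked.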

FAQ

Q. Is Protobuf more performant?
A. We moved two APIs: a. Cat Nodes, b. _search. With protobuf, both APIs had at least 20% better performance compared to the native protocol, and we see linear improvements as cluster size increases.

Q. What are the benchmark numbers for search?
A. See the OpenSearch benchmark results for querying with Protobuf: #10684 (comment)

Q. What are the benchmark numbers for Cat Nodes (operational APIs)?
A. See the benchmarking results: #6844 (comment)

Q. Is Protobuf in OpenSearch necessary to support gRPC?
A. Protobuf works at the transport layer, while gRPC is a layer-7 protocol. gRPC internally uses protobuf as its message format, which makes it a dependency. We presume there will be significant performance benefits with gRPC, as data would be transmitted in binary instead of JSON.

cc: @VachaShah @prudhvigodithi

[1] https://github.com/opensearch-project/opensearch-sdk-java
[2] https://protobuf.dev/overview/
[3] https://github.com/opensearch-project/OpenSearch/blob/main/server/src/main/java/org/opensearch/common/io/stream/ByteBufferStreamInput.java
[4] https://github.com/opensearch-project/OpenSearch/blob/main/server/src/main/java/org/opensearch/common/io/stream/Writeable.java

@saratvemulapalli saratvemulapalli added enhancement Enhancement or improvement to existing feature or request untriaged RFC Issues requesting major changes labels Mar 27, 2023
@nknize
Collaborator

nknize commented Mar 27, 2023

Super excited about this!!! It's non-invasive which is fantastic! Enables us to further refactor the transport layer for extensions to support this.

This has my full endorsement! Nice work!

@reta
Collaborator

reta commented Mar 27, 2023

@saratvemulapalli thanks for the RFC, how beneficial is gRPC for extensions actually? I am not against it but I have doubts we have a compelling use case now for it:

  • yes, we could use gRPC as the transport between core and extensions, but it looks to be limited to just a few request/response pairs (at least for now), is it worth it?
  • extensions are pretty much "do whatever you want", but our clients only support HTTP right now [1], we would force people to do gRPC and HTTP (REST actions) at the same time, looks like overhead?

The unknown part for me is the role of the security plugin: does it mean it has to support gRPC as well to perform any meaningful checks in the gRPC world?

[1] opensearch-project/opensearch-clients#55

@dblock
Member

dblock commented Mar 27, 2023

Assuming extensions get very chatty, transport improvements will be increasingly useful, but I am reading this proposal as 1) make transport pluggable today in OpenSearch, and 2) implement gRPC as an option. I think this has the potential to significantly improve node-to-node communication today passing around all kinds of cluster state. Let's see by how much!

@saratvemulapalli
Member Author

thanks @reta for taking a look.

how beneficial is gRPC for extensions actually? I am not against it but I have doubts we have a compelling use case now for it:

I don't know how much gRPC will help extensions (yet; we'll learn more), but for protobuf I've listed the benefits we see so far for extensions. In fact, extensions will be similar to clients for REST APIs, as they use clients internally.

extensions are pretty much "do whatever you want", but our clients only support HTTP right now [1], we would force people to do gRPC and HTTP (REST actions) at the same time, looks like overhead?

We definitely don't want to force the use of gRPC, for example with clients. It should be just another option that we offer in addition to JSON-based REST APIs.
I am hoping that if we can transmit information in binary (protobuf) format, it will help performance. Obviously, we have to write seamless serializers/de-serializers for each API to translate this data into Java objects that OpenSearch APIs can understand.

Mostly, with this RFC I am looking for feedback on whether there is something fundamental I am missing, versus giving it a shot and getting some numbers to decide whether this is worth chasing in the longer term.

@nknize
Collaborator

nknize commented Mar 27, 2023

I am reading this proposal as 1) make transport pluggable today in OpenSearch,

Transport is already pluggable today (e.g., plugins/transport-nio)

...and 2) implement gRPC as an option.

Sort of. We would implement protobuf as an option (just like CBOR, SMILE, YAML, and JSON).

The only difference here is that we would add an additional ProtobufStreamInput extends StreamInput, ProtobufStreamOutput extends StreamOutput concrete implementation (e.g., in o.common.xcontent.protobuf) that overrides the default StreamInput/StreamOutput marshall/unmarshall logic with the Protobuf implementation, and the PROTOBUF transport option would use this instead of the default StreamInput/StreamOutput.

This is why I've been pushing the XContent refactor. The next PR will refactor the StreamInput/StreamOutput classes into a library, and the xcontent/protobuf implementation can provide a concrete implementation that uses its own serialization logic (roughly sketched below). It's an elegant approach that gets us even further toward modularizing massive amounts of code out of :server into separable libraries (thus moving closer toward jigsaw modularization).
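
To make the shape of that concrete implementation a bit more tangible, here is a heavily simplified, standalone sketch (deliberately not wired into the real StreamOutput hierarchy, whose full set of methods is not shown here) of marshalling primitives delegating to protobuf's CodedOutputStream. Class and method names are illustrative only.

```java
import java.io.IOException;
import java.io.OutputStream;

import com.google.protobuf.CodedOutputStream;

/** Standalone sketch: primitive writes delegating to protobuf's wire-format encoder. */
public final class ProtobufStreamOutputSketch implements AutoCloseable {

    private final CodedOutputStream out;

    public ProtobufStreamOutputSketch(OutputStream target) {
        this.out = CodedOutputStream.newInstance(target);
    }

    public void writeByte(byte b) throws IOException {
        out.write(b);                    // raw byte, unchanged
    }

    public void writeBytes(byte[] b, int offset, int length) throws IOException {
        out.write(b, offset, length);    // raw bytes, unchanged
    }

    public void writeVInt(int i) throws IOException {
        out.writeInt32NoTag(i);          // protobuf varint encoding
    }

    public void writeString(String s) throws IOException {
        out.writeStringNoTag(s);         // length-delimited UTF-8, protobuf style
    }

    @Override
    public void close() throws IOException {
        out.flush();
    }
}
```

A matching input-side sketch would wrap CodedInputStream the same way, and the real ProtobufStreamInput/ProtobufStreamOutput would extend the StreamInput/StreamOutput base classes as described above.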

I don't know how much gRPC will help extensions...

Exactly. Let's keep this simple and focus just on protobuf as an optional transport (that, as a bonus, supports versioning!) rather than expanding the aperture to say it's a silver bullet for extensions (progress, not perfection).

In a followup (after vetting and going GA) we can discuss protobuf as the default transport format over JSON. I think it's clearly a direction worth considering.

@reta
Collaborator

reta commented Mar 28, 2023

Thanks a lot @saratvemulapalli. I was a bit confused by the gRPC mentions, but @nknize clarified that we are only talking about the serialization mechanism (protobuf) and not a communication-protocol change (gRPC), at least for now. Thanks.

@saratvemulapalli
Member Author

saratvemulapalli commented Mar 28, 2023

Thanks @reta. I probably threw you off by mentioning gRPC (sorry about that), but I mostly intended this issue for Protobuf and listed the opportunities it will enable, one of which is gRPC (if we choose to make that happen down the line). I've updated the RFC to clarify this.

Tagging @peternied @cwperks @wbeckler @seanneumann who had thoughts/feedback.

@saratvemulapalli saratvemulapalli changed the title [Draft] Protobuf in OpenSearch [RFC] Protobuf in OpenSearch Mar 28, 2023
@dbwiddis
Member

I'm not quite as up-to-speed on gRPC and other options as the more experienced folk up above, but I will say I'm all for protobuf because:

  • it generates code in multiple languages. If we design the SDK using text-based protobuf files, we can have Java code on the OpenSearch side read by <insert your language here> code on the SDK side. Combine this "spec-based" transport call with language clients doing spec-based REST calls, and all we have left is to crack the "XContent" nut, which, if I read this thread correctly, is also a possibility.
  • it is independent enough that third parties can develop serializations outside of our framework and just send bytes around via extensions. Our proxy action framework doesn't care what the bytes are. Right now our implementations use the Writeable serialization, but that's just because it's available. This could be huge for compression and for future support of ingestion and/or analysis plugins that are bandwidth limited.

@peternied
Member

Great proposal, I'm all for starting as an optional transport layer and we can figure out where it best benefits the project.

@owaiskazi19
Member

owaiskazi19 commented Apr 11, 2023

All in for protobuf! Great proposal @saratvemulapalli! In the future, it would be great to see JSON responses with versioning support for extension-point APIs using protobuf (no more XContent!).

@Bukhtawar
Collaborator

Thanks for the proposal.
@nknize @reta @VachaShah
Just catching up: have we evaluated ION as an alternative? Do we think we could run into problems with large data? The one benefit I see with ION is that it doesn't mandate a data schema; it's optional.

@reta
Collaborator

reta commented May 26, 2023

@Bukhtawar I don't recall any evaluation being done.

@Bukhtawar
Collaborator

Another one that I guess should be considered, in favour of low garbage: https://github.com/OpenHFT/Chronicle-Wire

@saratvemulapalli
Member Author

Thanks @Bukhtawar for the feedback.
We haven't really looked at other alternatives, as we mostly started out solving seamless cross-version support first and then saw the opportunity for OpenSearch as a whole. We definitely understand that protobuf is not recommended for large data[1].
@VachaShah is actively working on this and we'll benchmark a couple of alternatives, including ION. If you have other suggestions, let us know.

[1] https://protobuf.dev/programming-guides/techniques/#large-data

@reta
Collaborator

reta commented Jun 5, 2023

The ones I have seen quite often in place of protobuf, if it could help, are:

@VachaShah
Collaborator

I did a POC with protobuf for the _cat/nodes API (see draft PR #9097 for the POC code changes covering API request/response de/serialization as well as node-to-node communication) and measured the time per request for the protobuf variant in comparison to the original _cat/nodes API. The following improvements were noted for various cluster sizes:

| Cluster size | Average improvement |
| --- | --- |
| 2 nodes | 16.14% |
| 5 nodes | 18.11% |
| 10 nodes | 21.35% |
| 15 nodes | 31.39% |

@dblock
Member

dblock commented Oct 2, 2023

This is significant. Can we ship this?

@saratvemulapalli
Member Author

Another great data point from @dbwiddis and @dblock: while they were writing https://github.com/opensearch-project/opensearch-sdk-py, all extension interfaces were auto-generated in Python and could seamlessly work over transport, with OpenSearch de-serializing the data using protobuf-java.

@VachaShah
Collaborator

VachaShah commented Oct 2, 2023

This is significant. Can we ship this?

Thank you @dblock! I am working on getting the changes from the POC in #9097 to merge in the repo. @saratvemulapalli and I are also working on getting the numbers for APIs like search.

@reta
Collaborator

reta commented Oct 3, 2023

Thank you @dblock! I am working on getting the changes from the POC in #9097 to merge in the repo.

@VachaShah the numbers are very convincing. One thing we should keep in mind (which you definitely know about) is how to support migration from 2.x to 3.x, when some nodes will talk the old protocol and new ones will use Protobuf (or, in general, any new protocol). Following on from this, it may not be feasible to migrate all transport actions to Protobuf in one shot, so even in 3.x we would need to maintain this mix of transport protocols.

@dblock
Member

dblock commented Oct 3, 2023

As a strategy, I would 1) support multiple protocols in 2.x in a way that lets us migrate actions one by one, and 2) rip out the existing transport protocol implementation in 3.0 and fully replace it with Protobuf. IMO, we only need an upgrade path in which a 2.x node can do just enough of the old transport protocol to upgrade itself to protobuf and then never look back.

@saratvemulapalli
Member Author

saratvemulapalli commented Oct 3, 2023

@reta we might have a way to get inter-operable protocols, as protobuf can write to and read from byte arrays, but it could be a lot of manual effort to get all actions onto the new protocol :/

@VachaShah
Collaborator

It would be worth seeing which actions can benefit the most from protobuf, and as Sarat mentioned, the two protocols can co-exist with some effort, so we can make the upgrade scenarios work.

@austintlee
Contributor

Is the current effort on this RFC being done on some feature branch that I can follow and take a look at?

@VachaShah
Collaborator

Hi @austintlee, currently there is a draft PR #9097 with the POC, and the changes from the draft PR will be split out into incremental PRs.

@ketanv3
Contributor

ketanv3 commented Oct 10, 2023

Great proposal! Do we know what's the reason behind this improvement? Is it due to reduced CPU time during serialization, or reduced network time due to difference in payload size?

@macohen
Contributor

macohen commented Oct 10, 2023

Is there already a commitment to protobuf without considering the alternatives per @reta?

The ones I have seen quite often in place of protobuf, if it could help, are:

Avro and Thrift have been around for quite some time. I'd like to see a comparison of some of these.

@dblock
Member

dblock commented Oct 10, 2023

@macohen I think the idea is that we 1) make the transport protocol pluggable, 2) ship protobuf support as an experimental feature with an upgrade path, 3) offer other protocols, 4) make the best one the default.

@saratvemulapalli
Member Author

Great proposal! Do we know what's the reason behind this improvement? Is it due to reduced CPU time during serialization, or reduced network time due to difference in payload size?

We believe most of the benefit comes from the more compact encoding; we couldn't really analyze flame graphs because of the multiple threads involved.
Protobuf ended up streaming fewer bytes for the same payload and was more efficient at serializing/de-serializing data.
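
A quick way to see the "fewer bytes for the same payload" effect is to compare the protobuf encoding of a message with its JSON rendering. This is just an illustrative sketch reusing the hypothetical `ExtensionIdentity` message from the RFC examples above, not one of the actual benchmarked _cat/nodes or _search payloads.

```java
import java.nio.charset.StandardCharsets;

import com.google.protobuf.util.JsonFormat;

public final class PayloadSizeComparison {
    public static void main(String[] args) throws Exception {
        ExtensionIdentity message = ExtensionIdentity.newBuilder()
            .setName("hello-world-extension")
            .setOpensearchVersion("3.0.0")
            .build();

        int protobufBytes = message.toByteArray().length;
        int jsonBytes = JsonFormat.printer().print(message).getBytes(StandardCharsets.UTF_8).length;

        // Protobuf writes small numeric field tags instead of repeating field names on the wire,
        // so the binary form is smaller; the gap grows for larger and repeated structures.
        System.out.println("protobuf: " + protobufBytes + " bytes, JSON: " + jsonBytes + " bytes");
    }
}
```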

@bbarani
Member

bbarani commented Feb 6, 2024

@saratvemulapalli @VachaShah can you please confirm whether this change can be included in 2.x without breaking existing APIs? Basically, can this change be added in a backward-compatible manner in the 2.x line?

We are evaluating whether this change requires the 3.0 release or can be included in the 2.x line, so we need your input.

@saratvemulapalli
Member Author

@bbarani we do not know yet. Once the changes are pushed to 3.x as an experimental feature, we will look at backward compatibility. In theory we do see ways the native transport protocol can move to protobuf, but there are a few unknowns with mixed-cluster scenarios that we have to dive into.

For now I would rather say it is a breaking change.

@rursprung
Contributor

This sounds very interesting! What I didn't quite gather from this issue is whether you plan on using this only on the transport layer, or whether you also see it as an option for the public API in the future (e.g. I'm not sure whether #10684 covers only the distribution of search requests to the nodes, or also consumers who want to trigger the search request). Especially if this is then hidden behind clients like opensearch-java, that would be nice, as you'd get the performance improvement fully transparently :)

(Just talking as a nerd here; I currently don't have performance problems caused by the REST/JSON overhead.)
