Please provide code backward compatibility / path for upgrade #1277
Comments
@dfawley could you please take a look and comment?
Sorry about breaking backward compatibility on SupportPackageIsVersion.
If your project uses some APIs in gRPC that got removed or modified in some release, you will need to deal with a similar problem as this. The difference is that, when changing other APIs, we try not to break backward compatibility. But for SupportPackageIsVersion, the break is deliberate. So this problem is actually not specific to it.
Like Google, we use Bazel/Blaze to manage target building/testing. Unlike Google, we do not compile Go proto code on the fly via genfiles (most of our BUILD files are auto-generated via code-dependency analysis tools; checking in the Go proto code simplifies the toolchain). vendor does not play nicely with Bazel (we have mentioned this issue to the Go core team previously). How does Google solve this general issue internally?
Btw, pinning via vendor would not solve the issue either, since that still requires all third-party packages to upgrade to the latest version at once (this is pretty impractical since we rely on several hundred third-party packages...).
I guess I didn't answer the real question. Most stable third-party packages do a pretty good job of ensuring backward compatibility: if I grab a new snapshot of a non-proto/gRPC third-party package, things will generally compile. gRPC, on the other hand, intentionally breaks backward compatibility. By defining only SupportPackageIsVersion(N), anything that was generated with SupportPackageIsVersion(N-1) will break unless it's upgraded at the same time. (It's less of an issue for the proto package since the versions are more stable.) google3 components have to deal with this exact same issue (not sure if the project is still alive since Mike Burrows is now working on brains). When advancing a component version, the API stays backward compatible until all usage of the older API is fully replaced with the new API.
As you mentioned, Google recompiles proto files on the fly, so I understand that may not be practical for external users. I think the solution here is probably still vendoring. I'm not sure what the issue about "vendor does not play nicely with bazel" was, but it seems the Bazel team is making progress on supporting vendor directories.
Hmm, sounds like gRPC usage within Google is small enough that you can still rewrite everything in a single CL. Imagine gRPC had replaced all of stubby3 (or pretend this project was Bigtable or something), and your team needed to make API-breaking changes. It would be impossible to rewrite everything at once, even with the help of Rosie. All I'm saying is that your project will need to solve this issue anyway for Google-internal usage, so why not help us out. =P We build everything at head for the same reason everything at Google is built at head: it ensures that every project within the ecosystem has the latest/greatest fixes. It's significantly harder to ensure every service has all the latest fixes if each service has its own vendor set. Specific to Bazel: yes, you can compile vendored packages, but that trashes your build cache and significantly increases compilation time.
About this specific issue on SupportPackageIsVersion: we try to keep backward compatibility when making API changes, but it's a bit hard to do for this constant. Again, sorry for the breakage.
Thanks for looking into this! |
Oh, in case you need to roll a new version in the future, one way to provide backward compatibility is to add a Version field to ServiceDesc and populate that field as part of codegen (if the field is not populated, the lib can assume it's v4 or whatever). The gRPC lib can check the Version field and behave differently depending on its value. Just food for thought.
An update on this: it's not hard to add support for SupportPackageIsVersion3 alongside SupportPackageIsVersion4. We will get a PR out for this shortly.
Please answer these questions before submitting your issue.
What version of gRPC are you using?
Upgrading from 3 to 4
What version of Go are you using (`go version`)?
1.6 / 1.7 / 1.8
What operating system (Linux, Windows, …) and version?
(Linux) This is a software engineering discipline issue.
What did you do?
If possible, provide a recipe for reproducing the error.
It's impossible to upgrade grpc without upgrading every single package in the repo. While this works for small projects, it's not practical for large projects.
For context, we have several hundred packages using protobuf/gRPC. Many of these packages are third-party and are not controlled by us (i.e., we cannot dictate the pace of a gRPC upgrade).
What did you expect to see?
The ability to upgrade packages piecemeal. Ideally, gRPC should support rolling upgrades (e.g., SupportPackageIsVersion3 & SupportPackageIsVersion4 at the same time, then 4 & 5, then 5 & 6, etc.).
What did you see instead?
grpc.SupportPackageIsVersion* forces all packages to be upgraded at the same time. (I ended up hacking the generated code in order to support piecemeal upgrades.)