[core-tracing] Downgrade to "^0.18.0" of opentelemetry/api for compatibility #14699
Conversation
…y as they sort out some versioning issues between 1.0-rc.0 and the rest of the opentelemetry stack. So:
- Upgrade everyone to core-tracing-preview.12
- Downgrade to opentelemetry/api@^0.18.0 (from 1.0.0-rc.0)
- Update centralized rollup config to accommodate the version range.
- Bump cache version to 5 (for now).
Azure Pipelines successfully started running 1 pipeline(s).
/azp run js - keyvault-keys - tests
Azure Pipelines successfully started running 1 pipeline(s).
Azure Pipelines successfully started running 1 pipeline(s).
/azp run js - service-bus - tests
/azp run js - storage-blob - tests
Azure Pipelines successfully started running 1 pipeline(s).
Azure Pipelines successfully started running 1 pipeline(s).
/azp run js - communication-chat - tests
Azure Pipelines successfully started running 1 pipeline(s).
/azp run js - storage-file-datalake - tests
Azure Pipelines successfully started running 1 pipeline(s).
/azp run js - mixedreality-authentication - tests
Azure Pipelines successfully started running 1 pipeline(s).
So if I understand it correctly, we moved too optimistically to 1.0-rc, and no actual OT implementations support it yet. I am a little surprised that we didn't catch this issue during testing of the 1.0-rc PR or before releasing the new core-tracing package. Since previews get pinned and it is fairly disruptive to re-release all of core, I have to ask: is it worth doing this? OpenTelemetry won't work for some period of time, but presumably they will eventually release 1.0-compatible libraries and we will start working again. @ramya-rao-a what do you think?
@xirzec, @richardpark-msft I have no urgency in releasing this 1.3.0 hotfix for Identity. I'm thinking we can push this to be released in May. I don't want to rush something that alters so many packages.
I am assuming this is also related to open-telemetry/opentelemetry-js#2055, which in turn does not affect us since we do manual propagation.
I doubt it is the case that OT does not work at all at the moment when the code from the master branch is used.
And @xirzec, can you confirm if the endless loop being discussed in open-telemetry/opentelemetry-js#2055 affects us or not?
The loop issue being discussed is happening inside an exporter (which perhaps @hectorhdzg is interested in for the monitor exporter), but it isn't related to us producing spans. My hypothesis is that we aren't affected, but I'm trying to figure out a good repro/test case to validate our current packages against.
@sadasant, you'll still want to release the 1.3.0 hotfix - the original purpose of that release was to make it so everyone could finally reference the same core-tracing package (i.e., this is independent of this particular PR). You should just go with the [email protected] rather than relying on this PR to go out (in which case you can also just use the release pipeline we had).
I did run a basic test, but my "test" must be unrealistic; it sounds like this issue isn't necessarily easy to hit, though.
I'd be perfectly fine if the answer was 'no'. This was a proactive PR, and I didn't have a real chance to see whether we could ignore this problem or not. :)
Just chiming in that I'm totally fine with this. It's a bit unclear how the authors were planning on patching this, but they already have PRs out that are trying to move their existing packages to the opentelemetry/api 1.0.0-rc.0 version.
@xirzec the infinite loop is actually triggered by the exporter, because the actual HTTP calls to Azure Monitor will be generating spans as well via @opentelemetry/instrumentation-http. The issue is with different global contexts; SpanProcessors are responsible for setting this context before calling the export method (https://github.com/open-telemetry/opentelemetry-js/blob/main/packages/opentelemetry-tracing/src/export/BatchSpanProcessor.ts#L147). You should be able to repro using @opentelemetry/instrumentation-http, an Azure SDK using OpenTelemetry, and any exporter sending the data over HTTP.
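For concreteness, here is a rough sketch of the kind of setup being described: HTTP auto-instrumentation plus an exporter that itself ships spans over HTTP, with an Azure SDK call in the middle. It assumes the 0.18.x-era package names mentioned in this thread; the vault URL and secret name are placeholders, and this is a starting point for a repro rather than a verified test case.

```ts
import { NodeTracerProvider } from "@opentelemetry/node";
import { BatchSpanProcessor } from "@opentelemetry/tracing";
import { ZipkinExporter } from "@opentelemetry/exporter-zipkin";
import { registerInstrumentations } from "@opentelemetry/instrumentation";
import { HttpInstrumentation } from "@opentelemetry/instrumentation-http";
import { DefaultAzureCredential } from "@azure/identity";
import { SecretClient } from "@azure/keyvault-secrets";

// Register a tracer provider whose exporter sends data over HTTP.
// Those outgoing requests are themselves visible to the HTTP instrumentation,
// which is the scenario the loop discussion is about.
const provider = new NodeTracerProvider();
provider.addSpanProcessor(new BatchSpanProcessor(new ZipkinExporter()));
provider.register();

// Auto-instrument Node's http/https modules.
registerInstrumentations({ instrumentations: [new HttpInstrumentation()] });

async function main(): Promise<void> {
  // Placeholder vault and secret; any traced Azure SDK call would do here.
  const client = new SecretClient(
    "https://<my-vault>.vault.azure.net",
    new DefaultAzureCredential()
  );
  await client.getSecret("example-secret");
}

main().catch(console.error);
```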
@richardpark-msft packages using core-tracing 1.0.0-preview.11 seem to work fine together. Is that correct? Would merging this PR break packages currently depending on 1.0.0-preview.11?
I did some testing with
It produced the correct spans as expected through zipkin without any trouble. Not sure if |
I did some testing with the following gist: https://gist.github.com/sadasant/47c84fb78a091ee37b62d462c3b59333 It shows that nothing breaks and that the output makes sense. My test worked with core-tracing ^1.0.0-preview.11, identity 2.0.0-beta.1 (with core-tracing 1.0.0-preview.11), and keyvault-certificates (also on 1.0.0-preview.11). When I switch to Identity 1.3.0 (with core-tracing 1.0.0-preview.12), the test fails (I'm guessing that means keyvault-certificates would need to upgrade to 1.0.0-preview.12).
I tested mixing the current master of storage-blob and the latest released version of storage-queue in one app (without using tracing). It worked as I expected. They depend on different versions of core-tracing, which have the opentelemetry versions pinned.
Posting my findings as well:

```json
"@azure/core-tracing": "^1.0.0-preview.11",
"@azure/identity": "^1.2.5",
"@azure/keyvault-keys": "file:../../../keyvault-keys/azure-keyvault-keys-4.2.0-beta.5.tgz",
"@azure/keyvault-secrets": "file:../../azure-keyvault-secrets-4.2.0-beta.4.tgz",
"@opentelemetry/api": "^1.0.0-rc.0",
"@opentelemetry/exporter-zipkin": "^0.18.2",
"@opentelemetry/node": "^0.18.2",
"@opentelemetry/tracing": "^0.18.2",
"@types/node": "^14.14.37",
"dotenv": "^8.2.0",
"typescript": "^4.2.3"
```

I kept keyvault-secrets pinned to master but tried keyvault-keys latest, next, dev, and master to check, and saw no errors without tracing. Tracing seemed to be working well for me with the above deps.
Thanks @xirzec, @sadasant, @maorleger and @jeremymeng! With the above findings, we conclude that we do not need to downgrade opentelemetry/api and can go ahead with the changes in our master branch.
It appears moving to 1.0.0-rc.0 was premature, as the remainder of the opentelemetry ecosystem has not yet moved to it. It looks like that move is prepped to happen at any time, but relying on that timing means we can't get a consistent set of tracing packages within our own stack, which isn't a good idea.
There is a version matrix here: 0.18.x is the most recent version listed as compatible with the latest core version. So we're a bit ahead and should go back.
As they sort out some versioning/compatibility issues between 1.0-rc.0 and the rest of the opentelemetry stack, we will:
- Upgrade everyone to core-tracing-preview.12.
- Downgrade opentelemetry/api to ^0.18.0 (from 1.0.0-rc.0); see the sketch after this list.
- Update the centralized rollup config to accommodate the version range.
- Bump the cache version to 5 (for now).
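As a sketch of the intent of the downgrade bullet, the dependency range on @opentelemetry/api moves back to the 0.18 line, roughly like the fragment below (illustrative only, not the actual manifest diff):

```json
{
  "dependencies": {
    "@opentelemetry/api": "^0.18.0"
  }
}
```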
NOTE: @xirzec, I am fairly certain we're going to run into the "Two incompatible versions of @azure/core-tracing have been loaded." error. This happened last time in the samples, and I erroneously bumped `GLOBAL_TRACER_SYMBOL` up in addition to `GLOBAL_TRACER_VERSION` to fix it. It happened because various packages were pulling dependencies from a combination of npm and the local tree, and thus getting two different core-tracing versions. I believe the only way to fix this is to time the releases of the packages so we get 1) core-tracing, then 2) core*, and then 3) identity out. Then, no matter where the packages pull their dependencies from, they'll get the same core-tracing package.
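For readers unfamiliar with the guard being referenced: the sketch below is not the actual @azure/core-tracing source, just an illustration of the pattern behind that error message, where a versioned cache is stored on the global object under a well-known symbol so every copy of the package loaded into the process can detect an incompatible copy. The version number and symbol key here are assumptions.

```ts
// Illustrative sketch only; names and values are assumptions, not the real source.
const GLOBAL_TRACER_VERSION = 4;
// Symbol.for() returns the same symbol for the same key in every copy of the module.
const GLOBAL_TRACER_SYMBOL = Symbol.for("@azure/core-tracing.tracerCache");

interface TracerCache {
  version: number;
  tracer?: unknown; // the shared tracer instance, if one has been set
}

export function getGlobalTracerCache(): TracerCache {
  const globalObj = globalThis as any;
  let cache: TracerCache | undefined = globalObj[GLOBAL_TRACER_SYMBOL];
  if (!cache) {
    cache = { version: GLOBAL_TRACER_VERSION };
    globalObj[GLOBAL_TRACER_SYMBOL] = cache;
  } else if (cache.version !== GLOBAL_TRACER_VERSION) {
    // The failure mode described in the NOTE above: two copies of the package
    // with different cache versions are loaded into the same process.
    throw new Error(
      "Two incompatible versions of @azure/core-tracing have been loaded."
    );
  }
  return cache;
}
```

Because the symbol is shared via Symbol.for across module copies, the mismatch only surfaces when the version constants differ, which is why bumping the version is what triggers the error between differently pinned packages.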