
release-23.1: backupccl,kvserver: log failed ExportRequest trace on client and server #104214

Closed

Conversation


@blathers-crl blathers-crl bot commented Jun 1, 2023

Backport 1/1 commits from #102793 on behalf of @adityamaru.

/cc @cockroachdb/release


This change strives to improve observability around
backups that fail because of timed-out ExportRequests.
Currently, there is very little indication of what the
request was doing when the client cancelled the context
after the predetermined timeout window. With this change
we now log the trace of the ExportRequest that failed. If
the ExportRequest was served locally, the trace will be
part of the sender's tracing span. However, if the request
was served on a remote node, the sender does not wait for
the server-side evaluation to notice the context
cancellation. To work around this, we also print the trace
on the server side if the request encountered a context
cancellation and the associated tracing span is not a noop.
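
Roughly, the server-side behaviour described above follows the pattern
sketched below. This is a minimal illustration only: the `Span` interface,
its `IsNoop` and `RecordingString` methods, and the function name are
assumptions made for the sketch, not the actual CockroachDB kvserver or
tracing APIs.

```go
package example

import (
	"context"
	"errors"
	"log"
)

// Span stands in for a tracing span; IsNoop and RecordingString are
// assumed methods used only for this illustration.
type Span interface {
	IsNoop() bool            // true if the span records nothing
	RecordingString() string // the collected trace rendered as text
}

// maybeLogCancelledTrace logs the request's trace when evaluation was cut
// short by a context cancellation (for example, because the client's
// per-request timeout fired) and the request carried a real, recording span.
func maybeLogCancelledTrace(ctx context.Context, sp Span, evalErr error) {
	cancelled := errors.Is(evalErr, context.Canceled) || ctx.Err() != nil
	if !cancelled {
		return
	}
	// A noop span has no recording, so there is nothing useful to print.
	if sp == nil || sp.IsNoop() {
		return
	}
	log.Printf("ExportRequest failed with context cancellation; trace:\n%s",
		sp.RecordingString())
}
```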

This change also adds a private cluster setting
`bulkio.backup.export_request_verbose_tracing` that, when
set to true, sends all backup ExportRequests with verbose
tracing enabled.
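
For reference, a private boolean cluster setting of this kind would
typically be registered via `settings.RegisterBoolSetting`. The sketch below
is not the code from this PR: the variable name, description string, and
default are illustrative, and the exact `RegisterBoolSetting` signature and
setting-class constants vary between releases.

```go
package backupccl

import "github.com/cockroachdb/cockroach/pkg/settings"

// exportRequestVerboseTracing, if true, causes every backup ExportRequest
// to be sent with verbose tracing enabled (illustrative sketch only).
var exportRequestVerboseTracing = settings.RegisterBoolSetting(
	settings.TenantWritable,
	"bulkio.backup.export_request_verbose_tracing",
	"send each backup ExportRequest with a verbose tracing span",
	false, // private setting, off by default
)
```

The backup code would then consult the setting, for example via
`exportRequestVerboseTracing.Get(&st.SV)` for a `*cluster.Settings` value
`st`, before deciding whether to request verbose tracing on each
ExportRequest (the surrounding call site is likewise an assumption).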

To debug a backup failing with a timed-out export request we
can now:

  • SET CLUSTER SETTING bulkio.backup.export_request_verbose_tracing = true;
  • SET CLUSTER SETTING trace.snapshot.rate = '1m';

Once the backup times out, we can look at the logs for the
server-side and client-side ExportRequest traces to
understand where execution was stuck for so long.

Fixes: #86047
Release note: None


Release justification: improving observability into a common cause of escalations

@blathers-crl blathers-crl bot requested review from a team as code owners June 1, 2023 16:48
@blathers-crl blathers-crl bot force-pushed the blathers/backport-release-23.1-102793 branch from 3e07121 to fd26f97 June 1, 2023 16:49
@blathers-crl blathers-crl bot added the blathers-backport This is a backport that Blathers created automatically. label Jun 1, 2023
@blathers-crl blathers-crl bot removed the request for review from a team June 1, 2023 16:49
@blathers-crl blathers-crl bot added the O-robot Originated from a bot. label Jun 1, 2023
@blathers-crl blathers-crl bot requested a review from rhu713 June 1, 2023 16:49
@blathers-crl

blathers-crl bot commented Jun 1, 2023

Thanks for opening a backport.

Please check the backport criteria before merging:

  • Patches should only be created for serious issues or test-only changes.
  • Patches should not break backwards-compatibility.
  • Patches should change as little code as possible.
  • Patches should not change on-disk formats or node communication protocols.
  • Patches should not add new functionality.
  • Patches must not add, edit, or otherwise modify cluster versions; or add version gates.

If some of the basic criteria cannot be satisfied, ensure that the exceptional criteria are satisfied within.

  • There is a high priority need for the functionality that cannot wait until the next release and is difficult to address in another way.
  • The new functionality is additive-only and only runs for clusters which have specifically “opted in” to it (e.g. by a cluster setting).
  • New code is protected by a conditional check that is trivial to verify and ensures that it only runs for opt-in clusters.
  • The PM and TL on the team that owns the changed code have signed off that the change obeys the above rules.

Add a brief release justification to the body of your PR to justify this backport.

Some other things to consider:

  • What did we do to ensure that a user that doesn’t know & care about this backport, has no idea that it happened?
  • Will this work in a cluster of mixed patch versions? Did we test that?
  • If a user upgrades a patch version, uses this feature, and then downgrades, what happens?

@blathers-crl blathers-crl bot force-pushed the blathers/backport-release-23.1-102793 branch from 70cbfe5 to 8527308 June 1, 2023 16:49
@blathers-crl

blathers-crl bot commented Jun 1, 2023

It looks like your PR touches production code but doesn't add or edit any test code. Did you consider adding tests to your PR?

🦉 Hoot! I am a Blathers, a bot for CockroachDB. My owner is dev-inf.

@blathers-crl blathers-crl bot requested review from knz and stevendanna June 1, 2023 16:49
@cockroach-teamcity

This change is Reviewable

@adityamaru adityamaru removed request for a team, rhu713 and adityamaru June 1, 2023 16:57

@erikgrinaker erikgrinaker left a comment

Please hold off on this backport. I'm seeing tons of log spam in roachtests. Will open an issue in a bit.

@erikgrinaker

#105378

@adityamaru

adityamaru commented Jul 10, 2023

@erikgrinaker with #105378 fixed, are we okay merging this with the log spam change included? This will help debug timing-out export requests in CC such as https://github.com/cockroachlabs/support/issues/2452

@adityamaru adityamaru requested a review from erikgrinaker July 10, 2023 19:53
@adityamaru

Closing in favour of #106611.

@adityamaru adityamaru closed this Jul 11, 2023
@rafiss rafiss deleted the blathers/backport-release-23.1-102793 branch December 11, 2023 17:00