
Remove stripping of step name and replace with substring search #22415

Merged
merged 1 commit into apache:master on Jul 28, 2022

Conversation

AnandInguva
Contributor

The strip method strips out any of the characters passed to it, treating its argument as a character set rather than a literal prefix. For the PytorchRunInference metrics, it strips out the leading p and outputs ytorchruninference instead.
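
For readers hitting the same pitfall: Python's str.strip(chars) removes any leading or trailing character that appears in the set chars, not a literal prefix or suffix. A minimal sketch of the failure mode, assuming a hypothetical strip argument of 'pardo(' (the actual characters used in the Beam code may differ):

    step = 'pytorchruninference/pardo(_runinferencedofn)'
    # strip() removes any leading/trailing character in the set {'p','a','r','d','o','('},
    # so the leading 'p' of 'pytorch...' is dropped along with nothing else
    print(step.strip('pardo('))  # -> 'ytorchruninference/pardo(_runinferencedofn)'

    # the approach named in this PR's title: a substring search leaves the name intact
    if 'pardo(' in step:
        pass  # handle ParDo steps without mangling the step name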



@AnandInguva
Contributor Author

Run Python 3.9 PostCommit

@codecov

codecov bot commented Jul 22, 2022

Codecov Report

Merging #22415 (52cbe32) into master (f2f239a) will decrease coverage by 0.01%.
The diff coverage is 0.00%.

@@            Coverage Diff             @@
##           master   #22415      +/-   ##
==========================================
- Coverage   74.17%   74.16%   -0.01%     
==========================================
  Files         706      706              
  Lines       93190    93193       +3     
==========================================
- Hits        69122    69116       -6     
- Misses      22800    22809       +9     
  Partials     1268     1268              
Flag Coverage Δ
python 83.53% <0.00%> (-0.02%) ⬇️

Flags with carried forward coverage won't be shown.

Impacted Files Coverage Δ
...beam/testing/load_tests/load_test_metrics_utils.py 34.07% <0.00%> (-0.39%) ⬇️
sdks/python/apache_beam/utils/interactive_utils.py 95.12% <0.00%> (-2.44%) ⬇️
sdks/python/apache_beam/runners/direct/executor.py 96.46% <0.00%> (-0.55%) ⬇️
...eam/runners/interactive/interactive_environment.py 91.71% <0.00%> (-0.31%) ⬇️
...hon/apache_beam/runners/worker/bundle_processor.py 93.42% <0.00%> (-0.25%) ⬇️


@github-actions
Contributor

Assigning reviewers. If you would like to opt out of this review, comment assign to next reviewer:

R: @TheNeuralBit for label python.

Available commands:

  • stop reviewer notifications - opt out of the automated review tooling
  • remind me after tests pass - tag the comment author after tests pass
  • waiting on author - shift the attention set back to the author (any comment or push by the author will return the attention set to the reviewers)

The PR bot will only process comments in the main thread (not review comments).

@TheNeuralBit
Member

Is there anywhere we can test this?

@AnandInguva
Contributor Author

Is there anywhere we can test this?

I couldn't find any tests related to it. Any suggestions?

@AnandInguva
Contributor Author

Run Python 3.9 PostCommit

@TheNeuralBit
Member

Is there anywhere we can test this?

I couldn't find any tests related to it. Any suggestions?

How did you detect the issue?

@AnandInguva
Contributor Author

AnandInguva commented Jul 27, 2022

Is there anywhere we can test this?

I couldn't find any tests related to it. Any suggestions?

How did you detect the issue?

When I was trying to publish metrics to InfluxDB and BigQuery, the metrics were named like this:

INFO:apache_beam.testing.load_tests.load_test_metrics_utils:Metric: RunInferencePytorch_ytorchruninference/pardo(_runinferencedofn)_max_inference_batch_latency_micro_secs Value: 1592367

https://ci-beam.apache.org/job/beam_Inference_Python_Benchmarks_Dataflow_PR/9/console

@TheNeuralBit
Member

Is there anywhere we can test this?

I couldn't find any tests related to it. Any suggestions?

How did you detect the issue?

When I was trying to publish metrics to InfluxDB and BigQuery, the metrics were named like this:

INFO:apache_beam.testing.load_tests.load_test_metrics_utils:Metric: RunInferencePytorch_ytorchruninference/pardo(_runinferencedofn)_max_inference_batch_latency_micro_secs Value: 1592367

https://ci-beam.apache.org/job/beam_Inference_Python_Benchmarks_Dataflow_PR/9/console

Ah ok, thanks. I suppose this is just testing infrastructure so we don't need to be too picky.

@TheNeuralBit merged commit b0b9c68 into apache:master on Jul 28, 2022