
Posting status to a measurement with exception message when exception is thrown out of node #104

Open
dzmitry-moisa opened this issue Nov 9, 2020 · 5 comments


dzmitry-moisa commented Nov 9, 2020

The plugin generates a lot of junk when any command throws an exception outside of a node construct: instead of writing to the job or stage measurement, it creates a new measurement named after the exception message.

Steps to reproduce:

  1. Configure a connection to InfluxDB
  2. Create a pipeline job
  3. Throw an exception without wrapping it in node() {...} or stage() {...} (see the minimal Jenkinsfile sketch below)
  4. The plugin creates a measurement named after the exception message
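
For reference, a minimal scripted Jenkinsfile that reproduces this might look like the following sketch (the error message is arbitrary; any exception raised before the first node()/stage() should trigger the behavior):

```groovy
// Minimal reproduction sketch: the exception is thrown at the top level,
// outside any node() or stage() block, so the plugin has no stage to
// attribute it to and instead creates a measurement named after the message.
error 'Boom: thrown outside of any node/stage'

node {
    stage('Build') {
        echo 'never reached'
    }
}
```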
jeffpearce (Contributor) commented Nov 9, 2020

I've been trying to figure out a better way to handle these for a while. Originally I added the ability to log exceptions outside of the pipeline because we had a couple of scripts that did this, and I wanted to be able to get counts for particular errors. However, in our case, I wish I'd taken the approach that they were badly written scripts, as the exceptions could have happened inside a stage; they just didn't.

Curious what would work for you. It seems clear that it should write to the job measurement, but if an exception happens outside of a stage, I can see a case for not trying to log that.

Option 1: ignore exceptions that happen outside of a stage, and just write the job result
Option 2: use a generic name when writing to the stage measurement, something like "non-stage error"
Option 3: something else?
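
For comparison, a script written the way I wish ours had been, where everything runs inside a stage so an exception is attributed to it, might look something like this sketch (the stage names and shell commands are illustrative):

```groovy
// Sketch of a "well-formed" pipeline: all logic lives inside stages,
// so any exception is recorded against the stage that threw it rather
// than spawning a measurement named after the exception message.
node {
    stage('Setup') {
        sh 'make prepare'   // if this throws, the failure belongs to 'Setup'
    }
    stage('Build') {
        sh 'make build'
    }
}
```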

@dzmitry-moisa (Author)

I think Option 2 would be the better way. It would let us get stats on such cases and fix them ASAP.

@jeffpearce (Contributor)

That's the way I was leaning as well. I'll make that change. Thanks for reporting!

@FCamborda

Heyo, has this already been addressed? :)

@UlrichBlunck

Hey @jeffpearce,
we've encountered this error in our monitoring database as well. Since we help collect data for multiple projects, it's hard to give a proper overview of the number of occurring errors.
Having this wrapped in a more easily matchable way would be super helpful! ;) +1
