Bug: metadata and dimensions not cleared on publish #1115
Hi @perpil, thank you for taking the time to open this issue. I have looked into it and was able to reproduce the behavior described, as well as confirm that the Python version handles this same case differently, as you pointed out. I have created a repo with a full reproduction example, which can be found here: https://github.com/dreamorosi/repro-issue-1115 (the link might become dead after this issue is resolved). The function example is pretty contrived, but it helps to demonstrate the issue:

import {
  Metrics,
  logMetrics,
  MetricUnits,
} from "@aws-lambda-powertools/metrics";
import middy from "@middy/core";

const metrics = new Metrics({ namespace: "someName" });
let requestNumber = 1;

export const handler = middy(async () => {
  switch (requestNumber) {
    case 1:
      metrics.addMetadata("field", "value");
      break;
    case 2:
      metrics.addMetadata("field2", "value2");
      break;
    case 3:
      metrics.addDimension("dim1", "value1");
      break;
    case 4:
      metrics.addDimensions({
        dim2: "val2",
      });
      break;
    default:
      break;
  }
  metrics.addMetric("successfulBooking", MetricUnits.Count, 1);
  requestNumber++;

  return;
}).use(logMetrics(metrics, { throwOnEmptyMetrics: true }));

In the function above we keep a count of the request number. We then use this count to conditionally add metadata or dimensions to the metric being emitted. Since we are using the logMetrics middleware, which publishes the stored metrics at the end of each invocation, we would expect metadata and dimensions to be cleared between requests. Instead, the metadata and dimensions pile up.

I have then tested a similar function implementation with the Python version of Powertools:

from aws_lambda_powertools import Metrics
from aws_lambda_powertools.metrics import MetricUnit

class RequestNo:
    def __init__(self):
        self.number = 1

    def add(self):
        self.number = self.number + 1


metrics = Metrics(namespace="someName")
request_number = RequestNo()


@metrics.log_metrics(raise_on_empty_metrics=True)
def lambda_handler(event, context):
    metrics.add_metric(name="SuccessfulBooking", unit=MetricUnit.Count, value=1)
    if request_number.number == 1:
        metrics.add_metadata(key="field", value="value")
    elif request_number.number == 2:
        metrics.add_metadata(key="field2", value="value2")
    elif request_number.number == 3:
        metrics.add_dimension(name="dim1", value="value1")
    elif request_number.number == 4:
        metrics.add_dimension(name="dim2", value="value2")
    else:
        pass
    request_number.add()
    return

In this case the dimensions and metadata were cleared out after each request.

This is indeed a bug. We'll try to apply a fix as soon as we have capacity. If anyone would like to contribute a fix, please leave a comment below expressing interest and discussing a potential fix and the scope of the PR before opening one.
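For anyone hitting this before a fix is released, a possible interim workaround is to reset the per-request state explicitly at the start of each invocation. This is only a minimal sketch, assuming the public clearMetadata() and clearDimensions() methods named in this issue, and it is not an officially recommended pattern:

import { Metrics, MetricUnits } from "@aws-lambda-powertools/metrics";

const metrics = new Metrics({ namespace: "someName" });

export const handler = async (): Promise<void> => {
  // Workaround sketch: explicitly reset per-request state so metadata and
  // dimensions added by a previous invocation of this execution environment
  // do not leak into the current one.
  metrics.clearMetadata();
  metrics.clearDimensions();

  metrics.addMetadata("field", "value");
  metrics.addMetric("successfulBooking", MetricUnits.Count, 1);
  metrics.publishStoredMetrics();
};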
I would like to help with this issue. Could you assign me, please? I understand the problem, but I'm lacking context, so I will need some time and help to get into it. Thanks.
Hi @shdq, thanks for taking the initiative. I would suggest moving forward as follows:
After that, maybe you could write down in this issue a high-level plan of where/what the changes would be, and what kind of unit tests you'd want to add to prove that the issue is fixed and avoid future regressions. We can have a discussion starting from there, and of course if you have any questions on any of the above please feel free to reach out; we're happy to help. PS: we also have a Discord server (invite link) that you can join to chat in case of questions.
Hi there! I managed to set up the dev environment. Small feedback: one of the dev dependencies seems to be missing from the setup instructions. I read the docs, looked at the code, got familiar with it, and ran the tests. This is what I suggest.

A test for the metadata: verify that metadata added during one invocation does not show up in the metrics emitted by the next invocation.

A test for the dimensions: verify the same for dimensions, i.e. that a dimension added during one invocation is not carried over into the next one.

A fix for this is to invoke clearMetadata() and clearDimensions() as part of publishStoredMetrics(), since that is where the stored metrics are flushed.

Should I add some more tests for default dimensions? As I understand from the docs they're persistent across metrics, and it seems like this change is not going to affect default dimensions. Looking forward to your response.
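To make the intent of those tests concrete, here is a rough sketch of what one of them could look like, assuming Jest and the public Metrics API (serializeMetrics() is assumed to be publicly accessible here); the actual test file, naming, and assertions may differ:

import { Metrics, MetricUnits } from "@aws-lambda-powertools/metrics";

describe("publishStoredMetrics", () => {
  test("metadata added before publishing is not carried over to the next serialization", () => {
    const metrics = new Metrics({ namespace: "someName" });

    // First "invocation": add a metric plus some metadata, then publish.
    metrics.addMetric("successfulBooking", MetricUnits.Count, 1);
    metrics.addMetadata("field", "value");
    metrics.publishStoredMetrics();

    // Second "invocation": add only a metric and serialize the EMF output.
    metrics.addMetric("successfulBooking", MetricUnits.Count, 1);
    const emfOutput = metrics.serializeMetrics();

    // With the fix in place, the metadata key from the previous invocation is gone.
    expect(emfOutput).not.toHaveProperty("field");
  });
});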
Hi @shdq, thanks for following up on this and for taking the time to flesh out the changes, really appreciate it! The tests for metadata & dimensions look reasonable. Depending on how you implement this we might have to make some small tweaks to get full coverage, but we can move forward for now and see once we get there.
Agree with this, I think that's the way to go.
I'd change the current
I'd be more inclined to keep this PR focused on fixing the bug; we can address this in a separate one. Once the bug fix is merged, if you're still interested in working on this we can work on another PR. From our side, whenever you are ready, you can move forward with the PR. We'll be happy to review it once it's up :) Thank you again for your time, Sergei! About the missing dependency, you're right. One of the PRs currently open is going to fix that once merged. Sorry about that!
The fix has been released in v1.4.0.
Bug description
When you invoke publishStoredMetrics, it clears the storedMetrics, but it doesn't clear the metadata or dimensions. My understanding is that metadata and dimensions are per request, whereas defaultDimensions should persist between requests. The Python implementation clears metadata and dimensions on publish.

Expected Behavior
Request 1:
Log entry for metric has:
field: value
Request 2:
Log entry for metric has:
field2: value2
Current Behavior
Request 1:
Log entry for metric has:
field: value
Request 2:
Log entry for metric has:
field: value
and field2: value2
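To make the above concrete, the serialized EMF log entry for Request 2 under the current behavior would contain both metadata keys at the top level, roughly like this. This is a hand-written, simplified illustration based on the repro further up in this thread, not a captured log; the timestamp and default service dimension values are placeholders:

// Illustrative shape only, not an actual captured log entry.
const requestTwoLogEntry = {
  _aws: {
    Timestamp: 1660000000000, // placeholder
    CloudWatchMetrics: [
      {
        Namespace: "someName",
        Dimensions: [["service"]], // default service dimension
        Metrics: [{ Name: "successfulBooking", Unit: "Count" }],
      },
    ],
  },
  service: "service_undefined", // placeholder default service value
  successfulBooking: 1,
  field: "value", // leaked from Request 1 (the bug)
  field2: "value2", // added during Request 2
};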
Possible Solution
Invoke clearMetadata() and clearDimensions() as part of publishStoredMetrics().
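As a sketch of where such a change could live, expressed here as a subclass purely for illustration; the real fix belongs inside the library's Metrics class and its exact shape may differ:

import { Metrics } from "@aws-lambda-powertools/metrics";

// Hypothetical illustration of the proposed behavior, not the library's actual code.
class PatchedMetrics extends Metrics {
  public publishStoredMetrics(): void {
    // Emit the EMF blob and clear the stored metrics, as today.
    super.publishStoredMetrics();
    // Proposed additions: also reset per-request metadata and dimensions,
    // matching what the Python implementation does on publish.
    this.clearMetadata();
    this.clearDimensions();
  }
}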
Steps to Reproduce
See current behavior
Environment