Feature: Integrate custom metrics with new CloudWatch embedded metric #1
Comments
PR implementing Lambda support in the new CloudWatch library: awslabs/aws-embedded-metrics-python@a352d40. At first glance, it looks worth bringing in as an additional dependency, since it handles a few edge cases with dimensions, captures the Lambda context including the trace ID, etc.

Pros:

Cons:
I'll put it to the test; looking at the Lambda-specific implementation, a serializer and a print would do.
Chatting with @nmoutschen, it's possible to keep a simpler interface like the one we have with log_metric, but change the implementation to keep metrics in state and flush them upon handler return. Initial idea to try, possibly next week:
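A minimal sketch of that idea, assuming a hypothetical `Metrics` helper (not the actual Powertools API): metrics accumulate in memory, and a decorator serializes and prints a single EMF blob once the handler returns.

```python
import functools
import json
import time


class Metrics:
    """Hypothetical sketch: collect metrics in state, flush one EMF blob on return."""

    def __init__(self, namespace, dimensions):
        self.namespace = namespace
        self.dimensions = dimensions  # e.g. {"functionVersion": "$LATEST"}
        self.metrics = []

    def add_metric(self, name, unit, value):
        self.metrics.append({"Name": name, "Unit": unit, "Value": value})

    def serialize(self):
        # Gotcha from above: EMF requires the timestamp in milliseconds
        return json.dumps({
            "_aws": {
                "Timestamp": int(time.time() * 1000),
                "CloudWatchMetrics": [
                    {
                        "Namespace": self.namespace,
                        "Dimensions": [list(self.dimensions)],
                        "Metrics": [
                            {"Name": m["Name"], "Unit": m["Unit"]} for m in self.metrics
                        ],
                    }
                ],
            },
            # Dimension and metric values live at the top level of the blob
            **self.dimensions,
            **{m["Name"]: m["Value"] for m in self.metrics},
        })

    def log_metrics(self, handler):
        # Decorator: flush accumulated metrics after the handler returns
        @functools.wraps(handler)
        def wrapper(event, context):
            response = handler(event, context)
            print(self.serialize())
            return response

        return wrapper
```

Usage would mirror the existing log_metric feel: decorate the handler with `@metrics.log_metrics` and call `metrics.add_metric(...)` anywhere inside it.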
Working example (gotcha: the timestamp must be in milliseconds):

```python
import datetime
import json


def lambda_handler(event, context):
    print(json.dumps({
        "_aws": {
            "CloudWatchMetrics": [
                {
                    "Namespace": "Test/CustomMetrics",
                    "Dimensions": [["functionVersion"]],
                    "Metrics": [
                        {
                            "Name": "time",
                            "Unit": "Milliseconds",
                        }
                    ],
                }
            ],
            "Timestamp": int(datetime.datetime.now().timestamp() * 1000),
        },
        "functionVersion": context.function_version,
        "time": 100,
        "requestId": context.aws_request_id,
    }))
```
Another discovery I overlooked at first: dimensions are shared across all metrics within the array to be flushed. That could easily drive costs up if one isn't aware of it, since each dimension combination creates a new custom metric.
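To illustrate the cost implication (a sketch with hypothetical dimension sets and metric names): within one EMF blob, every metric is emitted against every dimension set, so the number of distinct billed custom metrics grows multiplicatively.

```python
# Sketch: each (dimension set, metric) pair becomes a distinct custom metric,
# so the billed metric count is len(dimension_sets) * len(metric_names).
dimension_sets = [
    ["functionVersion"],
    ["functionVersion", "coldStart"],
    ["service"],
]
metric_names = ["time", "errors"]

billed_metrics = len(dimension_sets) * len(metric_names)
print(billed_metrics)  # 6 distinct custom metrics from a single flush
```

Keeping the dimension list small by default is the safer design for the library.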
First implementation: https://gist.github.com/heitorlessa/5c918d35073bc4c7223de7ffdcc18735 I'll port it to the Serverless Airline project first, which still uses the original version of Powertools. Then I'll create a PR and fully document it. Leaving it here in case someone needs it.
* feat: initial working skeleton (Signed-off-by: heitorlessa <[email protected]>)
* feat: use global lazy import for intellisense
* fix: default lazy provider
* chore: trigger CI #1
* chore: trigger CI #2
* chore: uncaught linting
* feat: add minimum generic interface for Tracing Provider and Segment
* fix: type hints
* refactor: use JSON Schema as dict to reduce I/O latency
* docs: changelog
* test: add perf tests for import
* test: adjust perf bar to flaky/CI machines
* fix(pytest): enforce coverage upon request only (Signed-off-by: heitorlessa <[email protected]>)
* chore: address PR's review
* chore: correctly redistribute apache 2.0 unmodified code
* chore: test labeler
* refactor: lazy load fastjsonschema to prevent unnecessary http.client sessions
Currently, we require an additional log processing stack to create custom metrics. Amazon CloudWatch just released support for high-cardinality fields and custom metrics via a new metric format that we could integrate with.
Tasks: