From 7980860dfbd05f8f88a2062dee268de558328259 Mon Sep 17 00:00:00 2001
From: heitorlessa
Date: Mon, 21 Sep 2020 10:52:10 +0200
Subject: [PATCH 1/6] improv: increase quotes size

Signed-off-by: heitorlessa
---
 docs/src/styles/global.css | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/docs/src/styles/global.css b/docs/src/styles/global.css
index deeda92fac5..fb8ce5a54e6 100644
--- a/docs/src/styles/global.css
+++ b/docs/src/styles/global.css
@@ -25,3 +25,7 @@ tr > td {
 .token.property {
     color: darkmagenta !important
 }
+
+blockquote {
+    font-size: 1.15em
+}

From 10686b00857e4adc11809bece95ab5e63d2476ed Mon Sep 17 00:00:00 2001
From: heitorlessa
Date: Mon, 21 Sep 2020 10:53:43 +0200
Subject: [PATCH 2/6] improv: add fixtures to aid tests #161

Signed-off-by: heitorlessa
---
 docs/content/core/metrics.mdx | 30 ++++++++++++++++++++++++++++--
 1 file changed, 28 insertions(+), 2 deletions(-)

diff --git a/docs/content/core/metrics.mdx b/docs/content/core/metrics.mdx
index be1b9feaa5b..9c341dff1bb 100644
--- a/docs/content/core/metrics.mdx
+++ b/docs/content/core/metrics.mdx
@@ -251,11 +251,37 @@ This has the advantage of keeping cold start metric separate from your applicati

 ## Testing your code

+### Environment variables
+
 Use `POWERTOOLS_METRICS_NAMESPACE` and `POWERTOOLS_SERVICE_NAME` env vars when unit testing your code to ensure metric namespace and dimension objects are created, and your code doesn't fail validation.

 ```bash:title=pytest_metric_namespace.sh
-
 POWERTOOLS_SERVICE_NAME="Example" POWERTOOLS_METRICS_NAMESPACE="Application" python -m pytest
 ```

-You can ignore this if you are explicitly setting namespace/default dimension by passing the `namespace` and `service` parameters when initializing Metrics: `metrics = Metrics(namespace=ApplicationName, service=ServiceName)`.
+If you prefer setting environment variables for specific tests, and are using Pytest, you can use the [monkeypatch](https://docs.pytest.org/en/latest/monkeypatch.html) fixture:
+
+```python:title=pytest_env_var.py
+from aws_lambda_powertools import Metrics
+
+def test_namespace_env_var(monkeypatch):
+    # Set POWERTOOLS_METRICS_NAMESPACE before initializing Metrics
+    monkeypatch.setenv("POWERTOOLS_METRICS_NAMESPACE", "Application")
+
+    metrics = Metrics()
+    ...
+```
+
+> Ignore this if you are explicitly setting namespace/default dimension via `namespace` and `service` parameters: `metrics = Metrics(namespace="ApplicationName", service="ServiceName")`
+
+### Clearing metrics
+
+`Metrics` keeps metrics in memory across multiple instances. If you need to test this behaviour, you can use the following Pytest fixture to ensure metrics are reset, including the cold start metric:
+
+```python:title=pytest_metrics_reset_fixture.py
+import pytest
+
+from aws_lambda_powertools import Metrics
+from aws_lambda_powertools.metrics import metrics as metrics_global
+
+@pytest.fixture(scope="function", autouse=True)
+def reset_metric_set():
+    # Clear out all metric data prior to every test
+    metrics = Metrics()
+    metrics.clear_metrics()
+    metrics_global.is_cold_start = True  # ensure each test has cold start
+    yield
+```

From 286bf92958fc304f0d5e9463a06d041cedf638ae Mon Sep 17 00:00:00 2001
From: heitorlessa
Date: Tue, 22 Sep 2020 15:34:45 +0200
Subject: [PATCH 3/6] improv: increase width for content

---
 docs/src/gatsby-theme-apollo-core/components/flex-wrapper.js | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/src/gatsby-theme-apollo-core/components/flex-wrapper.js b/docs/src/gatsby-theme-apollo-core/components/flex-wrapper.js
index 94e1cd42e24..773b6751d0c 100644
--- a/docs/src/gatsby-theme-apollo-core/components/flex-wrapper.js
+++ b/docs/src/gatsby-theme-apollo-core/components/flex-wrapper.js
@@ -3,7 +3,7 @@ import styled from '@emotion/styled';

 const FlexWrapper = styled.div({
   display: 'flex',
   minHeight: '100vh',
-  maxWidth: 1600,
+  maxWidth: '87vw',
   margin: '0 auto'
 });

From 4090c8cd35831198fd183748084d6877e3f4da9c Mon Sep 17 00:00:00 2001
From: 
heitorlessa
Date: Tue, 22 Sep 2020 15:39:12 +0200
Subject: [PATCH 4/6] improv: log sampling wording

Signed-off-by: heitorlessa
---
 docs/content/core/logger.mdx | 32 +++++++++++++++++++++++++-------
 1 file changed, 25 insertions(+), 7 deletions(-)

diff --git a/docs/content/core/logger.mdx b/docs/content/core/logger.mdx
index 0f8bb7fa9b9..ab12c2223cd 100644
--- a/docs/content/core/logger.mdx
+++ b/docs/content/core/logger.mdx
@@ -222,21 +222,24 @@ If you ever forget to use `child` param, we will return an existing `Logger` wit

 ## Sampling debug logs

-You can dynamically set a percentage of your logs to **DEBUG** level using `sample_rate` param or via env var `POWERTOOLS_LOGGER_SAMPLE_RATE`.
+Sampling allows you to set your Logger Log Level as DEBUG based on a percentage of your concurrent/cold start invocations. You can set a sampling value of `0.0` to `1.0` (100%) using either the `sample_rate` parameter or the `POWERTOOLS_LOGGER_SAMPLE_RATE` env var.

-Sampling calculation happens at the Logger class initialization. This means, when configured it, sampling it's more likely to happen during concurrent requests, or infrequent invocations as [new Lambda execution contexts are created](https://docs.aws.amazon.com/lambda/latest/dg/runtimes-context.html), not reused.
+This is useful when you want to troubleshoot an issue, say a sudden increase in concurrency, and you might not have enough information in your logs as Logger log level was understandably set to INFO.
+
+Sampling calculation happens at the Logger class initialization. This means sampling is unlikely to kick in if you have a steady, low number of invocations, depending on how you configured it.

- If you want this logic to happen on every invocation regardless whether Lambda reuses the execution environment or not, then create your Logger inside your Lambda handler.
+ If you want Logger to calculate sampling on every invocation, then please open a feature request.
 ```python:title=collect.py
 from aws_lambda_powertools import Logger

 # Sample 10% of debug logs e.g. 0.1
 logger = Logger(sample_rate=0.1) # highlight-line

 def handler(event, context):
+    logger.debug("Verifying whether order_id is present")
     if "order_id" in event:
         logger.info("Collecting payment")
         ...
@@ -245,7 +248,21 @@
 Excerpt output in CloudWatch Logs

-```json:title=cloudwatch_logs.json
+```json:title=sampled_log_request_as_debug.json
+{
+    "timestamp": "2020-05-24 18:17:33,774",
+    "level": "DEBUG", // highlight-line
+    "location": "collect.handler:1",
+    "service": "payment",
+    "lambda_function_name": "test",
+    "lambda_function_memory_size": 128,
+    "lambda_function_arn": "arn:aws:lambda:eu-west-1:12345678910:function:test",
+    "lambda_request_id": "52fdfc07-2182-154f-163f-5f0f9a621d72",
+    "cold_start": true,
+    "sampling_rate": 0.1, // highlight-line
+    "message": "Verifying whether order_id is present"
+}
+
 {
     "timestamp": "2020-05-24 18:17:33,774",
     "level": "INFO",
@@ -260,6 +277,7 @@ def handler(event, context):
     "message": "Collecting payment"
 }
 ```
+
@@ -305,7 +323,7 @@ This can be fixed by either ensuring both has the `service` value as `payment`, or ignoring `service`

 You might want to continue to use the same date formatting style, or override `location` to display the `package.function_name:line_number` as you previously had.

-Logger allows you to either change the format or suppress the following keys altogether at the initialization: `location`, `timestamp`, `level`, and `datefmt`
+Logger allows you to either change the format or suppress the following keys altogether at the initialization: `location`, `timestamp`, `level`, `xray_trace_id`, and `datefmt`

 ```python
 from aws_lambda_powertools import Logger
@@ -317,7 +335,7 @@ logger = Logger(stream=stdout, location="[%(funcName)s] %(module)s", datefmt="fa

 logger = Logger(stream=stdout, location=None) # highlight-line
 ```

-Alternatively, you can also change the order of the following log record keys via the `log_record_order` parameter: `level`, `location`, `message`, and `timestamp`
+Alternatively, you can also change the order of the following log record keys via the `log_record_order` parameter: `level`, `location`, `message`, `xray_trace_id`, and `timestamp`

 ```python
 from aws_lambda_powertools import Logger

From fd19e737d6d13a3e0219da8be68b5a1a46e17210 Mon Sep 17 00:00:00 2001
From: heitorlessa
Date: Tue, 22 Sep 2020 15:50:37 +0200
Subject: [PATCH 5/6] improv: add testing section to Logger #161

Signed-off-by: heitorlessa
---
 docs/content/core/logger.mdx | 24 ++++++++++++++++++++++++
 1 file changed, 24 insertions(+)

diff --git a/docs/content/core/logger.mdx b/docs/content/core/logger.mdx
index ab12c2223cd..ab92436221a 100644
--- a/docs/content/core/logger.mdx
+++ b/docs/content/core/logger.mdx
@@ -376,3 +376,27 @@ except Exception:
 }
 ```
+
+
+## Testing your code
+
+When unit testing your code that makes use of the `inject_lambda_context` decorator, you need to pass a dummy Lambda Context, or else Logger will fail.
+
+This is a Pytest sample that provides the minimum information necessary for Logger to succeed:
+
+```python:title=fake_lambda_context_for_logger.py
+import pytest
+from collections import namedtuple
+
+@pytest.fixture
+def lambda_context():
+    lambda_context = {
+        "function_name": "test",
+        "memory_limit_in_mb": 128,
+        "invoked_function_arn": "arn:aws:lambda:eu-west-1:809313241:function:test",
+        "aws_request_id": "52fdfc07-2182-154f-163f-5f0f9a621d72",
+    }
+
+    return namedtuple("LambdaContext", lambda_context.keys())(*lambda_context.values())
+
+def test_lambda_handler(lambda_handler, lambda_context):
+    test_event = {'test': 'event'}
+    lambda_handler(test_event, lambda_context) # this will now have a Context object populated
+```

From 0ac09d606f558fec7e8846cce8ede2f24f8d5383 Mon Sep 17 00:00:00 2001
From: Heitor Lessa
Date: Tue, 22 Sep 2020 17:31:40 +0200
Subject: [PATCH 6/6] fix: apply Tom's suggestion

Co-authored-by: Tom McCarthy
---
 docs/content/core/logger.mdx | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/content/core/logger.mdx b/docs/content/core/logger.mdx
index ab92436221a..f6eaba04161 100644
--- a/docs/content/core/logger.mdx
+++ b/docs/content/core/logger.mdx
@@ -226,7 +226,7 @@ Sampling allows you to set your Logger Log Level as DEBUG based on a percentage

 This is useful when you want to troubleshoot an issue, say a sudden increase in concurrency, and you might not have enough information in your logs as Logger log level was understandably set to INFO.

-Sampling calculation happens at the Logger class initialization. This means sampling is unlikely to kick in if you have a steady, low number of invocations, depending on how you configured it.
+Sampling decision happens at the Logger class initialization, which only happens during a cold start. This means sampling may happen significantly more or less than you expect if you have a steady, low number of invocations and thus few cold starts.

 If you want Logger to calculate sampling on every invocation, then please open a feature request.
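A note for readers adapting these patches: the env-var precedence PATCH 2/6 documents — an explicit `namespace` parameter wins, otherwise `POWERTOOLS_METRICS_NAMESPACE` is read, and validation fails when neither is set — can be sketched with the standard library alone. `resolve_namespace` below is a hypothetical helper for illustration only, not part of the `aws_lambda_powertools` API:

```python
import os

def resolve_namespace(namespace=None):
    # Explicit parameter takes precedence; otherwise fall back to the env var
    resolved = namespace or os.environ.get("POWERTOOLS_METRICS_NAMESPACE")
    if not resolved:
        # Mirrors the validation failure you would hit in unit tests
        # that neither set the env var nor pass the parameter
        raise ValueError("Namespace must be set via parameter or POWERTOOLS_METRICS_NAMESPACE")
    return resolved

os.environ["POWERTOOLS_METRICS_NAMESPACE"] = "Application"
print(resolve_namespace())                               # falls back to the env var
print(resolve_namespace(namespace="ServerlessAirline"))  # parameter wins
```

This is also why the Pytest `monkeypatch.setenv` approach works: it simply ensures the fallback value exists before the object is initialized.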
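The sampling behaviour described in PATCH 4/6 and corrected in PATCH 6/6 — a single random draw at initialization, i.e. once per cold start — can be sketched as follows. `SamplingLogger` is an illustrative stand-in; the real Powertools Logger internals may differ:

```python
import random

class SamplingLogger:
    """Minimal sketch: decide the effective log level once, at initialization."""

    def __init__(self, level="INFO", sample_rate=0.0):
        # The draw happens here, so it only re-runs when a new instance is
        # created -- e.g. on a Lambda cold start, not on warm invocations.
        # random.random() is in [0, 1), so a rate of 1.0 always samples
        # and a rate of 0.0 never does.
        self.sampled = random.random() < sample_rate
        self.level = "DEBUG" if self.sampled else level

print(SamplingLogger(sample_rate=1.0).level)  # → DEBUG (1.0 == 100%)
print(SamplingLogger(sample_rate=0.0).level)  # → INFO
```

This makes the caveat in PATCH 6/6 concrete: with few cold starts there are few draws, so the observed share of DEBUG invocations can deviate significantly from `sample_rate`.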
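Finally, the dummy Lambda Context fixture in PATCH 5/6 boils down to a plain `namedtuple` whose field names mirror the attributes Logger reads from the real context object. A standalone version (with a hypothetical `make_dummy_context` helper, reusable outside Pytest) looks like this:

```python
from collections import namedtuple

def make_dummy_context(function_name="test", memory_limit_in_mb=128):
    # Field names mirror the real LambdaContext attributes Logger injects
    fields = {
        "function_name": function_name,
        "memory_limit_in_mb": memory_limit_in_mb,
        "invoked_function_arn": "arn:aws:lambda:eu-west-1:809313241:function:test",
        "aws_request_id": "52fdfc07-2182-154f-163f-5f0f9a621d72",
    }
    # namedtuple gives attribute access, just like the real context object
    return namedtuple("LambdaContext", fields.keys())(*fields.values())

context = make_dummy_context()
print(context.function_name)       # → test
print(context.memory_limit_in_mb)  # → 128
```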