diff --git a/.github/workflows/run-e2e-tests.yml b/.github/workflows/run-e2e-tests.yml index 86176968839..ef9305373ac 100644 --- a/.github/workflows/run-e2e-tests.yml +++ b/.github/workflows/run-e2e-tests.yml @@ -28,8 +28,8 @@ jobs: strategy: matrix: # Maintenance: disabled until we discover concurrency lock issue with multiple versions and tmp - # version: ["3.7", "3.8", "3.9"] - version: ["3.7"] + version: ["3.7", "3.8", "3.9"] + # version: ["3.7"] steps: - name: "Checkout" uses: actions/checkout@v3 @@ -41,6 +41,14 @@ jobs: python-version: ${{ matrix.version }} architecture: "x64" cache: "poetry" + - name: Setup Node.js + uses: actions/setup-node@v3 + with: + node-version: "16.12" + - name: Install CDK CLI + run: | + npm install + cdk --version - name: Install dependencies run: make dev - name: Configure AWS credentials diff --git a/.gitignore b/.gitignore index cc01240a405..a69b4eaf618 100644 --- a/.gitignore +++ b/.gitignore @@ -310,3 +310,7 @@ site/ !.github/workflows/lib examples/**/sam/.aws-sam + +cdk.out +# NOTE: different accounts will be used for E2E thus creating unnecessary git clutter +cdk.context.json diff --git a/MAINTAINERS.md b/MAINTAINERS.md index fb94090f762..260f6628aa3 100644 --- a/MAINTAINERS.md +++ b/MAINTAINERS.md @@ -15,8 +15,6 @@ - [Releasing a new version](#releasing-a-new-version) - [Drafting release notes](#drafting-release-notes) - [Run end to end tests](#run-end-to-end-tests) - - [Structure](#structure) - - [Workflow](#workflow) - [Releasing a documentation hotfix](#releasing-a-documentation-hotfix) - [Maintain Overall Health of the Repo](#maintain-overall-health-of-the-repo) - [Manage Roadmap](#manage-roadmap) @@ -30,6 +28,16 @@ - [Is that a bug?](#is-that-a-bug) - [Mentoring contributions](#mentoring-contributions) - [Long running issues or PRs](#long-running-issues-or-prs) +- [E2E framework](#e2e-framework) + - [Structure](#structure) + - [Mechanics](#mechanics) + - [Authoring a new feature E2E test](#authoring-a-new-feature-e2e-test) + - [1. Define infrastructure](#1-define-infrastructure) + - [2. Deploy/Delete infrastructure when tests run](#2-deploydelete-infrastructure-when-tests-run) + - [3. Access stack outputs for E2E tests](#3-access-stack-outputs-for-e2e-tests) + - [Internals](#internals) + - [Test runner parallelization](#test-runner-parallelization) + - [CDK CLI parallelization](#cdk-cli-parallelization) ## Overview @@ -218,18 +226,88 @@ E2E tests are run on every push to `develop` or manually via [run-e2e-tests work To run locally, you need [AWS CDK CLI](https://docs.aws.amazon.com/cdk/v2/guide/getting_started.html#getting_started_prerequisites) and an [account bootstrapped](https://docs.aws.amazon.com/cdk/v2/guide/bootstrapping.html) (`cdk bootstrap`). With a default AWS CLI profile configured, or `AWS_PROFILE` environment variable set, run `make e2e tests`. -#### Structure +### Releasing a documentation hotfix + +You can rebuild the latest documentation without a full release via this [GitHub Actions Workflow](https://github.com/awslabs/aws-lambda-powertools-python/actions/workflows/rebuild_latest_docs.yml). Choose `Run workflow`, keep `develop` as the branch, and input the latest Powertools version available. + +This workflow will update both user guide and API documentation. + +### Maintain Overall Health of the Repo + +> TODO: Coordinate renaming `develop` to `main` + +Keep the `develop` branch at production quality at all times. Backport features as needed. Cut release branches and tags to enable future patches. 
### Manage Roadmap

See the [Roadmap section](https://awslabs.github.io/aws-lambda-powertools-python/latest/roadmap/).

Ensure the repo highlights features that should be elevated to the project roadmap. Be clear about the feature's status, priority, and target version.

### Add Continuous Integration Checks

Add integration checks that validate pull requests and pushes to ease the burden on Pull Request reviewers. Continuously revisit areas of improvement to reduce operational burden for all parties involved.

### Negative Impact on the Project

Actions that negatively impact the project will be handled by the admins, in coordination with other maintainers, in balance with the urgency of the issue. Examples would be [Code of Conduct](CODE_OF_CONDUCT.md) violations, deliberate harmful or malicious actions, spam, monopolization, and security risks.

### Becoming a maintainer

In 2023, we will revisit this. We need to improve our understanding of how other projects are doing, their mechanisms to promote key contributors, and how they interact daily.

We suspect this process might look similar to the [OpenSearch project](https://github.com/opensearch-project/.github/blob/main/MAINTAINERS.md#becoming-a-maintainer).

## Common scenarios

These are recurring ambiguous situations that new and existing maintainers may encounter. They serve as guidance. It is up to each maintainer to follow, adjust, or handle them differently, as long as [our conduct is consistent](#uphold-code-of-conduct).

### Contribution is stuck

A contribution often gets stuck due to lack of bandwidth or a language barrier. For bandwidth issues, check whether the author needs help. Make sure you get their permission before pushing code into their existing PR - do not create a new PR unless strictly necessary.

For language barriers and other cases, offer a 1:1 chat to get them unblocked. Oftentimes, English might not be their primary language, and writing in public might put them off or come across differently than they intended.

In other cases, you may have constrained capacity. Use the `help wanted` label when you want to signal other maintainers and external contributors that you could use a hand to move it forward.

-Our E2E framework relies on pytest fixtures to coordinate infrastructure and test parallelization (see [Workflow](#workflow)). You'll notice multiple `conftest.py`, `infrastructure.py`, and `handlers`.

### Insufficient feedback or information

When in doubt, use the `need-more-information` or `need-customer-feedback` labels to signal more context and feedback are necessary before proceeding. You can also use the `revisit-in-3-months` label when you expect it might take a while to gather enough information before you can decide.

### Crediting contributions

We credit all contributions as part of each [release note](https://github.com/awslabs/aws-lambda-powertools-python/releases) as an automated process. If you find contributors missing from the release note you're producing, please add them manually.

### Is that a bug?

A bug produces incorrect or unexpected results at runtime that differ from its intended behavior. Bugs must be reproducible. They directly affect customers' experience at runtime despite following recommended usage.

-- **`infrastructure`**. Uses CDK to define what a Stack for a given feature should look like.
It inherits from `BaseInfrastructure` to handle all boilerplate and deployment logic necessary.
-- **`conftest.py`**. Imports and deploys a given feature Infrastructure. Hierarchy matters. Top-level `conftest` deploys stacks only once and blocks I/O across all CPUs. Feature-level `conftest` deploys stacks in parallel, and once complete run all tests in parallel.
-- **`handlers`**. Lambda function handlers that will be automatically deployed and exported as PascalCase for later use.

Documentation snippets, use of internal components, or unadvertised functionalities are not considered bugs.

### Mentoring contributions

Always favor mentoring issue authors to contribute, unless they're not interested or the implementation is sensitive (_e.g., complexity, time to release, etc._).

Make use of the `help wanted` and `good first issue` labels to signal additional contributions the community can help with.

### Long running issues or PRs

Try offering a 1:1 call in an attempt to reach a mutual understanding and clarify areas where maintainers could help.

In the rare cases where both parties don't have the bandwidth or expertise to continue, it's best to use the `revisit-in-3-months` label. By then, see if it's possible to break the PR or issue into smaller chunks, and eventually close it if there is no progress.

## E2E framework

### Structure

Our E2E framework relies on [Pytest fixtures](https://docs.pytest.org/en/6.2.x/fixture.html) to coordinate infrastructure and test parallelization - see [Test runner parallelization](#test-runner-parallelization) and [CDK CLI parallelization](#cdk-cli-parallelization).

**tests/e2e structure**

```shell
.
├── __init__.py
-├── conftest.py  # deploys Lambda Layer stack
+├── conftest.py  # builds Lambda Layer once
├── logger
│   ├── __init__.py
│   ├── conftest.py  # deploys LoggerStack
@@ -254,112 +332,293 @@ Our E2E framework relies on pytest fixtures to coordinate infrastructure and tes
│   ├── infrastructure.py  # TracerStack definition
│   └── test_tracer.py
└── utils
-    ├── Dockerfile
    ├── __init__.py
    ├── data_builder  # build_service_name(), build_add_dimensions_input, etc.
    ├── data_fetcher  # get_traces(), get_logs(), get_lambda_response(), etc.
-    ├── infrastructure.py  # base infrastructure like deploy logic, Layer Stack, etc.
+    ├── infrastructure.py  # base infrastructure like deploy logic, etc.
```

-#### Workflow
-
-We parallelize our end-to-end tests to benefit from speed and isolate Lambda functions to ease assessing side effects (e.g., traces, logs, etc.). The following diagram demonstrates the process we take every time you use `make e2e`:
-
-```mermaid
-graph TD
-    A[make e2e test] -->Spawn{"Split and group tests<br/>by feature and CPU"}
-
-    Spawn -->|Worker0| Worker0_Start["Load tests"]
-    Spawn -->|Worker1| Worker1_Start["Load tests"]
-    Spawn -->|WorkerN| WorkerN_Start["Load tests"]
-
-    Worker0_Start -->|Wait| LambdaLayerStack["Lambda Layer Stack Deployment"]
-    Worker1_Start -->|Wait| LambdaLayerStack["Lambda Layer Stack Deployment"]
-    WorkerN_Start -->|Wait| LambdaLayerStack["Lambda Layer Stack Deployment"]
-
-    LambdaLayerStack -->|Worker0| Worker0_Deploy["Launch feature stack"]
-    LambdaLayerStack -->|Worker1| Worker1_Deploy["Launch feature stack"]
-    LambdaLayerStack -->|WorkerN| WorkerN_Deploy["Launch feature stack"]
-
-    Worker0_Deploy -->|Worker0| Worker0_Tests["Run tests"]
-    Worker1_Deploy -->|Worker1| Worker1_Tests["Run tests"]
-    WorkerN_Deploy -->|WorkerN| WorkerN_Tests["Run tests"]
-
-    Worker0_Tests --> ResultCollection
-    Worker1_Tests --> ResultCollection
-    WorkerN_Tests --> ResultCollection
-
-    ResultCollection{"Wait for workers<br/>Collect test results"}
-    ResultCollection --> TestEnd["Report results"]
-    ResultCollection --> DeployEnd["Delete Stacks"]
-```

Where:

- **`<feature>/infrastructure.py`**. Uses CDK to define the infrastructure a given feature needs.
- **`<feature>/handlers/`**. Lambda function handlers to be built, deployed, and exposed as stack outputs in PascalCase (e.g., `BasicHandler`).
- **`utils/`**. Test utilities to build data and fetch AWS data to ease assertions.
- **`conftest.py`**. Deploys and deletes a given feature's infrastructure. Hierarchy matters:
    - **Top-level (`e2e/conftest`)**. Builds the Lambda Layer only once and blocks I/O across all CPU workers.
    - **Feature-level (`e2e/<feature>/conftest`)**. Deploys stacks in parallel and makes them independent of each other.

### Mechanics

Under [`BaseInfrastructure`](https://github.com/awslabs/aws-lambda-powertools-python/blob/develop/tests/e2e/utils/infrastructure.py), we hide the complexity of deployment and deletion coordination behind the `deploy`, `delete`, and `create_lambda_functions` methods.
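In practice, a feature's `conftest.py` is the main caller of these methods. A minimal sketch of that lifecycle, using a hypothetical `MyFeatureStack` under `tests/e2e/my_feature/` (the name is illustrative; real examples follow below):

```python
# Hypothetical feature stack; real ones live in tests/e2e/<feature>/infrastructure.py
from tests.e2e.my_feature.infrastructure import MyFeatureStack

stack = MyFeatureStack()
try:
    # deploy() synthesizes the CDK app, deploys the stack,
    # and returns its CloudFormation outputs as a dict
    outputs = stack.deploy()  # e.g., {"BasicHandlerArn": "arn:aws:lambda:..."}
finally:
    # delete() destroys the stack even if deployment or the tests themselves fail
    stack.delete()
```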
This allows us to benefit from test and deployment parallelization, use IDE step-through debugging for a single test, and run one test, a subset, or all tests while deploying only their related infrastructure, without any custom configuration.

> Class diagram to understand the abstraction built when defining a new stack (`LoggerStack`)

```mermaid
classDiagram
    class InfrastructureProvider {
        <<interface>>
        +deploy() Dict
        +delete()
        +create_resources()
        +create_lambda_functions() Dict~Functions~
    }

    class BaseInfrastructure {
        +deploy() Dict
        +delete()
        +create_lambda_functions() Dict~Functions~
        +add_cfn_output()
    }

    class TracerStack {
        +create_resources()
    }

    class LoggerStack {
        +create_resources()
    }

    class MetricsStack {
        +create_resources()
    }

    class EventHandlerStack {
        +create_resources()
    }

    InfrastructureProvider <|-- BaseInfrastructure : implement
    BaseInfrastructure <|-- TracerStack : inherit
    BaseInfrastructure <|-- LoggerStack : inherit
    BaseInfrastructure <|-- MetricsStack : inherit
    BaseInfrastructure <|-- EventHandlerStack : inherit
```

### Authoring a new feature E2E test

Imagine you're creating an E2E test for the Event Handler feature for the first time. Keep the following mental model when reading:

```mermaid
graph LR
    A["1. Define infrastructure"]-->B["2. Deploy/Delete infrastructure"]-->C["3. Access Stack outputs"]
```

#### 1. Define infrastructure

We use CDK as our Infrastructure as Code tool of choice. Before you start using CDK, you'd take the following steps:

1. Create the `tests/e2e/event_handler/infrastructure.py` file
2. Create a new class `EventHandlerStack` that inherits from `BaseInfrastructure`
3. Override the `create_resources` method and define your infrastructure using CDK
4. (Optional) Create a Lambda function under `handlers/alb_handler.py`

> Excerpt `tests/e2e/event_handler/infrastructure.py`

```python
class EventHandlerStack(BaseInfrastructure):
    def create_resources(self):
        functions = self.create_lambda_functions()

        self._create_alb(function=functions["AlbHandler"])
        ...

    def _create_alb(self, function: Function):
        vpc = ec2.Vpc.from_lookup(
            self.stack,
            "VPC",
            is_default=True,
            region=self.region,
        )

        alb = elbv2.ApplicationLoadBalancer(self.stack, "ALB", vpc=vpc, internet_facing=True)
        CfnOutput(self.stack, "ALBDnsName", value=alb.load_balancer_dns_name)
        ...
```

> Excerpt `tests/e2e/event_handler/handlers/alb_handler.py`

```python
from aws_lambda_powertools.event_handler import ALBResolver, Response, content_types

app = ALBResolver()


@app.get("/todos")
def hello():
    return Response(
        status_code=200,
        content_type=content_types.TEXT_PLAIN,
        body="Hello world",
        cookies=["CookieMonster", "MonsterCookie"],
        headers={"Foo": ["bar", "zbr"]},
    )


def lambda_handler(event, context):
    return app.resolve(event, context)
```

#### 2. Deploy/Delete infrastructure when tests run

We need to create a Pytest fixture for our new feature under `tests/e2e/event_handler/conftest.py`.

This will instruct Pytest to deploy our infrastructure when our tests start, and delete it when they complete, whether tests are successful or not. Note that this file will not need any modification in the future.

> Excerpt `conftest.py` for Event Handler

```python
import pytest

from tests.e2e.event_handler.infrastructure import EventHandlerStack


@pytest.fixture(autouse=True, scope="module")
def infrastructure():
    """Setup and teardown logic for E2E test infrastructure

    Yields
    ------
    Dict[str, str]
        CloudFormation Outputs from deployed infrastructure
    """
    stack = EventHandlerStack()
    try:
        yield stack.deploy()
    finally:
        stack.delete()
```

#### 3. Access stack outputs for E2E tests

Within our tests, we should now have access to the `infrastructure` fixture we defined earlier in `tests/e2e/event_handler/conftest.py`.

We can access any Stack Output using pytest dependency injection.

> Excerpt `tests/e2e/event_handler/test_header_serializer.py`

```python
@pytest.fixture
def alb_basic_listener_endpoint(infrastructure: dict) -> str:
    dns_name = infrastructure.get("ALBDnsName")
    port = infrastructure.get("ALBBasicListenerPort", "")
    return f"http://{dns_name}:{port}"


def test_alb_headers_serializer(alb_basic_listener_endpoint):
    # GIVEN
    url = f"{alb_basic_listener_endpoint}/todos"
    ...
```

### Internals

#### Test runner parallelization

Besides speed, we parallelize our end-to-end tests to ease asserting async side effects that may take a while per test, _e.g., waiting for traces to become available_.

The following diagram demonstrates the process we take every time you use `make e2e` locally or in CI:

```mermaid
graph TD
    A[make e2e test] -->Spawn{"Split and group tests<br/>by feature and CPU"}

    Spawn -->|Worker0| Worker0_Start["Load tests"]
    Spawn -->|Worker1| Worker1_Start["Load tests"]
    Spawn -->|WorkerN| WorkerN_Start["Load tests"]

    Worker0_Start -->|Wait| LambdaLayer["Lambda Layer build"]
    Worker1_Start -->|Wait| LambdaLayer["Lambda Layer build"]
    WorkerN_Start -->|Wait| LambdaLayer["Lambda Layer build"]

    LambdaLayer -->|Worker0| Worker0_Deploy["Launch feature stack"]
    LambdaLayer -->|Worker1| Worker1_Deploy["Launch feature stack"]
    LambdaLayer -->|WorkerN| WorkerN_Deploy["Launch feature stack"]

    Worker0_Deploy -->|Worker0| Worker0_Tests["Run tests"]
    Worker1_Deploy -->|Worker1| Worker1_Tests["Run tests"]
    WorkerN_Deploy -->|WorkerN| WorkerN_Tests["Run tests"]

    Worker0_Tests --> ResultCollection
    Worker1_Tests --> ResultCollection
    WorkerN_Tests --> ResultCollection

    ResultCollection{"Wait for workers<br/>Collect test results"}
    ResultCollection --> TestEnd["Report results"]
    ResultCollection --> DeployEnd["Delete Stacks"]
```

#### CDK CLI parallelization

For the CDK CLI to work with [independent CDK Apps](https://docs.aws.amazon.com/cdk/v2/guide/apps.html), we specify an output directory when synthesizing our stack and deploy from said output directory.

```mermaid
flowchart TD
    subgraph "Deploying distinct CDK Apps"
        EventHandlerInfra["Event Handler CDK App"] --> EventHandlerSynth
        TracerInfra["Tracer CDK App"] --> TracerSynth
        EventHandlerSynth["cdk synth --out cdk.out/event_handler"] --> EventHandlerDeploy["cdk deploy --app cdk.out/event_handler"]

        TracerSynth["cdk synth --out cdk.out/tracer"] --> TracerDeploy["cdk deploy --app cdk.out/tracer"]
    end
```

We create the typical CDK `app.py` at runtime when tests run, since we know which feature and Python version we're dealing with (locally or in CI).

> Excerpt `cdk_app_V39.py` for Event Handler created at deploy phase

```python
from tests.e2e.event_handler.infrastructure import EventHandlerStack

stack = EventHandlerStack()
stack.create_resources()
stack.app.synth()
```

When we run E2E tests for a single feature or all of them, our `cdk.out` looks like this:

```shell
total 8
drwxr-xr-x  18 lessa  staff   576B Sep  6 15:38 event-handler
drwxr-xr-x   3 lessa  staff    96B Sep  6 15:08 layer_build
-rw-r--r--   1 lessa  staff    32B Sep  6 15:08 layer_build.diff
drwxr-xr-x  18 lessa  staff   576B Sep  6 15:38 logger
drwxr-xr-x  18 lessa  staff   576B Sep  6 15:38 metrics
drwxr-xr-x  22 lessa  staff   704B Sep  9 10:52 tracer
```

```mermaid
classDiagram
    class CdkOutDirectory {
        feature_name/
        layer_build/
        layer_build.diff
    }

    class EventHandler {
        manifest.json
        stack_outputs.json
        cdk_app_V39.py
        asset.uuid/
        ...
    }

    class StackOutputsJson {
        BasicHandlerArn: str
        ALBDnsName: str
        ...
    }

    CdkOutDirectory <|-- EventHandler : feature_name/
    StackOutputsJson <|-- EventHandler
```

Where:

- **`<feature_name>/`**. Contains CDK assets, the CDK `manifest.json`, our `cdk_app_<python_version>.py`, and `stack_outputs.json`.
- **`layer_build/`**. Contains our Lambda Layer source code, built once and used by all stacks independently.
- **`layer_build.diff`**. Contains a hash of the layer source code so we can detect changes and speed up further deployments and E2E tests.

Together, all of this allows us to use Pytest like we would for any project, use the CDK CLI and its [context methods](https://docs.aws.amazon.com/cdk/v2/guide/context.html#context_methods) (`from_lookup`), and use step-through debugging for a single E2E test without any extra configuration.

> NOTE: VSCode doesn't support debugging processes spawning sub-processes (like CDK CLI does with the shell and CDK App). Maybe [this works](https://stackoverflow.com/a/65339352). PyCharm works just fine.
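Since each feature deploys an isolated stack, you can also scope a run to a single feature without special setup. A sketch of typical invocations, assuming a bootstrapped account and default AWS credentials (the flags mirror `parallel_run_e2e.py` in this change and may vary):

```shell
# run all E2E tests, one pytest-xdist worker per feature
make e2e tests

# run a single feature's tests; only its stack is deployed and deleted
poetry run pytest -o log_cli=true tests/e2e/event_handler
```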
diff --git a/aws_lambda_powertools/tracing/tracer.py b/aws_lambda_powertools/tracing/tracer.py index 7053497ae6d..0523d53c41d 100644 --- a/aws_lambda_powertools/tracing/tracer.py +++ b/aws_lambda_powertools/tracing/tracer.py @@ -300,16 +300,6 @@ def handler(event, context): @functools.wraps(lambda_handler) def decorate(event, context, **kwargs): with self.provider.in_subsegment(name=f"## {lambda_handler_name}") as subsegment: - global is_cold_start - logger.debug("Annotating cold start") - subsegment.put_annotation(key="ColdStart", value=is_cold_start) - - if is_cold_start: - is_cold_start = False - - if self.service: - subsegment.put_annotation(key="Service", value=self.service) - try: logger.debug("Calling lambda handler") response = lambda_handler(event, context, **kwargs) @@ -325,7 +315,18 @@ def decorate(event, context, **kwargs): self._add_full_exception_as_metadata( method_name=lambda_handler_name, error=err, subsegment=subsegment, capture_error=capture_error ) + raise + finally: + global is_cold_start + logger.debug("Annotating cold start") + subsegment.put_annotation(key="ColdStart", value=is_cold_start) + + if is_cold_start: + is_cold_start = False + + if self.service: + subsegment.put_annotation(key="Service", value=self.service) return response @@ -672,7 +673,7 @@ def _add_response_as_metadata( if data is None or not capture_response or subsegment is None: return - subsegment.put_metadata(key=f"{method_name} response", value=data, namespace=self._config["service"]) + subsegment.put_metadata(key=f"{method_name} response", value=data, namespace=self.service) def _add_full_exception_as_metadata( self, @@ -697,7 +698,7 @@ def _add_full_exception_as_metadata( if not capture_error: return - subsegment.put_metadata(key=f"{method_name} error", value=error, namespace=self._config["service"]) + subsegment.put_metadata(key=f"{method_name} error", value=error, namespace=self.service) @staticmethod def _disable_tracer_provider(): diff --git a/package-lock.json b/package-lock.json new file mode 100644 index 00000000000..5a72aa1ad10 --- /dev/null +++ b/package-lock.json @@ -0,0 +1,58 @@ +{ + "name": "aws-lambda-powertools-python-e2e", + "version": "1.0.0", + "lockfileVersion": 2, + "requires": true, + "packages": { + "": { + "name": "aws-lambda-powertools-python-e2e", + "version": "1.0.0", + "devDependencies": { + "aws-cdk": "2.40.0" + } + }, + "node_modules/aws-cdk": { + "version": "2.40.0", + "resolved": "https://registry.npmjs.org/aws-cdk/-/aws-cdk-2.40.0.tgz", + "integrity": "sha512-oHacGkLFDELwhpJsZSAhFHWDxIeZW3DgKkwiXlNO81JxNfjcHgPR2rsbh/Gz+n4ErAEzOV6WfuWVMe68zv+iPg==", + "bin": { + "cdk": "bin/cdk" + }, + "engines": { + "node": ">= 14.15.0" + }, + "optionalDependencies": { + "fsevents": "2.3.2" + } + }, + "node_modules/fsevents": { + "version": "2.3.2", + "resolved": "https://registry.npmjs.org/fsevents/-/fsevents-2.3.2.tgz", + "integrity": "sha512-xiqMQR4xAeHTuB9uWm+fFRcIOgKBMiOBP+eXiyT7jsgVCq1bkVygt00oASowB7EdtpOHaaPgKt812P9ab+DDKA==", + "hasInstallScript": true, + "optional": true, + "os": [ + "darwin" + ], + "engines": { + "node": "^8.16.0 || ^10.6.0 || >=11.0.0" + } + } + }, + "dependencies": { + "aws-cdk": { + "version": "2.40.0", + "resolved": "https://registry.npmjs.org/aws-cdk/-/aws-cdk-2.40.0.tgz", + "integrity": "sha512-oHacGkLFDELwhpJsZSAhFHWDxIeZW3DgKkwiXlNO81JxNfjcHgPR2rsbh/Gz+n4ErAEzOV6WfuWVMe68zv+iPg==", + "requires": { + "fsevents": "2.3.2" + } + }, + "fsevents": { + "version": "2.3.2", + "resolved": "https://registry.npmjs.org/fsevents/-/fsevents-2.3.2.tgz", + 
"integrity": "sha512-xiqMQR4xAeHTuB9uWm+fFRcIOgKBMiOBP+eXiyT7jsgVCq1bkVygt00oASowB7EdtpOHaaPgKt812P9ab+DDKA==", + "optional": true + } + } +} diff --git a/package.json b/package.json new file mode 100644 index 00000000000..6e3a2c1b216 --- /dev/null +++ b/package.json @@ -0,0 +1,7 @@ +{ + "name": "aws-lambda-powertools-python-e2e", + "version": "1.0.0", + "devDependencies": { + "aws-cdk": "2.40.0" + } +} diff --git a/parallel_run_e2e.py b/parallel_run_e2e.py index b9603701e5e..745f1392f67 100755 --- a/parallel_run_e2e.py +++ b/parallel_run_e2e.py @@ -8,7 +8,6 @@ def main(): workers = len(list(features)) - 1 command = f"poetry run pytest -n {workers} --dist loadfile -o log_cli=true tests/e2e" - print(f"Running E2E tests with: {command}") subprocess.run(command.split(), shell=False) diff --git a/poetry.lock b/poetry.lock index 7d95f6b9f8e..93770dd60cd 100644 --- a/poetry.lock +++ b/poetry.lock @@ -175,6 +175,14 @@ python-versions = ">=3.6.0" [package.extras] unicode_backport = ["unicodedata2"] +[[package]] +name = "checksumdir" +version = "1.2.0" +description = "Compute a single hash of the file contents of a directory." +category = "dev" +optional = false +python-versions = ">=3.6,<4.0" + [[package]] name = "click" version = "8.1.3" @@ -637,7 +645,7 @@ python-versions = ">=3.6" [[package]] name = "mike" -version = "0.6.0" +version = "1.1.2" description = "Manage multiple versions of your MkDocs-powered documentation" category = "dev" optional = false @@ -646,12 +654,12 @@ python-versions = "*" [package.dependencies] jinja2 = "*" mkdocs = ">=1.0" -packaging = "*" -"ruamel.yaml" = "*" +pyyaml = ">=5.1" +verspec = "*" [package.extras] -test = ["flake8 (>=3.0)", "coverage"] -dev = ["pypandoc (>=1.4)", "flake8 (>=3.0)", "coverage"] +test = ["shtab", "flake8 (>=3.0)", "coverage"] +dev = ["shtab", "flake8 (>=3.0)", "coverage"] [[package]] name = "mkdocs" @@ -1198,29 +1206,6 @@ python-versions = "*" decorator = ">=3.4.2" py = ">=1.4.26,<2.0.0" -[[package]] -name = "ruamel.yaml" -version = "0.17.21" -description = "ruamel.yaml is a YAML parser/emitter that supports roundtrip preservation of comments, seq/map flow style, and map key order" -category = "dev" -optional = false -python-versions = ">=3" - -[package.dependencies] -"ruamel.yaml.clib" = {version = ">=0.2.6", markers = "platform_python_implementation == \"CPython\" and python_version < \"3.11\""} - -[package.extras] -docs = ["ryd"] -jinja2 = ["ruamel.yaml.jinja2 (>=0.2)"] - -[[package]] -name = "ruamel.yaml.clib" -version = "0.2.6" -description = "C version of reader, parser and emitter for ruamel.yaml derived from libyaml" -category = "dev" -optional = false -python-versions = ">=3.5" - [[package]] name = "s3transfer" version = "0.6.0" @@ -1331,6 +1316,17 @@ brotli = ["brotlicffi (>=0.8.0)", "brotli (>=1.0.9)", "brotlipy (>=0.6.0)"] secure = ["pyOpenSSL (>=0.14)", "cryptography (>=1.3.4)", "idna (>=2.0.0)", "certifi", "urllib3-secure-extra", "ipaddress"] socks = ["PySocks (>=1.5.6,!=1.5.7,<2.0)"] +[[package]] +name = "verspec" +version = "0.1.0" +description = "Flexible version handling" +category = "dev" +optional = false +python-versions = "*" + +[package.extras] +test = ["pytest", "pretend", "mypy", "flake8 (>=3.7)", "coverage"] + [[package]] name = "watchdog" version = "2.1.9" @@ -1381,7 +1377,7 @@ pydantic = ["pydantic", "email-validator"] [metadata] lock-version = "1.1" python-versions = "^3.7.4" -content-hash = "1500a968030f6adae44497fbb31beaef774fa53f7020ee264a4f5971b38fc597" +content-hash = 
"0ef937932afc677f409d634770d46aefbc62c1befe060ce1b9fb0e4f263e3ec8" [metadata.files] attrs = [ @@ -1451,6 +1447,10 @@ charset-normalizer = [ {file = "charset-normalizer-2.1.1.tar.gz", hash = "sha256:5a3d016c7c547f69d6f81fb0db9449ce888b418b5b9952cc5e6e66843e9dd845"}, {file = "charset_normalizer-2.1.1-py3-none-any.whl", hash = "sha256:83e9a75d1911279afd89352c68b45348559d1fc0506b054b346651b5e7fee29f"}, ] +checksumdir = [ + {file = "checksumdir-1.2.0-py3-none-any.whl", hash = "sha256:77687e16da95970c94061c74ef2e13666c4b6e0e8c90a5eaf0c8f7591332cf01"}, + {file = "checksumdir-1.2.0.tar.gz", hash = "sha256:10bfd7518da5a14b0e9ac03e9ad105f0e70f58bba52b6e9aa2f21a3f73c7b5a8"}, +] click = [ {file = "click-8.1.3-py3-none-any.whl", hash = "sha256:bb4d8133cb15a609f44e8213d9b391b0809795062913b383c62be0ee95b1db48"}, {file = "click-8.1.3.tar.gz", hash = "sha256:7682dc8afb30297001674575ea00d1814d808d6a36af415a82bd481d37ba7b8e"}, @@ -1692,8 +1692,8 @@ mergedeep = [ {file = "mergedeep-1.3.4.tar.gz", hash = "sha256:0096d52e9dad9939c3d975a774666af186eda617e6ca84df4c94dec30004f2a8"}, ] mike = [ - {file = "mike-0.6.0-py3-none-any.whl", hash = "sha256:cef9b9c803ff5c3fbb410f51f5ceb00902a9fe16d9fabd93b69c65cf481ab5a1"}, - {file = "mike-0.6.0.tar.gz", hash = "sha256:6d6239de2a60d733da2f34617e9b9a14c4b5437423b47e524f14dc96d6ce5f2f"}, + {file = "mike-1.1.2-py3-none-any.whl", hash = "sha256:4c307c28769834d78df10f834f57f810f04ca27d248f80a75f49c6fa2d1527ca"}, + {file = "mike-1.1.2.tar.gz", hash = "sha256:56c3f1794c2d0b5fdccfa9b9487beb013ca813de2e3ad0744724e9d34d40b77b"}, ] mkdocs = [ {file = "mkdocs-1.4.0-py3-none-any.whl", hash = "sha256:ce057e9992f017b8e1496b591b6c242cbd34c2d406e2f9af6a19b97dd6248faa"}, @@ -2012,42 +2012,6 @@ retry = [ {file = "retry-0.9.2-py2.py3-none-any.whl", hash = "sha256:ccddf89761fa2c726ab29391837d4327f819ea14d244c232a1d24c67a2f98606"}, {file = "retry-0.9.2.tar.gz", hash = "sha256:f8bfa8b99b69c4506d6f5bd3b0aabf77f98cdb17f3c9fc3f5ca820033336fba4"}, ] -"ruamel.yaml" = [ - {file = "ruamel.yaml-0.17.21-py3-none-any.whl", hash = "sha256:742b35d3d665023981bd6d16b3d24248ce5df75fdb4e2924e93a05c1f8b61ca7"}, - {file = "ruamel.yaml-0.17.21.tar.gz", hash = "sha256:8b7ce697a2f212752a35c1ac414471dc16c424c9573be4926b56ff3f5d23b7af"}, -] -"ruamel.yaml.clib" = [ - {file = "ruamel.yaml.clib-0.2.6-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:6e7be2c5bcb297f5b82fee9c665eb2eb7001d1050deaba8471842979293a80b0"}, - {file = "ruamel.yaml.clib-0.2.6-cp310-cp310-manylinux2014_aarch64.whl", hash = "sha256:066f886bc90cc2ce44df8b5f7acfc6a7e2b2e672713f027136464492b0c34d7c"}, - {file = "ruamel.yaml.clib-0.2.6-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl", hash = "sha256:221eca6f35076c6ae472a531afa1c223b9c29377e62936f61bc8e6e8bdc5f9e7"}, - {file = "ruamel.yaml.clib-0.2.6-cp310-cp310-win32.whl", hash = "sha256:1070ba9dd7f9370d0513d649420c3b362ac2d687fe78c6e888f5b12bf8bc7bee"}, - {file = "ruamel.yaml.clib-0.2.6-cp310-cp310-win_amd64.whl", hash = "sha256:77df077d32921ad46f34816a9a16e6356d8100374579bc35e15bab5d4e9377de"}, - {file = "ruamel.yaml.clib-0.2.6-cp35-cp35m-macosx_10_6_intel.whl", hash = "sha256:cfdb9389d888c5b74af297e51ce357b800dd844898af9d4a547ffc143fa56751"}, - {file = "ruamel.yaml.clib-0.2.6-cp35-cp35m-manylinux1_x86_64.whl", hash = "sha256:7b2927e92feb51d830f531de4ccb11b320255ee95e791022555971c466af4527"}, - {file = "ruamel.yaml.clib-0.2.6-cp35-cp35m-win32.whl", hash = "sha256:ada3f400d9923a190ea8b59c8f60680c4ef8a4b0dfae134d2f2ff68429adfab5"}, - {file = 
"ruamel.yaml.clib-0.2.6-cp35-cp35m-win_amd64.whl", hash = "sha256:de9c6b8a1ba52919ae919f3ae96abb72b994dd0350226e28f3686cb4f142165c"}, - {file = "ruamel.yaml.clib-0.2.6-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:d67f273097c368265a7b81e152e07fb90ed395df6e552b9fa858c6d2c9f42502"}, - {file = "ruamel.yaml.clib-0.2.6-cp36-cp36m-manylinux1_x86_64.whl", hash = "sha256:72a2b8b2ff0a627496aad76f37a652bcef400fd861721744201ef1b45199ab78"}, - {file = "ruamel.yaml.clib-0.2.6-cp36-cp36m-manylinux2014_aarch64.whl", hash = "sha256:d3c620a54748a3d4cf0bcfe623e388407c8e85a4b06b8188e126302bcab93ea8"}, - {file = "ruamel.yaml.clib-0.2.6-cp36-cp36m-win32.whl", hash = "sha256:9efef4aab5353387b07f6b22ace0867032b900d8e91674b5d8ea9150db5cae94"}, - {file = "ruamel.yaml.clib-0.2.6-cp36-cp36m-win_amd64.whl", hash = "sha256:846fc8336443106fe23f9b6d6b8c14a53d38cef9a375149d61f99d78782ea468"}, - {file = "ruamel.yaml.clib-0.2.6-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:0847201b767447fc33b9c235780d3aa90357d20dd6108b92be544427bea197dd"}, - {file = "ruamel.yaml.clib-0.2.6-cp37-cp37m-manylinux1_x86_64.whl", hash = "sha256:78988ed190206672da0f5d50c61afef8f67daa718d614377dcd5e3ed85ab4a99"}, - {file = "ruamel.yaml.clib-0.2.6-cp37-cp37m-manylinux2014_aarch64.whl", hash = "sha256:210c8fcfeff90514b7133010bf14e3bad652c8efde6b20e00c43854bf94fa5a6"}, - {file = "ruamel.yaml.clib-0.2.6-cp37-cp37m-win32.whl", hash = "sha256:a49e0161897901d1ac9c4a79984b8410f450565bbad64dbfcbf76152743a0cdb"}, - {file = "ruamel.yaml.clib-0.2.6-cp37-cp37m-win_amd64.whl", hash = "sha256:bf75d28fa071645c529b5474a550a44686821decebdd00e21127ef1fd566eabe"}, - {file = "ruamel.yaml.clib-0.2.6-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:a32f8d81ea0c6173ab1b3da956869114cae53ba1e9f72374032e33ba3118c233"}, - {file = "ruamel.yaml.clib-0.2.6-cp38-cp38-manylinux1_x86_64.whl", hash = "sha256:7f7ecb53ae6848f959db6ae93bdff1740e651809780822270eab111500842a84"}, - {file = "ruamel.yaml.clib-0.2.6-cp38-cp38-manylinux2014_aarch64.whl", hash = "sha256:61bc5e5ca632d95925907c569daa559ea194a4d16084ba86084be98ab1cec1c6"}, - {file = "ruamel.yaml.clib-0.2.6-cp38-cp38-win32.whl", hash = "sha256:89221ec6d6026f8ae859c09b9718799fea22c0e8da8b766b0b2c9a9ba2db326b"}, - {file = "ruamel.yaml.clib-0.2.6-cp38-cp38-win_amd64.whl", hash = "sha256:31ea73e564a7b5fbbe8188ab8b334393e06d997914a4e184975348f204790277"}, - {file = "ruamel.yaml.clib-0.2.6-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:dc6a613d6c74eef5a14a214d433d06291526145431c3b964f5e16529b1842bed"}, - {file = "ruamel.yaml.clib-0.2.6-cp39-cp39-manylinux1_x86_64.whl", hash = "sha256:1866cf2c284a03b9524a5cc00daca56d80057c5ce3cdc86a52020f4c720856f0"}, - {file = "ruamel.yaml.clib-0.2.6-cp39-cp39-manylinux2014_aarch64.whl", hash = "sha256:1b4139a6ffbca8ef60fdaf9b33dec05143ba746a6f0ae0f9d11d38239211d335"}, - {file = "ruamel.yaml.clib-0.2.6-cp39-cp39-win32.whl", hash = "sha256:3fb9575a5acd13031c57a62cc7823e5d2ff8bc3835ba4d94b921b4e6ee664104"}, - {file = "ruamel.yaml.clib-0.2.6-cp39-cp39-win_amd64.whl", hash = "sha256:825d5fccef6da42f3c8eccd4281af399f21c02b32d98e113dbc631ea6a6ecbc7"}, - {file = "ruamel.yaml.clib-0.2.6.tar.gz", hash = "sha256:4ff604ce439abb20794f05613c374759ce10e3595d1867764dd1ae675b85acbd"}, -] s3transfer = [ {file = "s3transfer-0.6.0-py3-none-any.whl", hash = "sha256:06176b74f3a15f61f1b4f25a1fc29a4429040b7647133a463da8fa5bd28d5ecd"}, {file = "s3transfer-0.6.0.tar.gz", hash = "sha256:2ed07d3866f523cc561bf4a00fc5535827981b117dd7876f036b0c1aca42c947"}, @@ -2114,6 +2078,10 @@ urllib3 = [ {file = 
"urllib3-1.26.12-py2.py3-none-any.whl", hash = "sha256:b930dd878d5a8afb066a637fbb35144fe7901e3b209d1cd4f524bd0e9deee997"}, {file = "urllib3-1.26.12.tar.gz", hash = "sha256:3fa96cf423e6987997fc326ae8df396db2a8b7c667747d47ddd8ecba91f4a74e"}, ] +verspec = [ + {file = "verspec-0.1.0-py3-none-any.whl", hash = "sha256:741877d5633cc9464c45a469ae2a31e801e6dbbaa85b9675d481cda100f11c31"}, + {file = "verspec-0.1.0.tar.gz", hash = "sha256:c4504ca697b2056cdb4bfa7121461f5a0e81809255b41c03dda4ba823637c01e"}, +] watchdog = [ {file = "watchdog-2.1.9-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:a735a990a1095f75ca4f36ea2ef2752c99e6ee997c46b0de507ba40a09bf7330"}, {file = "watchdog-2.1.9-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:6b17d302850c8d412784d9246cfe8d7e3af6bcd45f958abb2d08a6f8bedf695d"}, diff --git a/pyproject.toml b/pyproject.toml index 74b7bffeb3e..e244a656be3 100644 --- a/pyproject.toml +++ b/pyproject.toml @@ -50,7 +50,7 @@ xenon = "^0.9.0" flake8-eradicate = "^1.2.1" flake8-bugbear = "^22.9.23" mkdocs-git-revision-date-plugin = "^0.3.2" -mike = "^0.6.0" +mike = "^1.1.2" mypy = "^0.971" retry = "^0.9.2" pytest-xdist = "^2.5.0" @@ -73,6 +73,7 @@ types-requests = "^2.28.11" typing-extensions = "^4.4.0" mkdocs-material = "^8.5.4" filelock = "^3.8.0" +checksumdir = "^1.2.0" [tool.poetry.extras] pydantic = ["pydantic", "email-validator"] diff --git a/tests/e2e/conftest.py b/tests/e2e/conftest.py index ac55d373e63..f59eea9a33b 100644 --- a/tests/e2e/conftest.py +++ b/tests/e2e/conftest.py @@ -1,21 +1,15 @@ import pytest -from tests.e2e.utils.infrastructure import LambdaLayerStack, deploy_once +from tests.e2e.utils.infrastructure import call_once +from tests.e2e.utils.lambda_layer.powertools_layer import LocalLambdaPowertoolsLayer -@pytest.fixture(scope="session") -def lambda_layer_arn(lambda_layer_deployment): - yield lambda_layer_deployment.get("LayerArn") - - -@pytest.fixture(scope="session") -def lambda_layer_deployment(request: pytest.FixtureRequest, tmp_path_factory: pytest.TempPathFactory, worker_id: str): - """Setup and teardown logic for E2E test infrastructure +@pytest.fixture(scope="session", autouse=True) +def lambda_layer_build(tmp_path_factory: pytest.TempPathFactory, worker_id: str) -> str: + """Build Lambda Layer once before stacks are created Parameters ---------- - request : pytest.FixtureRequest - pytest request fixture to introspect absolute path to test being executed tmp_path_factory : pytest.TempPathFactory pytest temporary path factory to discover shared tmp when multiple CPU processes are spun up worker_id : str @@ -23,13 +17,13 @@ def lambda_layer_deployment(request: pytest.FixtureRequest, tmp_path_factory: py Yields ------ - Dict[str, str] - CloudFormation Outputs from deployed infrastructure + str + Lambda Layer artefact location """ - yield from deploy_once( - stack=LambdaLayerStack, - request=request, + + layer = LocalLambdaPowertoolsLayer() + yield from call_once( + task=layer.build, tmp_path_factory=tmp_path_factory, worker_id=worker_id, - layer_arn="", ) diff --git a/tests/e2e/event_handler/conftest.py b/tests/e2e/event_handler/conftest.py index 207ec443456..43941946ac7 100644 --- a/tests/e2e/event_handler/conftest.py +++ b/tests/e2e/event_handler/conftest.py @@ -1,27 +1,18 @@ -from pathlib import Path - import pytest from tests.e2e.event_handler.infrastructure import EventHandlerStack @pytest.fixture(autouse=True, scope="module") -def infrastructure(request: pytest.FixtureRequest, lambda_layer_arn: str): +def infrastructure(): """Setup and teardown 
logic for E2E test infrastructure - Parameters - ---------- - request : pytest.FixtureRequest - pytest request fixture to introspect absolute path to test being executed - lambda_layer_arn : str - Lambda Layer ARN - Yields ------ Dict[str, str] CloudFormation Outputs from deployed infrastructure """ - stack = EventHandlerStack(handlers_dir=Path(f"{request.path.parent}/handlers"), layer_arn=lambda_layer_arn) + stack = EventHandlerStack() try: yield stack.deploy() finally: diff --git a/tests/e2e/event_handler/infrastructure.py b/tests/e2e/event_handler/infrastructure.py index 735261138f3..da456038a25 100644 --- a/tests/e2e/event_handler/infrastructure.py +++ b/tests/e2e/event_handler/infrastructure.py @@ -1,4 +1,3 @@ -from pathlib import Path from typing import Dict, Optional from aws_cdk import CfnOutput @@ -14,11 +13,6 @@ class EventHandlerStack(BaseInfrastructure): - FEATURE_NAME = "event-handlers" - - def __init__(self, handlers_dir: Path, feature_name: str = FEATURE_NAME, layer_arn: str = "") -> None: - super().__init__(feature_name, handlers_dir, layer_arn) - def create_resources(self): functions = self.create_lambda_functions() @@ -28,7 +22,12 @@ def create_resources(self): self._create_lambda_function_url(function=functions["LambdaFunctionUrlHandler"]) def _create_alb(self, function: Function): - vpc = ec2.Vpc(self.stack, "EventHandlerVPC", max_azs=2) + vpc = ec2.Vpc.from_lookup( + self.stack, + "VPC", + is_default=True, + region=self.region, + ) alb = elbv2.ApplicationLoadBalancer(self.stack, "ALB", vpc=vpc, internet_facing=True) CfnOutput(self.stack, "ALBDnsName", value=alb.load_balancer_dns_name) diff --git a/tests/e2e/logger/conftest.py b/tests/e2e/logger/conftest.py index 82a89314258..a31be77031b 100644 --- a/tests/e2e/logger/conftest.py +++ b/tests/e2e/logger/conftest.py @@ -1,27 +1,18 @@ -from pathlib import Path - import pytest from tests.e2e.logger.infrastructure import LoggerStack @pytest.fixture(autouse=True, scope="module") -def infrastructure(request: pytest.FixtureRequest, lambda_layer_arn: str): +def infrastructure(tmp_path_factory, worker_id): """Setup and teardown logic for E2E test infrastructure - Parameters - ---------- - request : pytest.FixtureRequest - pytest request fixture to introspect absolute path to test being executed - lambda_layer_arn : str - Lambda Layer ARN - Yields ------ Dict[str, str] CloudFormation Outputs from deployed infrastructure """ - stack = LoggerStack(handlers_dir=Path(f"{request.path.parent}/handlers"), layer_arn=lambda_layer_arn) + stack = LoggerStack() try: yield stack.deploy() finally: diff --git a/tests/e2e/logger/infrastructure.py b/tests/e2e/logger/infrastructure.py index 68aaa8eb38a..242b3c10892 100644 --- a/tests/e2e/logger/infrastructure.py +++ b/tests/e2e/logger/infrastructure.py @@ -1,13 +1,6 @@ -from pathlib import Path - from tests.e2e.utils.infrastructure import BaseInfrastructure class LoggerStack(BaseInfrastructure): - FEATURE_NAME = "logger" - - def __init__(self, handlers_dir: Path, feature_name: str = FEATURE_NAME, layer_arn: str = "") -> None: - super().__init__(feature_name, handlers_dir, layer_arn) - def create_resources(self): self.create_lambda_functions() diff --git a/tests/e2e/metrics/conftest.py b/tests/e2e/metrics/conftest.py index 663c8845be4..2f72e7950be 100644 --- a/tests/e2e/metrics/conftest.py +++ b/tests/e2e/metrics/conftest.py @@ -1,27 +1,18 @@ -from pathlib import Path - import pytest from tests.e2e.metrics.infrastructure import MetricsStack @pytest.fixture(autouse=True, scope="module") -def 
infrastructure(request: pytest.FixtureRequest, lambda_layer_arn: str): +def infrastructure(tmp_path_factory, worker_id): """Setup and teardown logic for E2E test infrastructure - Parameters - ---------- - request : pytest.FixtureRequest - pytest request fixture to introspect absolute path to test being executed - lambda_layer_arn : str - Lambda Layer ARN - Yields ------ Dict[str, str] CloudFormation Outputs from deployed infrastructure """ - stack = MetricsStack(handlers_dir=Path(f"{request.path.parent}/handlers"), layer_arn=lambda_layer_arn) + stack = MetricsStack() try: yield stack.deploy() finally: diff --git a/tests/e2e/metrics/infrastructure.py b/tests/e2e/metrics/infrastructure.py index 9afa59bb5cd..7cc1eb8c498 100644 --- a/tests/e2e/metrics/infrastructure.py +++ b/tests/e2e/metrics/infrastructure.py @@ -1,13 +1,6 @@ -from pathlib import Path - from tests.e2e.utils.infrastructure import BaseInfrastructure class MetricsStack(BaseInfrastructure): - FEATURE_NAME = "metrics" - - def __init__(self, handlers_dir: Path, feature_name: str = FEATURE_NAME, layer_arn: str = "") -> None: - super().__init__(feature_name, handlers_dir, layer_arn) - def create_resources(self): self.create_lambda_functions() diff --git a/tests/e2e/tracer/conftest.py b/tests/e2e/tracer/conftest.py index 3b724bf1247..afb34ffee2b 100644 --- a/tests/e2e/tracer/conftest.py +++ b/tests/e2e/tracer/conftest.py @@ -1,27 +1,19 @@ -from pathlib import Path - import pytest from tests.e2e.tracer.infrastructure import TracerStack @pytest.fixture(autouse=True, scope="module") -def infrastructure(request: pytest.FixtureRequest, lambda_layer_arn: str): +def infrastructure(): """Setup and teardown logic for E2E test infrastructure - Parameters - ---------- - request : pytest.FixtureRequest - pytest request fixture to introspect absolute path to test being executed - lambda_layer_arn : str - Lambda Layer ARN Yields ------ Dict[str, str] CloudFormation Outputs from deployed infrastructure """ - stack = TracerStack(handlers_dir=Path(f"{request.path.parent}/handlers"), layer_arn=lambda_layer_arn) + stack = TracerStack() try: yield stack.deploy() finally: diff --git a/tests/e2e/tracer/handlers/async_capture.py b/tests/e2e/tracer/handlers/async_capture.py index b19840a6f69..814e0b92e02 100644 --- a/tests/e2e/tracer/handlers/async_capture.py +++ b/tests/e2e/tracer/handlers/async_capture.py @@ -13,4 +13,5 @@ async def async_get_users(): def lambda_handler(event: dict, context: LambdaContext): + tracer.service = event.get("service") return asyncio.run(async_get_users()) diff --git a/tests/e2e/tracer/handlers/basic_handler.py b/tests/e2e/tracer/handlers/basic_handler.py index ba94c845ace..89a6b062423 100644 --- a/tests/e2e/tracer/handlers/basic_handler.py +++ b/tests/e2e/tracer/handlers/basic_handler.py @@ -13,4 +13,5 @@ def get_todos(): @tracer.capture_lambda_handler def lambda_handler(event: dict, context: LambdaContext): + tracer.service = event.get("service") return get_todos() diff --git a/tests/e2e/tracer/handlers/same_function_name.py b/tests/e2e/tracer/handlers/same_function_name.py index 78ef99d42fa..240e3329bc8 100644 --- a/tests/e2e/tracer/handlers/same_function_name.py +++ b/tests/e2e/tracer/handlers/same_function_name.py @@ -26,6 +26,8 @@ def get_all(self): def lambda_handler(event: dict, context: LambdaContext): + # Maintenance: create a public method to set these explicitly + tracer.service = event["service"] todos = Todos() comments = Comments() diff --git a/tests/e2e/tracer/infrastructure.py 
b/tests/e2e/tracer/infrastructure.py index 9b388558c0b..8562359acf0 100644 --- a/tests/e2e/tracer/infrastructure.py +++ b/tests/e2e/tracer/infrastructure.py @@ -1,18 +1,6 @@ -from pathlib import Path - -from tests.e2e.utils.data_builder import build_service_name from tests.e2e.utils.infrastructure import BaseInfrastructure class TracerStack(BaseInfrastructure): - # Maintenance: Tracer doesn't support dynamic service injection (tracer.py L310) - # we could move after handler response or adopt env vars usage in e2e tests - SERVICE_NAME: str = build_service_name() - FEATURE_NAME = "tracer" - - def __init__(self, handlers_dir: Path, feature_name: str = FEATURE_NAME, layer_arn: str = "") -> None: - super().__init__(feature_name, handlers_dir, layer_arn) - def create_resources(self) -> None: - env_vars = {"POWERTOOLS_SERVICE_NAME": self.SERVICE_NAME} - self.create_lambda_functions(function_props={"environment": env_vars}) + self.create_lambda_functions() diff --git a/tests/e2e/tracer/test_tracer.py b/tests/e2e/tracer/test_tracer.py index de25bc02ebf..e2abc5af6bc 100644 --- a/tests/e2e/tracer/test_tracer.py +++ b/tests/e2e/tracer/test_tracer.py @@ -1,7 +1,8 @@ +import json + import pytest from tests.e2e.tracer.handlers import async_capture, basic_handler -from tests.e2e.tracer.infrastructure import TracerStack from tests.e2e.utils import data_builder, data_fetcher @@ -37,6 +38,7 @@ def async_fn(infrastructure: dict) -> str: def test_lambda_handler_trace_is_visible(basic_handler_fn_arn: str, basic_handler_fn: str): # GIVEN + service = data_builder.build_service_name() handler_name = basic_handler.lambda_handler.__name__ handler_subsegment = f"## {handler_name}" handler_metadata_key = f"{handler_name} response" @@ -48,21 +50,23 @@ def test_lambda_handler_trace_is_visible(basic_handler_fn_arn: str, basic_handle trace_query = data_builder.build_trace_default_query(function_name=basic_handler_fn) # WHEN - _, execution_time = data_fetcher.get_lambda_response(lambda_arn=basic_handler_fn_arn) - data_fetcher.get_lambda_response(lambda_arn=basic_handler_fn_arn) + event = json.dumps({"service": service}) + _, execution_time = data_fetcher.get_lambda_response(lambda_arn=basic_handler_fn_arn, payload=event) + data_fetcher.get_lambda_response(lambda_arn=basic_handler_fn_arn, payload=event) # THEN trace = data_fetcher.get_traces(start_date=execution_time, filter_expression=trace_query, minimum_traces=2) assert len(trace.get_annotation(key="ColdStart", value=True)) == 1 - assert len(trace.get_metadata(key=handler_metadata_key, namespace=TracerStack.SERVICE_NAME)) == 2 - assert len(trace.get_metadata(key=method_metadata_key, namespace=TracerStack.SERVICE_NAME)) == 2 + assert len(trace.get_metadata(key=handler_metadata_key, namespace=service)) == 2 + assert len(trace.get_metadata(key=method_metadata_key, namespace=service)) == 2 assert len(trace.get_subsegment(name=handler_subsegment)) == 2 assert len(trace.get_subsegment(name=method_subsegment)) == 2 def test_lambda_handler_trace_multiple_functions_same_name(same_function_name_arn: str, same_function_name_fn: str): # GIVEN + service = data_builder.build_service_name() method_name_todos = "same_function_name.Todos.get_all" method_subsegment_todos = f"## {method_name_todos}" method_metadata_key_todos = f"{method_name_todos} response" @@ -74,19 +78,21 @@ def test_lambda_handler_trace_multiple_functions_same_name(same_function_name_ar trace_query = data_builder.build_trace_default_query(function_name=same_function_name_fn) # WHEN - _, execution_time = 
data_fetcher.get_lambda_response(lambda_arn=same_function_name_arn) + event = json.dumps({"service": service}) + _, execution_time = data_fetcher.get_lambda_response(lambda_arn=same_function_name_arn, payload=event) # THEN trace = data_fetcher.get_traces(start_date=execution_time, filter_expression=trace_query) - assert len(trace.get_metadata(key=method_metadata_key_todos, namespace=TracerStack.SERVICE_NAME)) == 1 - assert len(trace.get_metadata(key=method_metadata_key_comments, namespace=TracerStack.SERVICE_NAME)) == 1 + assert len(trace.get_metadata(key=method_metadata_key_todos, namespace=service)) == 1 + assert len(trace.get_metadata(key=method_metadata_key_comments, namespace=service)) == 1 assert len(trace.get_subsegment(name=method_subsegment_todos)) == 1 assert len(trace.get_subsegment(name=method_subsegment_comments)) == 1 def test_async_trace_is_visible(async_fn_arn: str, async_fn: str): # GIVEN + service = data_builder.build_service_name() async_fn_name = f"async_capture.{async_capture.async_get_users.__name__}" async_fn_name_subsegment = f"## {async_fn_name}" async_fn_name_metadata_key = f"{async_fn_name} response" @@ -94,10 +100,11 @@ def test_async_trace_is_visible(async_fn_arn: str, async_fn: str): trace_query = data_builder.build_trace_default_query(function_name=async_fn) # WHEN - _, execution_time = data_fetcher.get_lambda_response(lambda_arn=async_fn_arn) + event = json.dumps({"service": service}) + _, execution_time = data_fetcher.get_lambda_response(lambda_arn=async_fn_arn, payload=event) # THEN trace = data_fetcher.get_traces(start_date=execution_time, filter_expression=trace_query) assert len(trace.get_subsegment(name=async_fn_name_subsegment)) == 1 - assert len(trace.get_metadata(key=async_fn_name_metadata_key, namespace=TracerStack.SERVICE_NAME)) == 1 + assert len(trace.get_metadata(key=async_fn_name_metadata_key, namespace=service)) == 1 diff --git a/tests/e2e/utils/Dockerfile b/tests/e2e/utils/Dockerfile deleted file mode 100644 index 586847bb3fa..00000000000 --- a/tests/e2e/utils/Dockerfile +++ /dev/null @@ -1,14 +0,0 @@ -# Image used by CDK's LayerVersion construct to create Lambda Layer with Powertools -# library code. -# The correct AWS SAM build image based on the runtime of the function will be -# passed as build arg. The default allows to do `docker build .` when testing. 
-ARG IMAGE=public.ecr.aws/sam/build-python3.7 -FROM $IMAGE - -ARG PIP_INDEX_URL -ARG PIP_EXTRA_INDEX_URL -ARG HTTPS_PROXY - -RUN pip install --upgrade pip - -CMD [ "python" ] diff --git a/tests/e2e/utils/asset.py b/tests/e2e/utils/asset.py deleted file mode 100644 index db9e7299d1a..00000000000 --- a/tests/e2e/utils/asset.py +++ /dev/null @@ -1,147 +0,0 @@ -import io -import json -import logging -import zipfile -from pathlib import Path -from typing import Dict, List, Optional - -import boto3 -import botocore.exceptions -from mypy_boto3_s3 import S3Client -from pydantic import BaseModel, Field - -logger = logging.getLogger(__name__) - - -class AssetManifest(BaseModel): - path: str - packaging: str - - -class AssetTemplateConfigDestinationsAccount(BaseModel): - bucket_name: str = Field(str, alias="bucketName") - object_key: str = Field(str, alias="objectKey") - assume_role_arn: str = Field(str, alias="assumeRoleArn") - - -class AssetTemplateConfigDestinations(BaseModel): - current_account_current_region: AssetTemplateConfigDestinationsAccount = Field( - AssetTemplateConfigDestinationsAccount, alias="current_account-current_region" - ) - - -class AssetTemplateConfig(BaseModel): - source: AssetManifest - destinations: AssetTemplateConfigDestinations - - -class TemplateAssembly(BaseModel): - version: str - files: Dict[str, AssetTemplateConfig] - - -class Asset: - def __init__( - self, config: AssetTemplateConfig, account_id: str, region: str, boto3_client: Optional[S3Client] = None - ) -> None: - """CDK Asset logic to verify existence and resolve deeply nested configuration - - Parameters - ---------- - config : AssetTemplateConfig - CDK Asset configuration found in synthesized template - account_id : str - AWS Account ID - region : str - AWS Region - boto3_client : Optional["S3Client"], optional - S3 client instance for asset operations, by default None - """ - self.config = config - self.s3 = boto3_client or boto3.client("s3") - self.account_id = account_id - self.region = region - self.asset_path = config.source.path - self.asset_packaging = config.source.packaging - self.object_key = config.destinations.current_account_current_region.object_key - self._bucket = config.destinations.current_account_current_region.bucket_name - self.bucket_name = self._resolve_bucket_name() - - @property - def is_zip(self): - return self.asset_packaging == "zip" - - def exists_in_s3(self, key: str) -> bool: - try: - return self.s3.head_object(Bucket=self.bucket_name, Key=key) is not None - except botocore.exceptions.ClientError: - return False - - def _resolve_bucket_name(self) -> str: - return self._bucket.replace("${AWS::AccountId}", self.account_id).replace("${AWS::Region}", self.region) - - -class Assets: - def __init__( - self, asset_manifest: Path, account_id: str, region: str, boto3_client: Optional[S3Client] = None - ) -> None: - """CDK Assets logic to find each asset, compress, and upload - - Parameters - ---------- - asset_manifest : Path - Asset manifest JSON file (self.__synthesize) - account_id : str - AWS Account ID - region : str - AWS Region - boto3_client : Optional[S3Client], optional - S3 client instance for asset operations, by default None - """ - self.asset_manifest = asset_manifest - self.account_id = account_id - self.region = region - self.s3 = boto3_client or boto3.client("s3") - self.assets = self._find_assets_from_template() - self.assets_location = str(self.asset_manifest.parent) - - def upload(self): - """Drop-in replacement for cdk-assets package s3 upload part. 
-
-        https://www.npmjs.com/package/cdk-assets.
-        We use custom solution to avoid dependencies from nodejs ecosystem.
-        We follow the same design cdk-assets:
-        https://github.com/aws/aws-cdk-rfcs/blob/master/text/0092-asset-publishing.md.
-        """
-        logger.debug(f"Upload {len(self.assets)} assets")
-        for asset in self.assets:
-            if not asset.is_zip:
-                logger.debug(f"Asset '{asset.object_key}' is not zip. Skipping upload.")
-                continue
-
-            if asset.exists_in_s3(key=asset.object_key):
-                logger.debug(f"Asset '{asset.object_key}' already exists in S3. Skipping upload.")
-                continue
-
-            archive = self._compress_assets(asset)
-            logger.debug("Uploading archive to S3")
-            self.s3.upload_fileobj(Fileobj=archive, Bucket=asset.bucket_name, Key=asset.object_key)
-            logger.debug("Successfully uploaded")
-
-    def _find_assets_from_template(self) -> List[Asset]:
-        data = json.loads(self.asset_manifest.read_text())
-        template = TemplateAssembly(**data)
-        return [
-            Asset(config=asset_config, account_id=self.account_id, region=self.region)
-            for asset_config in template.files.values()
-        ]
-
-    def _compress_assets(self, asset: Asset) -> io.BytesIO:
-        buf = io.BytesIO()
-        asset_dir = f"{self.assets_location}/{asset.asset_path}"
-        asset_files = list(Path(asset_dir).rglob("*"))
-        with zipfile.ZipFile(buf, "w", compression=zipfile.ZIP_DEFLATED) as archive:
-            for asset_file in asset_files:
-                logger.debug(f"Adding file '{asset_file}' to the archive.")
-                archive.write(asset_file, arcname=asset_file.relative_to(asset_dir))
-        buf.seek(0)
-        return buf
diff --git a/tests/e2e/utils/base.py b/tests/e2e/utils/base.py
new file mode 100644
index 00000000000..2a6e6032e52
--- /dev/null
+++ b/tests/e2e/utils/base.py
@@ -0,0 +1,20 @@
+from abc import ABC, abstractmethod
+from typing import Dict, Optional
+
+
+class InfrastructureProvider(ABC):
+    @abstractmethod
+    def create_lambda_functions(self, function_props: Optional[Dict] = None) -> Dict:
+        pass
+
+    @abstractmethod
+    def deploy(self) -> Dict[str, str]:
+        pass
+
+    @abstractmethod
+    def delete(self):
+        pass
+
+    @abstractmethod
+    def create_resources(self):
+        pass
diff --git a/tests/e2e/utils/constants.py b/tests/e2e/utils/constants.py
new file mode 100644
index 00000000000..445c9f00113
--- /dev/null
+++ b/tests/e2e/utils/constants.py
@@ -0,0 +1,8 @@
+import sys
+
+from aws_lambda_powertools import PACKAGE_PATH
+
+PYTHON_RUNTIME_VERSION = f"V{''.join(map(str, sys.version_info[:2]))}"
+SOURCE_CODE_ROOT_PATH = PACKAGE_PATH.parent
+CDK_OUT_PATH = SOURCE_CODE_ROOT_PATH / "cdk.out"
+LAYER_BUILD_PATH = CDK_OUT_PATH / "layer_build"
diff --git a/tests/e2e/utils/infrastructure.py b/tests/e2e/utils/infrastructure.py
index 97714b95cfc..82d0463b2aa 100644
--- a/tests/e2e/utils/infrastructure.py
+++ b/tests/e2e/utils/infrastructure.py
@@ -1,73 +1,58 @@
 import json
 import logging
+import os
+import subprocess
 import sys
-from abc import ABC, abstractmethod
-from enum import Enum
+import textwrap
 from pathlib import Path
-from typing import Dict, Generator, Optional, Tuple, Type
+from typing import Callable, Dict, Generator, Optional
 from uuid import uuid4
 
 import boto3
 import pytest
-import yaml
-from aws_cdk import (
-    App,
-    AssetStaging,
-    BundlingOptions,
-    CfnOutput,
-    DockerImage,
-    RemovalPolicy,
-    Stack,
-    aws_logs,
-)
+from aws_cdk import App, CfnOutput, Environment, RemovalPolicy, Stack, aws_logs
 from aws_cdk.aws_lambda import Code, Function, LayerVersion, Runtime, Tracing
 from filelock import FileLock
 from mypy_boto3_cloudformation import CloudFormationClient
-from aws_lambda_powertools import PACKAGE_PATH
-from tests.e2e.utils.asset import Assets
-
-PYTHON_RUNTIME_VERSION = f"V{''.join(map(str, sys.version_info[:2]))}"
-SOURCE_CODE_ROOT_PATH = PACKAGE_PATH.parent
+from tests.e2e.utils.base import InfrastructureProvider
+from tests.e2e.utils.constants import CDK_OUT_PATH, PYTHON_RUNTIME_VERSION, SOURCE_CODE_ROOT_PATH
+from tests.e2e.utils.lambda_layer.powertools_layer import LocalLambdaPowertoolsLayer
 
 logger = logging.getLogger(__name__)
 
 
-class BaseInfrastructureStack(ABC):
-    @abstractmethod
-    def synthesize(self) -> Tuple[dict, str]:
-        ...
-
-    @abstractmethod
-    def __call__(self) -> Tuple[dict, str]:
-        ...
-
-
-class PythonVersion(Enum):
-    V37 = {"runtime": Runtime.PYTHON_3_7, "image": Runtime.PYTHON_3_7.bundling_image.image}
-    V38 = {"runtime": Runtime.PYTHON_3_8, "image": Runtime.PYTHON_3_8.bundling_image.image}
-    V39 = {"runtime": Runtime.PYTHON_3_9, "image": Runtime.PYTHON_3_9.bundling_image.image}
+class BaseInfrastructure(InfrastructureProvider):
+    RANDOM_STACK_VALUE: str = f"{uuid4()}"
-
-class BaseInfrastructure(ABC):
-    def __init__(self, feature_name: str, handlers_dir: Path, layer_arn: str = "") -> None:
-        self.feature_name = feature_name
-        self.stack_name = f"test{PYTHON_RUNTIME_VERSION}-{feature_name}-{uuid4()}"
-        self.handlers_dir = handlers_dir
-        self.layer_arn = layer_arn
+    def __init__(self) -> None:
+        self.feature_path = Path(sys.modules[self.__class__.__module__].__file__).parent  # absolute path to feature
+        self.feature_name = self.feature_path.parts[-1].replace("_", "-")  # logger, tracer, event-handler, etc.
+        self.stack_name = f"test{PYTHON_RUNTIME_VERSION}-{self.feature_name}-{self.RANDOM_STACK_VALUE}"
         self.stack_outputs: Dict[str, str] = {}
-        # NOTE: Investigate why cdk.Environment in Stack
-        # changes synthesized asset (no object_key in asset manifest)
-        self.app = App(outdir=str(SOURCE_CODE_ROOT_PATH / ".cdk"))
-        self.stack = Stack(self.app, self.stack_name)
+        # NOTE: CDK stack account and region are tokens, we need to resolve earlier
         self.session = boto3.Session()
         self.cfn: CloudFormationClient = self.session.client("cloudformation")
-
-        # NOTE: CDK stack account and region are tokens, we need to resolve earlier
         self.account_id = self.session.client("sts").get_caller_identity()["Account"]
         self.region = self.session.region_name
+        self.app = App()
+        self.stack = Stack(self.app, self.stack_name, env=Environment(account=self.account_id, region=self.region))
+
+        # NOTE: Introspect feature details to generate CDK App (_create_temp_cdk_app method), Synth and Deployment
+        self._feature_infra_class_name = self.__class__.__name__
+        self._feature_infra_module_path = self.feature_path / "infrastructure"
+        self._feature_infra_file = self.feature_path / "infrastructure.py"
+        self._handlers_dir = self.feature_path / "handlers"
+        self._cdk_out_dir: Path = CDK_OUT_PATH / self.feature_name
+        self._stack_outputs_file = f'{self._cdk_out_dir / "stack_outputs.json"}'
+
+        if not self._feature_infra_file.exists():
+            raise FileNotFoundError(
+                "You must have your infrastructure defined in 'tests/e2e/<feature>/infrastructure.py'."
+            )
+
     def create_lambda_functions(self, function_props: Optional[Dict] = None) -> Dict[str, Function]:
         """Create Lambda functions available under handlers_dir
@@ -102,16 +87,28 @@ def create_lambda_functions(self, function_props: Optional[Dict] = None) -> Dict
             self.create_lambda_functions(function_props={"runtime": Runtime.PYTHON_3_7})
         ```
         """
-        handlers = list(self.handlers_dir.rglob("*.py"))
-        source = Code.from_asset(f"{self.handlers_dir}")
+        if not self._handlers_dir.exists():
+            raise RuntimeError(f"Handlers dir '{self._handlers_dir}' must exist for functions to be created.")
+
+        layer_build = LocalLambdaPowertoolsLayer().build()
+        layer = LayerVersion(
+            self.stack,
+            "aws-lambda-powertools-e2e-test",
+            layer_version_name="aws-lambda-powertools-e2e-test",
+            compatible_runtimes=[
+                Runtime.PYTHON_3_7,
+                Runtime.PYTHON_3_8,
+                Runtime.PYTHON_3_9,
+            ],
+            code=Code.from_asset(path=layer_build),
+        )
+
+        # NOTE: Agree on a convention if we need to support multi-file handlers
+        # as we're simply taking any file under `handlers/` to be a Lambda function.
+        handlers = list(self._handlers_dir.rglob("*.py"))
+        source = Code.from_asset(f"{self._handlers_dir}")
         logger.debug(f"Creating functions for handlers: {handlers}")
-        if not self.layer_arn:
-            raise ValueError(
-                """Lambda Layer ARN cannot be empty when creating Lambda functions.
-                Make sure to inject `lambda_layer_arn` fixture and pass at the constructor level"""
-            )
-        layer = LayerVersion.from_layer_version_arn(self.stack, "layer-arn", layer_version_arn=self.layer_arn)
         function_settings_override = function_props or {}
         output: Dict[str, Function] = {}
@@ -147,25 +144,86 @@ def create_lambda_functions(self, function_props: Optional[Dict] = None) -> Dict
         return output
 
     def deploy(self) -> Dict[str, str]:
-        """Creates CloudFormation Stack and return stack outputs as dict
+        """Synthesize and deploy a CDK app, and return its stack outputs
+
+        NOTE: It auto-generates a temporary CDK app to benefit from CDK CLI lookup features
 
         Returns
        -------
         Dict[str, str]
             CloudFormation Stack Outputs with output key and value
         """
-        template, asset_manifest_file = self._synthesize()
-        assets = Assets(asset_manifest=asset_manifest_file, account_id=self.account_id, region=self.region)
-        assets.upload()
-        self.stack_outputs = self._deploy_stack(self.stack_name, template)
-        return self.stack_outputs
+        stack_file = self._create_temp_cdk_app()
+        synth_command = f"npx cdk synth --app 'python {stack_file}' -o {self._cdk_out_dir}"
+        deploy_command = (
+            f"npx cdk deploy --app '{self._cdk_out_dir}' -O {self._stack_outputs_file} --require-approval=never"
+        )
+
+        # CDK launches a background task, so we must wait
+        subprocess.check_output(synth_command, shell=True)
+        subprocess.check_output(deploy_command, shell=True)
+        return self._read_stack_output()
 
     def delete(self) -> None:
         """Delete CloudFormation Stack"""
         logger.debug(f"Deleting stack: {self.stack_name}")
         self.cfn.delete_stack(StackName=self.stack_name)
 
-    @abstractmethod
+    def _sync_stack_name(self, stack_output: Dict):
+        """Synchronize initial stack name with CDK final stack name
+
+        When using `cdk synth` with context methods (`from_lookup`),
+        CDK can initialize the Stack multiple times until it resolves
+        the context.
+
+        Parameters
+        ----------
+        stack_output : Dict
+            CDK CloudFormation Outputs, where the key is the stack name
+        """
+        self.stack_name = list(stack_output.keys())[0]
+
+    def _read_stack_output(self):
+        content = Path(self._stack_outputs_file).read_text()
+        outputs: Dict = json.loads(content)
+        self._sync_stack_name(stack_output=outputs)
+
+        # discard stack_name and get outputs as dict
+        self.stack_outputs = list(outputs.values())[0]
+        return self.stack_outputs
+
+    def _create_temp_cdk_app(self):
+        """Autogenerate a CDK App with our Stack so that CDK CLI can deploy it
+
+        This allows us to keep our BaseInfrastructure while supporting context lookups.
+        """
+        # e.g., cdk.out/tracer/cdk_app_V39.py
+        temp_file = self._cdk_out_dir / f"cdk_app_{PYTHON_RUNTIME_VERSION}.py"
+
+        if temp_file.exists():
+            # no need to regenerate CDK app since it's just boilerplate
+            return temp_file
+
+        # Convert from POSIX path to Python module: tests.e2e.tracer.infrastructure
+        infra_module = str(self._feature_infra_module_path.relative_to(SOURCE_CODE_ROOT_PATH)).replace(os.sep, ".")
+
+        code = f"""
+        from {infra_module} import {self._feature_infra_class_name}
+        stack = {self._feature_infra_class_name}()
+        stack.create_resources()
+        stack.app.synth()
+        """
+
+        if not self._cdk_out_dir.is_dir():
+            self._cdk_out_dir.mkdir(parents=True, exist_ok=True)
+
+        with temp_file.open("w") as fd:
+            fd.write(textwrap.dedent(code))
+
+        # allow CDK to read/execute file for stack deployment
+        temp_file.chmod(0o755)
+        return temp_file
+
     def create_resources(self) -> None:
         """Create any necessary CDK resources. It'll be called before deploy
@@ -189,34 +247,7 @@ def created_resources(self):
             self.create_lambda_functions()
         ```
         """
-        ...
-
-    def _synthesize(self) -> Tuple[Dict, Path]:
-        logger.debug("Creating CDK Stack resources")
-        self.create_resources()
-        logger.debug("Synthesizing CDK Stack into raw CloudFormation template")
-        cloud_assembly = self.app.synth()
-        cf_template: Dict = cloud_assembly.get_stack_by_name(self.stack_name).template
-        cloud_assembly_assets_manifest_path: str = (
-            cloud_assembly.get_stack_by_name(self.stack_name).dependencies[0].file  # type: ignore[attr-defined]
-        )
-        return cf_template, Path(cloud_assembly_assets_manifest_path)
-
-    def _deploy_stack(self, stack_name: str, template: Dict) -> Dict[str, str]:
-        logger.debug(f"Creating CloudFormation Stack: {stack_name}")
-        self.cfn.create_stack(
-            StackName=stack_name,
-            TemplateBody=yaml.dump(template),
-            TimeoutInMinutes=10,
-            OnFailure="ROLLBACK",
-            Capabilities=["CAPABILITY_IAM"],
-        )
-        waiter = self.cfn.get_waiter("stack_create_complete")
-        waiter.wait(StackName=stack_name, WaiterConfig={"Delay": 10, "MaxAttempts": 50})
-
-        stack_details = self.cfn.describe_stacks(StackName=stack_name)
-        stack_outputs = stack_details["Stacks"][0]["Outputs"]
-        return {output["OutputKey"]: output["OutputValue"] for output in stack_outputs if output["OutputKey"]}
+        raise NotImplementedError()
 
     def add_cfn_output(self, name: str, value: str, arn: str = ""):
         """Create {Name} and optionally {Name}Arn CloudFormation Outputs.
@@ -235,88 +266,50 @@ def add_cfn_output(self, name: str, value: str, arn: str = ""):
             CfnOutput(self.stack, f"{name}Arn", value=arn)
 
 
-def deploy_once(
-    stack: Type[BaseInfrastructure],
-    request: pytest.FixtureRequest,
+def call_once(
+    task: Callable,
     tmp_path_factory: pytest.TempPathFactory,
     worker_id: str,
-    layer_arn: str,
-) -> Generator[Dict[str, str], None, None]:
-    """Deploys provided stack once whether CPU parallelization is enabled or not
+    callback: Optional[Callable] = None,
+) -> Generator[object, None, None]:
+    """Call a function and serialize its result exactly once, whether or not CPU parallelization is enabled
 
     Parameters
     ----------
-    stack : Type[BaseInfrastructure]
-        stack class to instantiate and deploy, for example MetricStack.
-        Not to be confused with class instance (MetricStack()).
-    request : pytest.FixtureRequest
-        pytest request fixture to introspect absolute path to test being executed
+    task : Callable
+        Function to call once; its result must be JSON serializable so it can be cached and shared across workers.
     tmp_path_factory : pytest.TempPathFactory
         pytest temporary path factory to discover shared tmp when multiple CPU processes are spun up
     worker_id : str
         pytest-xdist worker identification to detect whether parallelization is enabled
+    callback : Optional[Callable], optional
+        Function to call when the job is complete, by default None
 
     Yields
     ------
-    Generator[Dict[str, str], None, None]
-        stack CloudFormation outputs
+    Generator[object, None, None]
+        Result of the task callable
     """
-    handlers_dir = f"{request.node.path.parent}/handlers"
-    stack = stack(handlers_dir=Path(handlers_dir), layer_arn=layer_arn)
     try:
         if worker_id == "master":
-            # no parallelization, deploy stack and let fixture be cached
-            yield stack.deploy()
+            # no parallelization, call and return
+            yield task()
         else:
             # tmp dir shared by all workers
             root_tmp_dir = tmp_path_factory.getbasetemp().parent
             cache = root_tmp_dir / f"{PYTHON_RUNTIME_VERSION}_cache.json"
 
             with FileLock(f"{cache}.lock"):
-                # If cache exists, return stack outputs back
+                # If cache exists, return task outputs back
                 # otherwise it's the first run by the main worker
-                # deploy and return stack outputs so subsequent workers can reuse
+                # run and return task outputs for subsequent workers reuse
                 if cache.is_file():
-                    stack_outputs = json.loads(cache.read_text())
+                    callable_result = json.loads(cache.read_text())
                 else:
-                    stack_outputs: Dict = stack.deploy()
-                    cache.write_text(json.dumps(stack_outputs))
-                yield stack_outputs
+                    callable_result: Dict = task()
+                    cache.write_text(json.dumps(callable_result))
+                yield callable_result
     finally:
-        stack.delete()
-
-
-class LambdaLayerStack(BaseInfrastructure):
-    FEATURE_NAME = "lambda-layer"
-
-    def __init__(self, handlers_dir: Path, feature_name: str = FEATURE_NAME, layer_arn: str = "") -> None:
-        super().__init__(feature_name, handlers_dir, layer_arn)
-
-    def create_resources(self):
-        layer = self._create_layer()
-        CfnOutput(self.stack, "LayerArn", value=layer)
-
-    def _create_layer(self) -> str:
-        logger.debug("Creating Lambda Layer with latest source code available")
-        output_dir = Path(str(AssetStaging.BUNDLING_OUTPUT_DIR), "python")
-        input_dir = Path(str(AssetStaging.BUNDLING_INPUT_DIR), "aws_lambda_powertools")
-
-        build_commands = [f"pip install .[pydantic] -t {output_dir}", f"cp -R {input_dir} {output_dir}"]
-        layer = LayerVersion(
-            self.stack,
-            "aws-lambda-powertools-e2e-test",
-            layer_version_name="aws-lambda-powertools-e2e-test",
-            compatible_runtimes=[PythonVersion[PYTHON_RUNTIME_VERSION].value["runtime"]],
-            code=Code.from_asset(
-                path=str(SOURCE_CODE_ROOT_PATH),
-                bundling=BundlingOptions(
-                    image=DockerImage.from_build(
-                        str(Path(__file__).parent),
-                        build_args={"IMAGE": PythonVersion[PYTHON_RUNTIME_VERSION].value["image"]},
-                    ),
-                    command=["bash", "-c", " && ".join(build_commands)],
-                ),
-            ),
-        )
-        return layer.layer_version_arn
+        if callback is not None:
+            callback()
diff --git a/tests/e2e/utils/lambda_layer/__init__.py b/tests/e2e/utils/lambda_layer/__init__.py
new file mode 100644
index 00000000000..e69de29bb2d
diff --git a/tests/e2e/utils/lambda_layer/base.py b/tests/e2e/utils/lambda_layer/base.py
new file mode 100644
index 00000000000..280fe19d4f8
--- /dev/null
+++ b/tests/e2e/utils/lambda_layer/base.py
@@ -0,0 +1,32 @@
+from abc import ABC, abstractmethod
+from pathlib import Path
+
+
+class BaseLocalLambdaLayer(ABC):
+    def __init__(self, output_dir: Path):
+        self.output_dir = output_dir / "layer_build"
+        self.target_dir = f"{self.output_dir}/python"
+
+    @abstractmethod
+    def build(self) -> str:
+        """Builds a Lambda Layer locally
+
+        Returns
+        -------
+        build_path : str
+            Path to the newly built Lambda Layer
+        """
+        raise NotImplementedError()
+
+    def before_build(self):
+        """Any step to run before the build process begins.
+
+        By default, it creates the output dir and its parents if they don't exist.
+        """
+        if not self.output_dir.exists():
+            # create any missing parent directories too
+            self.output_dir.mkdir(parents=True, exist_ok=True)
+
+    def after_build(self):
+        """Any step to run after a build succeeds"""
+        ...
diff --git a/tests/e2e/utils/lambda_layer/powertools_layer.py b/tests/e2e/utils/lambda_layer/powertools_layer.py
new file mode 100644
index 00000000000..45a22547715
--- /dev/null
+++ b/tests/e2e/utils/lambda_layer/powertools_layer.py
@@ -0,0 +1,48 @@
+import logging
+import subprocess
+from pathlib import Path
+
+from checksumdir import dirhash
+
+from aws_lambda_powertools import PACKAGE_PATH
+from tests.e2e.utils.constants import CDK_OUT_PATH, SOURCE_CODE_ROOT_PATH
+from tests.e2e.utils.lambda_layer.base import BaseLocalLambdaLayer
+
+logger = logging.getLogger(__name__)
+
+
+class LocalLambdaPowertoolsLayer(BaseLocalLambdaLayer):
+    IGNORE_EXTENSIONS = ["pyc"]
+
+    def __init__(self, output_dir: Path = CDK_OUT_PATH):
+        super().__init__(output_dir)
+        self.package = f"{SOURCE_CODE_ROOT_PATH}[pydantic]"
+        self.build_args = "--platform manylinux1_x86_64 --only-binary=:all: --upgrade"
+        self.build_command = f"python -m pip install {self.package} {self.build_args} --target {self.target_dir}"
+        self.source_diff_file: Path = CDK_OUT_PATH / "layer_build.diff"
+
+    def build(self) -> str:
+        self.before_build()
+
+        if self._has_source_changed():
+            subprocess.run(self.build_command, shell=True)
+
+        self.after_build()
+
+        return str(self.output_dir)
+
+    def _has_source_changed(self) -> bool:
+        """Hashes the source code and compares it against the hash from the last build
+
+        Returns
+        -------
+        change : bool
+            Whether the source code hash has changed
+        """
+        diff = self.source_diff_file.read_text() if self.source_diff_file.exists() else ""
+        new_diff = dirhash(dirname=PACKAGE_PATH, excluded_extensions=self.IGNORE_EXTENSIONS)
+        if new_diff != diff or not self.output_dir.exists():
+            self.source_diff_file.write_text(new_diff)
+            return True
+
+        return False
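
To see how these pieces fit together: a feature's `conftest.py` can wire `BaseInfrastructure` and `call_once` so the stack is deployed exactly once per test session, even under `pytest-xdist`. A minimal sketch, assuming a hypothetical `MetricsStack` subclass defined in `tests/e2e/metrics/infrastructure.py` (the fixture name is illustrative):

```python
# tests/e2e/metrics/conftest.py (illustrative)
import pytest

from tests.e2e.metrics.infrastructure import MetricsStack  # hypothetical subclass of BaseInfrastructure
from tests.e2e.utils.infrastructure import call_once


@pytest.fixture(autouse=True, scope="module")
def infrastructure(tmp_path_factory: pytest.TempPathFactory, worker_id: str):
    """Deploy the feature stack once and share its outputs across all workers."""
    stack = MetricsStack()
    # call_once caches the JSON-serializable outputs of stack.deploy() in a shared tmp dir,
    # so only the first worker deploys; stack.delete() runs as the completion callback.
    yield from call_once(
        task=stack.deploy,
        tmp_path_factory=tmp_path_factory,
        worker_id=worker_id,
        callback=stack.delete,
    )
```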
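
For reference, the temporary CDK app that `_create_temp_cdk_app` writes is plain boilerplate. For the hypothetical metrics feature above running on Python 3.9, `cdk.out/metrics/cdk_app_V39.py` would contain roughly:

```python
# Auto-generated by BaseInfrastructure._create_temp_cdk_app (illustrative content)
from tests.e2e.metrics.infrastructure import MetricsStack

stack = MetricsStack()
stack.create_resources()
stack.app.synth()
```

CDK CLI then drives this file directly, e.g. `npx cdk synth --app 'python cdk.out/metrics/cdk_app_V39.py'`, which is what lets context lookups work without keeping a permanent `app.py` in the repository.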