RFC: Mechanism for end2end testing #1226
Thanks for opening your first issue here! We'll come back to you as soon as we can.
Hey @mploski, thanks a lot for pulling that together. I liked it!! I have some ideas on how we can improve the RFC before you ship the PR you have staged:
These help readers stay focused on the mechanics and whether they can see any immediate areas for improvement, and worry less about optimizing the implementations early in the process. A diagram of the workflow could also help, especially now that GitHub has native Mermaid integration, but don't worry about it as I know bandwidth is limited. Feel free to ping me on areas you'd like to offload to me too <3 Thanks!
Hey @heitorlessa, thanks for your feedback. I restructured this RFC based on your suggestions. Can you take a look one more time?
This is much much better, thank you for taking all this time @mploski - I've done some line editing, re-arranging some content and shortening a few passages, while keeping the same meaning. Please have another look in case I accidentally removed anything critical to the idea. I removed the

Please go ahead with the PR @mploski ;-)
Added small adjustments around infrastructure class re-use. Doc looks much more concise @heitorlessa, thx! Will issue the PR shortly (within this week).
Awesome, thank you @mploski. I've asked @alexpulver to help review the PR once it's up as a CDK expert. |
@heitorlessa FYI: still working on the PR, found some issues I want to solve first (a too-fat generated Powertools layer, and AWS API throttling causing test errors if we run our tests from GitHub Actions with a runtime matrix).
Hey @alexpulver, could you take a look at this PR as well? Thx :-)
Hey! Left some comments on the PR itself. Regarding CF Stack vs CDK Hotswap vs Direct API Hotswap - what about using CDK Hotswap AND multiple Lambda functions? |
Small update to clarify that we will run E2E tests on every merge to

Last thing to test is whether it's worth using

Thank you so much again @mploski for starting this, going through all comments, and such.
Based on the discussion with @alexpulver, I analysed the code of the cdk-assets npm package and also the design for it: https://github.com/aws/aws-cdk-rfcs/blob/master/text/0092-asset-publishing.md. Currently I'm updating the PR to rely on this design for my custom asset generation, and later this week I will compare it with direct usage of the npm cdk-assets package for asset upload.
This is now merged :) We'll be communicating more details as part of the release. Additional enhancements and E2E tests for other utilities will be dealt with separately. HUGE thank you @mploski for going to such lengths - intense review process, benchmarking multiple options, documentation, etc. - and thanks to all reviewers, truly; it takes a village!
Updating here for future correctness, as we moved to CDK CLI due to context methods.

Pasting the section in the maintainers playbook about the framework.

### E2E framework

#### Structure

Our E2E framework relies on Pytest fixtures to coordinate infrastructure and test parallelization - see Test runner parallelization and CDK CLI parallelization.

tests/e2e structure:

```text
.
├── __init__.py
├── conftest.py # builds Lambda Layer once
├── logger
│ ├── __init__.py
│ ├── conftest.py # deploys LoggerStack
│ ├── handlers
│ │ └── basic_handler.py
│ ├── infrastructure.py # LoggerStack definition
│ └── test_logger.py
├── metrics
│ ├── __init__.py
│ ├── conftest.py # deploys MetricsStack
│ ├── handlers
│ │ ├── basic_handler.py
│ │ └── cold_start.py
│ ├── infrastructure.py # MetricsStack definition
│ └── test_metrics.py
├── tracer
│ ├── __init__.py
│ ├── conftest.py # deploys TracerStack
│ ├── handlers
│ │ ├── async_capture.py
│ │ └── basic_handler.py
│ ├── infrastructure.py # TracerStack definition
│ └── test_tracer.py
└── utils
├── __init__.py
├── data_builder # build_service_name(), build_add_dimensions_input, etc.
├── data_fetcher # get_traces(), get_logs(), get_lambda_response(), etc.
    ├── infrastructure.py  # base infrastructure like deploy logic, etc.
```

Where:
#### Mechanics

Under the hood, this allows us to benefit from test and deployment parallelization, use IDE step-through debugging for a single test, and run one, a subset, or all tests while only deploying their related infrastructure, without any custom configuration.

```mermaid
classDiagram
class InfrastructureProvider {
<<interface>>
+deploy() Dict
+delete()
+create_resources()
+create_lambda_functions() Dict~Functions~
}
class BaseInfrastructure {
+deploy() Dict
+delete()
+create_lambda_functions() Dict~Functions~
+add_cfn_output()
}
class TracerStack {
+create_resources()
}
class LoggerStack {
+create_resources()
}
class MetricsStack {
+create_resources()
}
class EventHandlerStack {
+create_resources()
}
InfrastructureProvider <|-- BaseInfrastructure : implement
BaseInfrastructure <|-- TracerStack : inherit
BaseInfrastructure <|-- LoggerStack : inherit
BaseInfrastructure <|-- MetricsStack : inherit
BaseInfrastructure <|-- EventHandlerStack : inherit
```
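To make the contract above concrete, here's a minimal sketch of the base class - the method names mirror the diagram, while the signatures and docstrings are assumptions rather than the actual code in `tests/e2e/utils/infrastructure.py`:

```python
# Illustrative sketch only: names mirror the class diagram above; bodies and
# signatures are assumptions, not the actual implementation.
from abc import ABC, abstractmethod
from typing import Dict


class BaseInfrastructure(ABC):
    """Base machinery every feature stack inherits: deploy, delete, and helpers."""

    @abstractmethod
    def create_resources(self) -> None:
        """Implemented by each feature stack (Logger, Tracer, Metrics, ...)."""

    def create_lambda_functions(self) -> Dict:
        """Create one Lambda function per file found under the feature's handlers/."""
        ...

    def add_cfn_output(self, name: str, value: str) -> None:
        """Expose a value as a CloudFormation Output so tests can read it later."""
        ...

    def deploy(self) -> Dict[str, str]:
        """Synthesize and deploy the feature stack; return its CloudFormation Outputs."""
        ...

    def delete(self) -> None:
        """Tear the feature stack down once its tests finish."""
        ...
```

A feature stack then only implements `create_resources()`, as `TracerStack` and friends do in the diagram.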
### Authoring a new feature E2E test

Imagine you're going to create E2E tests for the Event Handler feature for the first time. Keep the following mental model when reading:

```mermaid
graph LR
A["1. Define infrastructure"]-->B["2. Deploy/Delete infrastructure"]-->C["3.Access Stack outputs" ]
#### 1. Define infrastructure

We use CDK as our Infrastructure as Code tool of choice. Before you start using CDK, you'd take the following steps:
```python
# tests/e2e/event_handler/infrastructure.py
from aws_cdk import CfnOutput
from aws_cdk import aws_ec2 as ec2
from aws_cdk import aws_elasticloadbalancingv2 as elbv2
from aws_cdk.aws_lambda import Function

from tests.e2e.utils.infrastructure import BaseInfrastructure


class EventHandlerStack(BaseInfrastructure):
    def create_resources(self):
        functions = self.create_lambda_functions()

        self._create_alb(function=functions["AlbHandler"])
        ...

    def _create_alb(self, function: Function):
        vpc = ec2.Vpc.from_lookup(
            self.stack,
            "VPC",
            is_default=True,
            region=self.region,
        )

        alb = elbv2.ApplicationLoadBalancer(self.stack, "ALB", vpc=vpc, internet_facing=True)
        CfnOutput(self.stack, "ALBDnsName", value=alb.load_balancer_dns_name)
        ...
```
Alongside the stack, we keep the feature's Lambda handlers under its `handlers/` directory - for example, a basic handler for the ALB:

```python
from aws_lambda_powertools.event_handler import ALBResolver, Response, content_types

app = ALBResolver()


@app.get("/todos")
def hello():
    return Response(
        status_code=200,
        content_type=content_types.TEXT_PLAIN,
        body="Hello world",
        cookies=["CookieMonster", "MonsterCookie"],
        headers={"Foo": ["bar", "zbr"]},
    )


def lambda_handler(event, context):
    return app.resolve(event, context)
```

#### 2. Deploy/Delete infrastructure when tests run

We need to create a Pytest fixture for our new feature under `tests/e2e/event_handler/conftest.py`.

This will instruct Pytest to deploy our infrastructure when our tests start, and delete it when they complete, whether tests are successful or not. Note that this file will not need any modification in the future.
```python
import pytest

from tests.e2e.event_handler.infrastructure import EventHandlerStack


@pytest.fixture(autouse=True, scope="module")
def infrastructure():
    """Setup and teardown logic for E2E test infrastructure

    Yields
    ------
    Dict[str, str]
        CloudFormation Outputs from deployed infrastructure
    """
    stack = EventHandlerStack()
    try:
        yield stack.deploy()
    finally:
        stack.delete()
```

#### 3. Access stack outputs for E2E tests

Within our tests, we should now have access to the `infrastructure` fixture we defined. We can access any Stack Output using Pytest dependency injection.
```python
@pytest.fixture
def alb_basic_listener_endpoint(infrastructure: dict) -> str:
    dns_name = infrastructure.get("ALBDnsName")
    port = infrastructure.get("ALBBasicListenerPort", "")
    return f"http://{dns_name}:{port}"


def test_alb_headers_serializer(alb_basic_listener_endpoint):
    # GIVEN
    url = f"{alb_basic_listener_endpoint}/todos"
    ...
```

### Internals

#### Test runner parallelization

Besides speed, we parallelize our end-to-end tests to ease asserting async side-effects that may take a while per test too, e.g., traces becoming available. The following diagram demonstrates the process we take every time you use `make e2e`:

```mermaid
graph TD
A[make e2e test] -->Spawn{"Split and group tests <br>by feature and CPU"}
Spawn -->|Worker0| Worker0_Start["Load tests"]
Spawn -->|Worker1| Worker1_Start["Load tests"]
Spawn -->|WorkerN| WorkerN_Start["Load tests"]
Worker0_Start -->|Wait| LambdaLayer["Lambda Layer build"]
Worker1_Start -->|Wait| LambdaLayer["Lambda Layer build"]
WorkerN_Start -->|Wait| LambdaLayer["Lambda Layer build"]
LambdaLayer -->|Worker0| Worker0_Deploy["Launch feature stack"]
LambdaLayer -->|Worker1| Worker1_Deploy["Launch feature stack"]
LambdaLayer -->|WorkerN| WorkerN_Deploy["Launch feature stack"]
Worker0_Deploy -->|Worker0| Worker0_Tests["Run tests"]
Worker1_Deploy -->|Worker1| Worker1_Tests["Run tests"]
WorkerN_Deploy -->|WorkerN| WorkerN_Tests["Run tests"]
Worker0_Tests --> ResultCollection
Worker1_Tests --> ResultCollection
WorkerN_Tests --> ResultCollection
ResultCollection{"Wait for workers<br/>Collect test results"}
ResultCollection --> TestEnd["Report results"]
    ResultCollection --> DeployEnd["Delete Stacks"]
```
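The "Lambda Layer build" wait step is the interesting part: session-scoped fixtures run once per pytest-xdist worker, so the one-time build needs cross-worker coordination. A sketch of one way to do it, following the lock-file pattern from the pytest-xdist docs - `build_layer()` is a hypothetical stand-in for the actual packaging step, and the real `tests/e2e/conftest.py` may differ:

```python
# Hypothetical sketch of cross-worker, build-once coordination.
import json
from pathlib import Path

import pytest
from filelock import FileLock  # third-party; pattern from the pytest-xdist docs


def build_layer() -> str:
    """Stand-in for the actual Lambda Layer packaging/publishing step."""
    return "arn:aws:lambda:region:account:layer:powertools-dev:1"


@pytest.fixture(scope="session", autouse=True)
def lambda_layer(tmp_path_factory: pytest.TempPathFactory, worker_id: str) -> str:
    if worker_id == "master":  # pytest-xdist disabled; no coordination needed
        return build_layer()

    # all workers share the same parent temp dir; the first one in builds,
    # the rest read the cached result
    root_tmp_dir: Path = tmp_path_factory.getbasetemp().parent
    cache = root_tmp_dir / "layer.json"
    with FileLock(f"{cache}.lock"):
        if cache.is_file():
            return json.loads(cache.read_text())
        layer_arn = build_layer()
        cache.write_text(json.dumps(layer_arn))
        return layer_arn
```

This matches the diagram: all workers block on the same build step, and only the first one pays its cost.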
#### CDK CLI parallelization

For CDK CLI to work with independent CDK Apps, we specify an output directory when synthesizing our stack, and deploy from said output directory.

```mermaid
flowchart TD
subgraph "Deploying distinct CDK Apps"
EventHandlerInfra["Event Handler CDK App"] --> EventHandlerSynth
TracerInfra["Tracer CDK App"] --> TracerSynth
EventHandlerSynth["cdk synth --out cdk.out/event_handler"] --> EventHandlerDeploy["cdk deploy --app cdk.out/event_handler"]
TracerSynth["cdk synth --out cdk.out/tracer"] --> TracerDeploy["cdk deploy --app cdk.out/tracer"]
    end
```
We create the typical CDK app file at runtime when tests run:
```python
from tests.e2e.event_handler.infrastructure import EventHandlerStack

stack = EventHandlerStack()
stack.create_resources()
stack.app.synth()
```

When we run E2E tests for a single feature or all of them, our `cdk.out` directory looks like this:

```text
total 8
drwxr-xr-x 18 lessa staff 576B Sep 6 15:38 event-handler
drwxr-xr-x 3 lessa staff 96B Sep 6 15:08 layer_build
-rw-r--r-- 1 lessa staff 32B Sep 6 15:08 layer_build.diff
drwxr-xr-x 18 lessa staff 576B Sep 6 15:38 logger
drwxr-xr-x 18 lessa staff 576B Sep 6 15:38 metrics
drwxr-xr-x  22 lessa  staff   704B Sep  9 10:52 tracer
```

```mermaid
classDiagram
class CdkOutDirectory {
feature_name/
layer_build/
layer_build.diff
}
class EventHandler {
manifest.json
stack_outputs.json
cdk_app_V39.py
asset.uuid/
...
}
class StackOutputsJson {
BasicHandlerArn: str
ALBDnsName: str
...
}
CdkOutDirectory <|-- EventHandler : feature_name/
StackOutputsJson <|-- EventHandler
```

Where:
Together, all of this allows us to use Pytest like we would for any project, and use CDK CLI and its context methods.
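For reference, a minimal sketch of what that deploy step could look like when shelled out from Python - `--app`, `--require-approval`, and `--outputs-file` are real CDK CLI options, while the paths, names, and return shape are assumptions:

```python
# Illustrative sketch of driving the CDK CLI against a per-feature
# output directory; not the actual implementation.
import json
import subprocess
from pathlib import Path


def deploy_feature(feature: str, stack_name: str) -> dict:
    app_dir = Path("cdk.out") / feature           # pre-synthesized cloud assembly
    outputs_file = app_dir / "stack_outputs.json"

    subprocess.run(
        [
            "cdk", "deploy",
            "--app", str(app_dir),                # deploy from the synthesized assembly
            "--require-approval", "never",        # non-interactive, CI-friendly
            "--outputs-file", str(outputs_file),  # persist CloudFormation outputs
        ],
        check=True,
    )

    # the outputs file maps stack name -> {output key: value}
    return json.loads(outputs_file.read_text()).get(stack_name, {})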
```

Keeping one assembly directory per feature is what lets each pytest-xdist worker deploy without clobbering the others.
### Is this related to an existing feature request or issue?
#1009
### Which AWS Lambda Powertools utility does this relate to?
Other
### Summary
Build a mechanism to run end-to-end tests on the Lambda Powertools library using real AWS services (Lambda, DynamoDB, etc.).
Initially, tests can be run manually by maintainers on a specific branch/commit_id to ensure the expected feature works.
Tests should be triggered in GitHub, but maintainers/contributors should also be able to run them in their local environment using their own AWS account.
### Use case
Providing a mechanism to run end-to-end tests in a real, live environment allows us to discover a different class of problems we cannot otherwise find by running unit or integration tests - for example, how the code base behaves in Lambda during cold and warm starts, event source misconfiguration, IAM permissions, etc. It also allows us to validate integration with external services (CloudWatch Logs, X-Ray) and ensure the final, real user experience is what we expect.
#### When it should be used
Examples
#### When an integration test may be more appropriate instead
Integration testing would be a better fit when we can increase confidence by covering code base -> AWS service(s) interactions. These can give us a faster feedback loop while reducing the permutations of E2E tests we might need to cover the end-user perspective, permissions, etc.
Examples
### Proposal
#### Overview
#### What an E2E test would look like
#### Details
#### GitHub configuration
#### Test setup
Tests will follow a common directory structure to allow us to parallelize infrastructure creation and test execution for all feature groups. Tests within a feature group, say `tracer/`, are run sequentially and independently from other feature groups.

Test fixtures will provide the necessary infrastructure and any relevant information tests need to succeed, e.g. a Lambda function ARN. Helper methods will also be provided to hide integration details and ease test creation.

Once there are no more tests to run, infrastructure resources are automatically cleaned up, and results are synchronized and returned to the user.
#### Directory Structure
#### Explanation
- `utils` directory has utilities to simplify writing tests, and an infrastructure module used for deploying infrastructure

Note: in the first phase we may reuse the same infrastructure helper class in all test groups. If we decide we need more infrastructure configuration granularity per group, we will create sub-classes from the core infra class and override the method responsible for describing infrastructure in CDK.
#### Reasoning
Keeping the infrastructure creation module separate from the test groups helps reuse infrastructure across multiple tests within a feature group. It also allows us to benchmark tests and infra separately in the future, and helps contributors write tests without being expected to dive deep into the infra creation mechanism.
#### General Flow Diagram
#### What's in a test
Sample test using Pytest as our test runner:
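A hedged sketch of the shape such a test could take - the `execute_lambda` fixture and `helpers.get_logs` names come from the explanation below, while the import path, signatures, and return shapes are assumptions:

```python
# Hypothetical sketch only: fixture and helper names come from this RFC's
# explanation; everything else (paths, keys, shapes) is assumed.
from tests.e2e import helpers


def test_basic_lambda_logs_visible(execute_lambda: dict):
    # GIVEN a Lambda function deployed by the execute_lambda fixture
    function_name = execute_lambda["function_name"]
    start_time = execute_lambda["execution_time"]

    # WHEN we fetch its logs from CloudWatch Logs
    logs = helpers.get_logs(function_name=function_name, start_time=start_time)

    # THEN entries emitted by Powertools Logger are present
    assert logs, f"No logs found for {function_name}"
```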
- `execute_lambda` fixture is responsible for deploying infrastructure and running our Lambda functions. They yield back their ARNs, execution time, etc., which can be used by helper functions, tests themselves, and maybe other fixtures
- `helpers.get_logs` functions fetch logs from CloudWatch Logs
- Tests follow the same `GIVEN/WHEN/THEN` structure as other parts of the project

### Out of scope
### Potential challenges
#### Multiple Lambda Layers creation

By using the `pytest-xdist` plugin we can easily parallelise tests per group: create infrastructure and run tests in parallel. This leads to the Powertools Lambda layer being created 3 times, which puts unnecessary pressure on CPU/RAM/IOPS. We should optimise the solution to create the layer only once and then run the parallelised tests with a reference to this layer.

#### CDK-owned S3 bucket
As a CDK prerequisite, we bootstrap the account for CDK usage by issuing `cdk bootstrap`. Since the S3 bucket created by CDK doesn't have a lifecycle policy to remove old artefacts, we need to customize the default template used by the CDK `bootstrap` command, and attach it to the feature README file with a good description of how to use it.

### Dependencies and Integrations
#### CDK
AWS CDK is responsible for synthesizing the provided code into a CloudFormation stack, not for deployment. We will use the AWS SDK to deploy the generated CloudFormation stack instead.
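A sketch of that split, assuming placeholder names - the CDK library synthesizes an in-memory template, which boto3 (the AWS SDK) then deploys:

```python
# Sketch of the proposed split: CDK synthesizes, the AWS SDK deploys.
# Stack contents are placeholders; the CDK and boto3 calls themselves are real.
import json

import boto3
from aws_cdk import App, Stack


def deploy_synthesized_stack(stack_name: str) -> None:
    app = App(outdir="cdk.out")
    Stack(app, stack_name)  # a real feature stack would add resources here
    assembly = app.synth()
    template = assembly.get_stack_by_name(stack_name).template

    cfn = boto3.client("cloudformation")
    cfn.create_stack(StackName=stack_name, TemplateBody=json.dumps(template))
    cfn.get_waiter("stack_create_complete").wait(StackName=stack_name)
```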
During evaluation (see the Alternative solutions section), this approach offered the best compromise between deployment speed, infrastructure code readability, and maintainability.

#### Helper functions
Helper functions will be testing utilities that integrate with the AWS services tests need, hiding unnecessary complexity.
Examples
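For instance, a log-fetching helper might look like this - a sketch where the boto3 calls are real, and the name `get_logs` follows this RFC's `helpers.get_logs` idea while its signature and details are assumptions:

```python
# Sketch of a helper hiding the CloudWatch Logs integration.
from typing import List

import boto3


def get_logs(function_name: str, start_time: int, filter_pattern: str = "") -> List[dict]:
    logs = boto3.client("logs")
    paginator = logs.get_paginator("filter_log_events")
    events: List[dict] = []
    for page in paginator.paginate(
        logGroupName=f"/aws/lambda/{function_name}",  # default Lambda log group
        startTime=start_time,                         # epoch milliseconds
        filterPattern=filter_pattern,
    ):
        events.extend(page["events"])
    return events
```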
### Alternative solutions
- Use CDK CLI to deploy infrastructure directly, instead of custom code to synthesise the CDK code, deploy assets, and run an AWS CloudFormation deployment - dropped to avoid running the CLI from a subprocess (added latency) and to avoid additional Node dependencies
- Write an AWS CodeBuild pipeline on the AWS account side that would run tests stored outside of the project, with no configuration or tests exposed in the Powertools repo - dropped due to the initial assumption that we want end-to-end tests to be part of the project, to increase visibility, and to allow contributors to run those tests on their own during the development phase
- Instead of using CloudFormation with multiple Lambdas deployed, I also considered using a hot-swap mechanism - either via CDK or a direct API call. Based on the latency measured, CloudFormation seems the fastest option. Attaching my findings.
### Additional material
### Acknowledgment
Should this be considered in other Powertools languages? i.e. Java, TypeScript