Maintenance: run e2e tests in parallel #1512
If you run them with … If we want to run them in parallel, we could write a bash script that does that, like … This, however, will mix all the …

This is something we should keep an eye on; however, in most cases I'm running only specific groups/utilities locally and then run the full suite only on the repo. This also makes it easier to collaborate on results and troubleshooting when needed.
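A minimal sketch of what such a script could look like, assuming npm workspaces and purely illustrative workspace names:

```sh
#!/usr/bin/env bash
# Sketch only: launch each workspace's e2e suite as a background job,
# then wait for all of them. Workspace names below are illustrative.
for ws in packages/logger packages/metrics packages/tracer; do
  npm run test:e2e -w "$ws" &
done
wait
```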
Been looking into this while working on other issues, and running the tests in parallel with a fairly crude method caused me to hit rate limits on deployment. Specifically, I ran three terminal sessions like this:
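A plausible shape for those sessions, with workspace names as assumptions (one suite per terminal):

```sh
# Hypothetical reconstruction: each terminal runs one workspace's e2e suite
# Terminal 1
npm run test:e2e -w packages/logger
# Terminal 2
npm run test:e2e -w packages/metrics
# Terminal 3
npm run test:e2e -w packages/tracer
```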
I wonder if this will be an issue if we try to run multiple package/runtime combos concurrently.
I have been spending some time investigating this and wanted to share some updates, as well as a low-hanging/low-effort way of mitigating this in the short term.

The main way of running the integration tests in all the workspaces is via npm workspaces, and can be done with this command:

```sh
npm run test:e2e -ws
# ... many logs
# duration 10m 55s
```

This runs the tests one workspace at a time, in the order in which they appear in the root `package.json`.

As I mentioned in one of the comments above, another option that is enabled (but not documented) in our repo would be to use Lerna:

```sh
npx lerna exec --no-bail --no-sort --stream --concurrency 8 -- npm run test:e2e
# alias npm run test:e2e:parallel
# ... many logs
lerna success exec Executed command in 11 packages: "npm run test:e2e"
# duration 3m 35s
```

The concurrency can be adjusted via the respective flag; setting it to 8 includes all the utilities that have tests to date, which amounts to maximum concurrency. Running the tests this way cuts the total time by roughly two thirds (from ~11 minutes to ~3.5 minutes). However, the experience is not great imo, because the logs are all mixed (see below) and, since they are streamed, things like progress bars won't work and instead generate one line for each update:

```
@aws-lambda-powertools/metrics: arn:aws:cloudformation:eu-west-1:12345678901:stack/Metrics-18-x86-77c72-BasicFeatures-Decorators/0c06b3f0-6c0b-11ee-8eaf-02b78ef81927
@aws-lambda-powertools/idempotency: ✅ Idempotency-18-x86-12cfb-makeHandlerIdempoten
@aws-lambda-powertools/tracer: arn:aws:cloudformation:eu-west-1:12345678901:stack/Tracer-18-x86-a6e80-AllFeatures-AsyncDecorato/262b5c40-6c0b-11ee-9da3-0acf4e5b4307
```
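For reference, the `test:e2e:parallel` alias mentioned in the output above would presumably live in the root `package.json`; a sketch of what such an entry could look like (the actual entry in the repo may differ):

```json
{
  "scripts": {
    "test:e2e:parallel": "lerna exec --no-bail --no-sort --stream --concurrency 8 -- npm run test:e2e"
  }
}
```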
Additionally, since we must use the …

However, if you want a quick way of running all the tests and are reasonably confident that they will pass (i.e. you don't have to consume the logs), then this is the way to do it. Both methods are now documented in the updated maintainers playbook that is being added in the linked PR.

Note: below are some considerations on an alternative tool that I have been investigating but that I'm not yet set on. If you're not interested in reading about it, you can stop here.

Investigating other tools

I have been looking at several tools used in monorepos and trying some of them. One of them is called wireit. I have tried wireit in a personal project that uses a monorepo with 3 workspaces, and it looks promising, especially when it comes to caching and parallelizing scripts; however, it requires a significant lift in terms of configuring and maintaining the wiring config.

For example, imagine you want to have some shared npm scripts that live in the main `package.json` and are consumed by a workspace, as in this graph:

```mermaid
flowchart LR
npm-run-frontend:deploy-->deploy
build-->exportCDKoutputs
subgraph frontend
deploy-->build
end
subgraph root
exportCDKoutputs
end
```

Below is an excerpt of the root `package.json`:

```json
{
  "scripts": {
    "exportCDKoutputs": "wireit"
  },
  "wireit": {
    "exportCDKoutputs": {
      "command": "ts-node ./scripts/exportCDKoutputs.ts",
      "files": [
        "./infrastructure/cdk.out/params.json",
        ".env"
      ],
      "output": [
        ".env"
      ],
      "clean": false
    }
  }
}
```
and this is another from the `frontend` workspace's `package.json`:

```json
{
"scripts": {
"build": "rimraf dist && mkdir dist && sh build.sh",
"deploy": "wireit"
},
"wireit": {
"deploy": {
"command": "export $(cat ../.env | xargs) && aws s3 sync dist s3://$WEB_STATIC_ASSETS_BUCKET_NAME --delete",
"dependencies": [
"../:exportCDKoutputs",
"build"
],
"WEB_STATIC_ASSETS_BUCKET_NAME": {
"external": true
}
}
}
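For illustration, with the two configs above in place, the deploy can be triggered from the repo root; wireit resolves the dependency graph first, running (or cache-skipping) `../:exportCDKoutputs` and `build` before `deploy`. A sketch, assuming npm workspaces:

```sh
# wireit runs the declared dependencies first and skips them
# when their input files are unchanged
npm run deploy --workspace frontend
```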
Another interesting aspect of wireit is the caching: as you can see, it allows you to specify input `files` and `output` artifacts for each script, so a script can be skipped when its inputs haven't changed.

The reason why I'm still not sure about it, however, is the complexity of setting this up. The above configs express a single relationship between three scripts across two workspaces, and as you can see it's very verbose and somewhat hard to reason about. Additionally, the caching features are enabled by default, and it's very easy to shoot yourself in the foot if you don't know exactly what artefacts are being generated and how they are used by all the scripts in the dependency tree.

Nevertheless, I think it's something we could potentially consider in the future, but at this time I don't think we have enough bandwidth to focus on this, especially because the potential time savings are still unclear.
Summary
While we run e2e tests in a matrix, it would be great to also be able to execute these tests during development.
Why is this needed?
To reduce feedback cycle time for e2e tests during development.
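For example, being able to run a single utility's e2e suite locally would already shorten the loop; a sketch, assuming the npm-workspaces layout discussed in the comments above (workspace name illustrative):

```sh
# Run one workspace's e2e tests instead of the whole suite
npm run test:e2e -w packages/logger
```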
Which area does this relate to?
Automation
Solution
@dreamorosi already opened a discussion with pointers to RFCs. I don't have a solution yet, so any recommendations and contributions are appreciated.
Acknowledgment
Future readers
Please react with 👍 and your use case to help us understand customer demand.