Merge pull request #19 from WVAviator/development
v0.0.15
Showing 81 changed files with 3,504 additions and 894 deletions.
```yaml
name: Deploy

on:
  workflow_dispatch:
  push:
    branches:
      - main

jobs:
  deploy:
    runs-on: ubuntu-latest
    permissions:
      contents: write
      pages: write
      id-token: write
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0
      - name: Install latest mdbook
        run: |
          tag=$(curl 'https://api.github.com/repos/rust-lang/mdbook/releases/latest' | jq -r '.tag_name')
          url="https://github.com/rust-lang/mdbook/releases/download/${tag}/mdbook-${tag}-x86_64-unknown-linux-gnu.tar.gz"
          mkdir mdbook
          curl -sSL "$url" | tar -xz --directory=./mdbook
          echo "$(pwd)/mdbook" >> "$GITHUB_PATH"
      - name: Build Book
        run: |
          cd docs
          mdbook build
      - name: Setup Pages
        uses: actions/configure-pages@v2
      - name: Upload artifact
        uses: actions/upload-pages-artifact@v1
        with:
          path: 'docs/book' # mdbook writes its output to <book root>/book, and the book root here is ./docs
      - name: Deploy to GitHub Pages
        id: deployment
        uses: actions/deploy-pages@v1
```
A one-line ignore file for mdbook's generated `book` output directory:

```
book
```
```toml
[book]
authors = ["Alexander Durham (WVAviator)"]
language = "en"
multilingual = false
src = "src"
title = "Capti Documentation"
```
# Summary

[Introduction](./introduction.md)
[Installation](./installation.md)

- [Getting Started](./getting_started.md)
- [Testing Basics](./writing_tests.md)

- [Configuration](./configuration.md)
  - [Config File](./configuration/config.md)
  - [Setup Scripts](./configuration/scripts.md)
  - [Suite Configuration](./configuration/suites.md)
  - [Test Configuration](./configuration/tests.md)

- [Matchers](./matchers.md)
  - [$exists](./matchers/exists.md)
  - [$absent](./matchers/absent.md)
  - [$regex](./matchers/regex.md)
  - [$length](./matchers/length.md)
  - [$empty](./matchers/empty.md)
  - [$includes](./matchers/includes.md)
  - [$not](./matchers/not.md)

- [Variables](./variables.md)
  - [Complex Variables](./variables/complex.md)
  - [Extracting Variables](./variables/extracting.md)
  - [Environment Variables](./variables/env_variables.md)

---

- [Contributing](./contributing.md)
- [Reporting Issues](./reporting_issues.md)
# Configuration
# Config File

Capti config files let you define settings that apply to all of your test suites and influence how those suites are processed. Many of the same configuration options are available on a per-suite basis as well. One useful component of the configuration is specifying scripts that should run before and after your tests - for example, starting your server.

## Setup

To create a config file, simply include a file named `capti-config.yaml` in your tests folder. It will automatically be parsed as Capti's configuration.

### Custom Config

If you would prefer to name your config differently, or to keep the config in a location separate from your tests, you can pass the `--config` or `-c` argument when running Capti. For example, say you want to keep your config file in a separate directory in your project:

```
.
├── src/
│   └── index.ts
├── tests/
│   └── hello.yaml
├── config/
│   └── capti.yaml
└── .gitignore
```

You can then run Capti with both paths specified:

```bash
$ capti --path ./tests --config ./config/capti.yaml
```
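Putting it together, here is a minimal sketch of what a custom `config/capti.yaml` might contain. The `setup` keys are described in the [Setup Scripts](./scripts.md) chapter; the script and output text below are placeholders for your own server:

```yaml
# config/capti.yaml - a minimal sketch; the script and output text are placeholders
setup:
  before_all:
    - script: npm start
      description: start server
      wait_until: output 'Server running on port 3000'
```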
# Setup Scripts

If you would like Capti to run commands or scripts before executing tests, whether for continuous integration workflows or just for convenience, you can specify scripts to run along with an optional `wait_until` parameter that determines when to continue with executing your tests or additional scripts.

## Adding Scripts

Setup scripts should be listed in sequential order under `before_all` or `after_all` in your config file.

```yaml
# tests/capti-config.yaml

setup:
  before_all:
    - script: npm start
      description: start server
      wait_until: output 'Server running on port 3000'
```
You can additionally include `before_each` and `after_each` scripts in your individual test suites. Keep in mind that config-level scripts will all execute first.

```yaml
# tests/hello.yaml

suite: /hello endpoint tests
description: tests the various HTTP methods of the /hello endpoint
setup:
  before_all:
    - script: echo 'starting hello suite'
  before_each:
    - script: ./scripts/reset-test-db.sh
      description: reset test db
      wait_until: finished
```

## Wait Until Options

There are a few different options to choose from when deciding how to wait for your scripts to finish. By default, if `wait_until` is not included, execution immediately continues with your script running in the background. This is not always what you want - when starting a server, for example, you need to give it time to fully spin up before you start testing its endpoints.

- `wait_until: finished` - Executes the command/script/program and waits synchronously for it to finish before proceeding.
- `wait_until: 5 seconds` - Executes the script and then waits the specified number of seconds before continuing.
- `wait_until: port 3000` - Executes the script and waits for the specified port to open. If the port already has an open connection, the script will not execute.
- `wait_until: output 'Server listening on port 3000'` - Executes the script and then waits for the specified console output from your server. This is useful in cases where the port may be open but the server is still not quite ready to take requests.
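The `port` option amounts to polling the TCP port until a connection succeeds. If you ever need to replicate that behavior in your own shell scripts, a rough bash sketch might look like the following. This is an illustration only, not Capti's actual implementation, and `wait_for_port` is a made-up helper name:

```shell
#!/usr/bin/env bash
# Illustrative sketch only - NOT Capti's implementation.
# Polls a local TCP port until a connection succeeds or a timeout elapses.
wait_for_port() {
  local port="$1"
  local timeout_secs="${2:-30}"
  local deadline=$((SECONDS + timeout_secs))
  # bash's /dev/tcp pseudo-device attempts a TCP connection when opened
  while ! (exec 3<>"/dev/tcp/127.0.0.1/${port}") 2>/dev/null; do
    if ((SECONDS >= deadline)); then
      return 1  # timed out waiting for the port to open
    fi
    sleep 0.5
  done
}
```

For example, `wait_for_port 3000 30` returns success as soon as something is listening on port 3000, or failure after roughly 30 seconds.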

## Examples

Here is a simple script to start a server and check that the port connection is open before proceeding. (Note the inline `NODE_ENV=test npm start` form, which places the variable in the environment of `npm start`.)

```yaml
setup:
  before_all:
    - description: start app server
      script: NODE_ENV=test npm start
      wait_until: port 3000
```

Here is an example from a project that uses Docker Compose to spin up both a database and a server. This Unix script checks whether Docker Compose is already running and, if not, starts it. If it is already started, the `wait_until` output is still detected because the script `echo`s the same output text. Note: make sure you update the output text to match your server's log message when it becomes ready.

```yaml
setup:
  before_all:
    - description: "Start db and server"
      wait_until: output "Listening on 3000"
      script: >
        if ! docker-compose ps | grep -q " Up "; then
          docker-compose up
        else
          echo "Listening on 3000"
        fi
```

## Considerations

Running setup scripts is entirely optional and merely provides a convenience for development. You may instead choose to start your server manually first and then run Capti.

> **Why doesn't Capti just integrate directly with the server?**
>
> The goal of Capti is to provide the convenience of platform-agnostic test suites to run with your project, without directly coupling to your server (and behaving more like a user). If you want a framework that integrates more tightly with your server, you can look into a tool like supertest for NodeJS or MockMvc with Java/Spring.
# Suite Configuration

You have a few options to work with when configuring your test suites. You can define [setup scripts](./scripts.md) that execute before or after your tests, specify whether tests should run in parallel or sequentially, and define static variables to be used throughout your test suite.

## Setup Scripts

See [setup scripts](./scripts.md) for more information on how to create setup scripts. These scripts execute command-line programs or utilities before and after your tests, and they can be useful when you want to do some specific preparation, like resetting a database, before testing.

## Parallel Testing

In general, you should prefer sequential testing over parallel testing. Capti test suites are each meant to simulate individual users interacting with your application, and a user would not typically be visiting multiple endpoints concurrently.

> Note: When you have multiple test suites defined, the test _suites_ will always run concurrently. Each suite should be designed to simulate a user, and multiple users should be able to interact with your API concurrently and deterministically. Your suites should never rely on the state of other test suites. The individual _tests_ in a suite should, in the majority of cases, run sequentially.

There are some cases in which the tests within a suite should run in parallel. One example would be grouping together multiple tests of several different _public_ and _stateless_ endpoints. In these cases, you can specify `parallel: true` so that all tests in the suite run in parallel.

```yaml
suite: "Published recipes"
description: "This suite tests multiple sequential accesses to the public endpoints returning published recipe information."
parallel: true
```

> Note: You cannot _extract_ variables when specifying `parallel: true`. Referencing an extracted variable in a later request is not possible when all requests run concurrently.

## Variables

You can define static variables to be used throughout the tests in your suites with the `variables:` mapping. These variables expand to the specified value, sequence, or mapping wherever they are used. You can learn more in the [variables chapter](../variables.md).

```yaml
suite: "Create Recipe"
description: "This suite involves creating a new recipe and fetching its information."
variables:
  BASE_URL: http://localhost:3000
  USER_EMAIL: [email protected]
  USER_PASSWORD: abc123!
```
# Test Configuration

In addition to defining a `request` and an `expect` mapping for each test, you can also define the following settings.

## Print Response

By setting `print_response: true` on a test, the complete response status, headers, and body will be printed to the console when the test runs. This can be useful for debugging a failing test.
```bash
== Response: (Sign up) =======

Status: 200

Headers:
  ▹ "x-powered-by": "Express"
  ▹ "content-type": "application/json; charset=utf-8"
  ▹ "content-length": "130"
  ▹ "etag": "W/"82-l0Mhda3RFUb75lW/cRtznG5a9jI""
  ▹ "set-cookie": "connect.sid=s%3A0D8I6wmav5gUclgFPWA9u9WvCQ4oSNo7.u7xk7r6XkMbMdwsVtwArBZ1Q0DFT0pzo72tWRuh9JA8; Path=/; HttpOnly"
  ▹ "date": "Sat, 17 Feb 2024 21:55:29 GMT"
  ▹ "connection": "keep-alive"
  ▹ "keep-alive": "timeout=5"

Body:
{
  "email": "[email protected]",
  "displayName": "john-smith",
  "_id": "65d12b5182456857b2b9c8ce",
  "__v": 0,
  "id": "65d12b5182456857b2b9c8ce"
}

==============================
```
## Should Fail

Setting `should_fail: true` on a test, as expected, asserts that the test should fail. In most cases, however, you should be able to achieve this functionality with the right [matchers](../matchers.md) in your `expect` definition.

This example uses the `should_fail` attribute to ensure the test does not pass with a successful status.

```yaml
- test: "Protected route"
  description: "Attempting to access protected route without signin or signup"
  should_fail: true
  request:
    method: GET
    url: "${BASE_URL}/recipes"
  expect:
    status: 2xx
    body:
      recipes: $exists
```

However, a more declarative and idiomatic pattern is to use matchers to assert the expected 400-level status code and the absence of response body information. This also enables asserting the correct error status code: if the endpoint actually returned a 404 or a 500-level status, the test above would pass, whereas this test would still detect the error and fail.

```yaml
- test: "Protected route"
  description: "Attempting to access protected route without signin or signup"
  request:
    method: GET
    url: "${BASE_URL}/recipes"
  expect:
    status: 403
    body:
      recipes: $absent
```
# Contributing

If you're interested in contributing to Capti, please reach out! I'd love to collaborate on new ideas and the future of Capti tests. Feel free to help with any of the items below, or by [creating an issue](./reporting_issues.md).

## Development

If you're even just a little familiar with Rust, I'd love your help developing components of Capti and fixing bugs - especially if you have experience working with serde or reqwest.

Additionally, if you'd like to help write some Node/Express/TypeScript code for the test app so that more scenarios can be used for examples and testing, that would be a huge help.

The future of Capti may involve elements such as custom LSPs, syntax highlighting for YAML files, VSCode extensions, etc. If you have experience with any of these, I would appreciate your help.

## Testing

Want to help test Capti? If you happen to have a few REST APIs lying around that you've developed over the years, use Capti to create some tests for them. Try testing various use cases and see if you come across any limitations or issues.

## Feature Suggestions

Have an idea that might improve Capti? I'm all ears. I have a lot of ideas I'd like to implement in the future, but I'm always open to new and thoughtful suggestions - especially from users of the framework.