- Catch regressions.
- Improve the code being tested:
  structure, quality, security, performance; "shakes out" implicit assumptions, etc.
- Extremely instructive:
  once you've fully tested a single function, you'll understand that code very well indeed.
- Fun!
  Yes, really! Don't believe me? Try it! ;)
As an example, to run all agent unit tests:

```bash
$ cd $GOPATH/src/github.com/kata-containers/kata-containers
$ cd src/agent
$ make test
```
- Identify the full name of all the tests in the current package:

  ```bash
  $ cargo test -- --list
  ```
- Identify the full name of all tests in the `foo` "local crate" (a sub-directory containing another `Cargo.toml` file):

  ```bash
  $ cargo test -p "foo" -- --list
  ```
- Run a test in the current package in verbose mode:

  ```bash
  # Example
  $ test="config::tests::test_get_log_level"
  $ cargo test "$test" -vv -- --exact --nocapture
  ```
To generate a unit test coverage report:

```bash
$ cargo install cargo-tarpaulin
$ cd $GOPATH/src/github.com/kata-containers/kata-containers/src/agent
$ cargo -v tarpaulin --all-features --run-types AllTargets --count --force-clean -o Html
$ xdg-open "file://$PWD/tarpaulin-report.html"
```
To be testable, a function should:

- Not be "too long" (say, more than 100 lines).
- Not be "too complex" (say, more than 3 levels of indentation).
- Return a `Result` or an `Option` so that error paths can be tested.
- If functions don't conform, they need to be reworked (refactored) before writing tests.
- Some functions can't be fully tested.
  - However, you can still test the initial code that checks the parameter values, i.e. test the error paths only (see the sketch below).
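As a hedged sketch of the idea (the function and its behaviour are invented for illustration): a function whose happy path cannot run in a unit test can still have its parameter checks exercised:

```rust
// Hypothetical example: the success path reboots the machine,
// so only the parameter-checking error path is unit tested.
fn reboot_with_delay(delay_secs: i64) -> Result<(), String> {
    if delay_secs < 0 {
        return Err(format!("invalid delay: {}", delay_secs));
    }

    // Cannot be run from a unit test.
    unimplemented!("would reboot the machine");
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_reboot_with_delay_rejects_bad_params() {
        // Only the error path is tested.
        assert!(reboot_with_delay(-1).is_err());
    }
}
```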
- KISS: Keep It Simple, Stupid.
  You don't get extra points for cryptic code.
- DRY: Don't Repeat Yourself.
  Make use of existing facilities (don't "re-invent the wheel").
- Read the unit test advice document.
- Attack the function in all possible ways.
- Use the table-driven approach (see the sketch after this list):
  - Simple.
  - Compact.
  - Easy to debug.
  - Makes boundary analysis easy.
  - Encourages functions to be testable.
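As an illustration (the function under test, `add_one`, is invented for this sketch), a table-driven test might look like this:

```rust
fn add_one(value: i32) -> Result<i32, String> {
    if value == i32::MAX {
        return Err("value too big".to_string());
    }

    Ok(value + 1)
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_add_one() {
        #[derive(Debug)]
        struct TestData {
            value: i32,
            result: Result<i32, String>,
        }

        let tests = &[
            TestData { value: 0, result: Ok(1) },
            TestData { value: -1, result: Ok(0) },
            // Boundary analysis is easy: just add another row.
            TestData { value: i32::MAX, result: Err("value too big".to_string()) },
        ];

        for (i, d) in tests.iter().enumerate() {
            // Show which table entry failed if an assertion trips.
            let msg = format!("test[{}]: {:?}", i, d);

            assert_eq!(add_one(d.value), d.result, "{}", msg);
        }
    }
}
```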
- Create a new `tests` module if necessary.
- Give each test function a `test_` prefix.
- Add the `#[test]` annotation on each test function.
- If you need to `use` (import) packages for the tests, only do it in the `tests` module:

  ```rust
  use some_test_pkg::{foo, bar}; // <-- Not here

  #[cfg(test)]
  mod tests {
      use super::*;
      use some_test_pkg::{foo, bar}; // <-- Put it here
  }
  ```
- You can add test-specific dependencies in `Cargo.toml`:

  ```toml
  [dev-dependencies]
  serial_test = "0.5.1"
  ```
- Don't add lots of error handling code: let the test panic!

  ```rust
  // This will panic if the unwrap fails.
  //
  // - NOT acceptable generally for production code.
  // - PERFECTLY acceptable for test code since:
  //   - it keeps the test code simple.
  //   - Rust will detect the panic and fail the test.
  let result = func().unwrap();
  ```
- Comment out all tests in your `TestData` array apart from the failing test.
- Add temporary `println!("FIXME: ...")` statements in the code.
- Set `RUST_BACKTRACE=full` before running `cargo test`.
- Use a debugger (not normally necessary though):

  ```bash
  # Disable optimisation
  $ RUSTFLAGS="-C opt-level=0" cargo test --no-run

  # Find the test binary
  $ test_binary=$(find target/debug/deps | grep "kata_agent-[a-z0-9][a-z0-9]*$" | tail -1)

  $ rust-gdb "$test_binary"
  ```
- Always start a test with a "clean environment":
  create a new set of objects / files / directories / etc. for each test (see the `tempfile` sketch after this list).
- Mounts:
  - Linux allows mounts on top of existing mounts.
  - Bind mounts and read-only mounts can be useful.
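For example (a sketch assuming the `tempfile` crate is available as a dev-dependency), each test can create its own private scratch directory:

```rust
#[cfg(test)]
mod tests {
    use tempfile::tempdir;

    #[test]
    fn test_creates_config_file() {
        // A fresh directory for this test only: no state leaks in from
        // (or out to) other tests, and the directory is deleted
        // automatically when `dir` goes out of scope.
        let dir = tempdir().unwrap();

        let path = dir.path().join("config.json");
        std::fs::write(&path, "{}").unwrap();

        assert!(path.exists());
    }
}
```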
If a test runs successfully most of the time:

- Review the test logic.
- Add a `#[serial]` annotation to the test function (see the sketch below).
  - Requires the `serial_test` package in the `[dev-dependencies]` section of `Cargo.toml`.
  - If this makes the test work, it is probably sharing resources with another task (thread).
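A sketch of the annotation in use:

```rust
#[cfg(test)]
mod tests {
    use serial_test::serial;

    #[test]
    #[serial]
    fn test_modifies_shared_state() {
        // This test runs serially with the other #[serial] tests, so no
        // parallel test will observe the temporary environment change.
        std::env::set_var("KATA_TEST_MODE", "on");
        assert_eq!(std::env::var("KATA_TEST_MODE").unwrap(), "on");
        std::env::remove_var("KATA_TEST_MODE");
    }
}
```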
If a test works locally but fails in the CI, compare the following attributes of each environment (local and CI):

- The version of Rust being used.
- The hardware architecture.
- The number (and spec) of the CPUs.

If in doubt, look at the "test artifacts" attached to the failing CI run.
- Remember to check that the test runs locally:
  - As a non-privileged user.
  - As the `root` user (carefully!).
- Run the static checker on your changes:
  it checks formatting and many other things (see the commands below).
- Ask for help! ;)
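If you are not sure what the project's static checker wraps, the standard Rust tools cover the basics (the exact Makefile target may differ per project):

```bash
$ rustup component add rustfmt clippy
$ cargo fmt -- --check   # Formatting.
$ cargo clippy           # Lints and many other checks.
```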
What's wrong with this function?

```rust
fn foo(config: &Config, path_prefix: String, container_id: String, pid: String) -> Result<()> {
    let mut full_path = format!("{}/{}", path_prefix, container_id);

    let _ = remove_recursively(&mut full_path);

    write_number_to_file(pid, full_path);

    Ok(())
}
```
- No check that `path_prefix`, `container_id` and `pid` are not `""`.
- No check that `path_prefix` is absolute.
- No check that `container_id` does not contain slashes / contains only valid characters.
- The result of `remove_recursively()` is discarded.
- `remove_recursively()` may modify `full_path` without `foo()` knowing!
- Why is `pid` not numeric?
- No check to ensure the PID is positive.
- No check to recreate any directories in the original `path_prefix`.
- `write_number_to_file()` could fail, so why doesn't it return a value?
- The `config` parameter is unused.
Imagine if the caller managed to do this:

```rust
foo(config, "", "sbin/init", r#"#!/bin/sh\n/sbin/reboot"#);
```
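A hedged sketch of how the fixed-up function might look (`remove_recursively()` and `write_number_to_file()` are the hypothetical helpers from the example above, so this will not compile on its own):

```rust
use std::path::Path;

// Note: the unused `config` parameter has been dropped, `pid` is now an
// unsigned numeric type, and every failure is reported to the caller.
fn foo(path_prefix: &str, container_id: &str, pid: u32) -> Result<(), String> {
    if path_prefix.is_empty() || container_id.is_empty() {
        return Err("parameters must not be empty".to_string());
    }

    if !Path::new(path_prefix).is_absolute() {
        return Err(format!("path prefix must be absolute: {:?}", path_prefix));
    }

    if container_id.contains('/') {
        return Err(format!("invalid container ID: {:?}", container_id));
    }

    if pid == 0 {
        return Err("PID must be positive".to_string());
    }

    let full_path = format!("{}/{}", path_prefix, container_id);

    // Propagate errors rather than discarding them.
    remove_recursively(&full_path)?;
    write_number_to_file(pid, &full_path)?;

    Ok(())
}
```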
What makes this function difficult to test?

```rust
fn get_user_id(username: String) -> i32 {
    let line = grep_file(username, "/etc/passwd").unwrap();
    let mut fields = line.split(':');
    let uid = fields.nth(2).ok_or("failed").unwrap();
    uid.parse::<i32>().unwrap()
}
```
- Unhelpful error message ("failed").
- Panics on error! Return a `Result` instead!
- UIDs cannot be negative, so the function should return an unsigned value.
- Hard-coded filename.
This would be better:

```rust
const PASSWD_DB: &str = "/etc/passwd";

// Test code can now pass valid and invalid files!
fn get_user_id(filename: &str, username: String) -> i32 {
    // ...
}

let id = get_user_id(PASSWD_DB, username);
```
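The tests can then point the function at fixture files (a sketch; the fixture path is hypothetical):

```rust
#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_get_user_id() {
        // "testdata/passwd_valid" is a hypothetical fixture file in
        // passwd format, e.g. "root:x:0:0:root:/root:/bin/bash".
        let id = get_user_id("testdata/passwd_valid", "root".to_string());
        assert_eq!(id, 0);

        // With a Result-returning version of the function, an invalid
        // database file could exercise the error path too.
    }
}
```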
What's wrong with this test code?

```rust
let mut obj = Object::new();

// Sanity check
assert_eq!(obj.num, 0);
assert_eq!(obj.wibble, false);

// Test 1
obj.foo_method(7);
assert_eq!(obj.num, 7);

// Test 2
obj.bar_method(true);
assert_eq!(obj.wibble, true);
```
- The test code is "fragile":
  - The second test re-uses the object created in the first test.
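One way to remove the fragility (a sketch using the made-up `Object` from above) is to give each test its own freshly-created object:

```rust
#[test]
fn test_foo_method() {
    // A brand new object: no dependency on state left behind
    // by any other test.
    let mut obj = Object::new();

    obj.foo_method(7);
    assert_eq!(obj.num, 7);
}

#[test]
fn test_bar_method() {
    let mut obj = Object::new();

    obj.bar_method(true);
    assert_eq!(obj.wibble, true);
}
```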
- We need a GH action to run the unit tests.
  - It needs to fail PRs that decrease test coverage by "x%".