
feat: several nargo test improvements #6728

Merged
merged 39 commits from `ab/nargo-test-improvements` into `master`
Dec 12, 2024
Changes from 8 commits
Commits
39 commits
1c10349
Run tests according to the system's parallelism (for now)
asterite Dec 6, 2024
e4300ce
Run package tests (as a group) sequentially
asterite Dec 9, 2024
dd2d2b9
Extract `run_package_tests`
asterite Dec 9, 2024
6c85398
Add `--test-threads` option
asterite Dec 9, 2024
89d936c
Move output to receiver side
asterite Dec 9, 2024
60b52b8
Show overall "running" message after collecting tests
asterite Dec 9, 2024
82cc0ad
Show list of failed tests at the end
asterite Dec 9, 2024
368feaf
Merge branch 'master' into ab/nargo-test-improvements
TomAFrench Dec 9, 2024
bf71960
Update tooling/nargo_cli/src/cli/test_cmd.rs
asterite Dec 9, 2024
d16ea3b
clippy
asterite Dec 9, 2024
e04a2b8
Merge branch 'master' into ab/nargo-test-improvements
asterite Dec 9, 2024
63e8b7c
Merge branch 'master' into ab/nargo-test-improvements
asterite Dec 9, 2024
f0e98d2
Compile packages in parallel
asterite Dec 9, 2024
38b62b7
Merge branch 'master' into ab/nargo-test-improvements
asterite Dec 9, 2024
dffc883
Use `current_num_threads` instead of `max_num_threads`
asterite Dec 10, 2024
3bb2ef2
Run all tests in parallel
asterite Dec 10, 2024
8a3c0da
Introduce TestRunner to avoid passing too many arguments
asterite Dec 10, 2024
776f541
Simplify display test status/report
asterite Dec 10, 2024
27f4ceb
Add some comments
asterite Dec 10, 2024
4e1a758
Merge branch 'master' into ab/nargo-test-improvements
asterite Dec 10, 2024
9e81a31
Remove accidental test
asterite Dec 10, 2024
ab041c5
Capture print output and show it together with their tests
asterite Dec 10, 2024
1be0680
Show test duration for tests that took more than 30 seconds
asterite Dec 10, 2024
7abb6d7
Clarify who the output belongs to
asterite Dec 11, 2024
37dd81f
Use default value in option
asterite Dec 11, 2024
738534f
Remove comment
asterite Dec 11, 2024
fb4f4e8
package_test_results -> packages_tests
asterite Dec 11, 2024
d9446d5
Break if the receiver is closed
asterite Dec 11, 2024
6341604
Use BTreeMap so we show package output by package name
asterite Dec 11, 2024
6559b26
No need to track all test results
asterite Dec 11, 2024
94e53d2
Use `then_some`
asterite Dec 11, 2024
9693722
Clippy
asterite Dec 11, 2024
d5f8e2c
Show trailing "---" after stdout so the entire output is easier to see
asterite Dec 11, 2024
d89eb90
Merge branch 'master' into ab/nargo-test-improvements
asterite Dec 11, 2024
6add1d8
Let `collect_packages_tests` return `Result`
asterite Dec 11, 2024
a6ca811
Print package test count right before printing package tests
asterite Dec 11, 2024
8e40161
Use `catch_unwind`
asterite Dec 11, 2024
7967f33
Handle `panic!("...")` in catch_unwind handling
asterite Dec 11, 2024
cac485a
cargo fmt
asterite Dec 11, 2024
303 changes: 201 additions & 102 deletions tooling/nargo_cli/src/cli/test_cmd.rs
@@ -1,9 +1,15 @@
use std::{io::Write, path::PathBuf};
use std::{
collections::HashMap,
io::Write,
path::PathBuf,
sync::{mpsc, Mutex},
thread,
};

use acvm::{BlackBoxFunctionSolver, FieldElement};
use bn254_blackbox_solver::Bn254BlackBoxSolver;
use clap::Args;
use fm::FileManager;
use fm::{FileId, FileManager};
use nargo::{
insert_all_files_for_workspace_into_file_manager,
ops::TestStatus,
@@ -12,8 +18,11 @@ use nargo::{
};
use nargo_toml::{get_package_manifest, resolve_workspace_from_toml, PackageSelection};
use noirc_driver::{check_crate, CompileOptions, NOIR_ARTIFACT_VERSION_STRING};
use noirc_frontend::hir::{FunctionNameMatch, ParsedFiles};
use rayon::prelude::{IntoParallelIterator, ParallelBridge, ParallelIterator};
use noirc_frontend::{
hir::{FunctionNameMatch, ParsedFiles},
parser::ParserError,
ParsedModule,
};
use termcolor::{Color, ColorChoice, ColorSpec, StandardStream, WriteColor};

use crate::{cli::check_cmd::check_crate_and_report_errors, errors::CliError};
@@ -49,13 +58,22 @@ pub(crate) struct TestCommand {
/// JSON RPC url to solve oracle calls
#[clap(long)]
oracle_resolver: Option<String>,

/// Number of threads used for running tests in parallel
#[clap(long)]
test_threads: Option<usize>,
}

struct Test<'a> {
name: String,
runner: Box<dyn FnOnce() -> TestStatus + Send + 'a>,
}

pub(crate) fn run(args: TestCommand, config: NargoConfig) -> Result<(), CliError> {
let toml_path = get_package_manifest(&config.program_dir)?;
let default_selection =
if args.workspace { PackageSelection::All } else { PackageSelection::DefaultOrAll };
let selection = args.package.map_or(default_selection, PackageSelection::Selected);
let selection = args.package.clone().map_or(default_selection, PackageSelection::Selected);
let workspace = resolve_workspace_from_toml(
&toml_path,
selection,
@@ -77,31 +95,25 @@ pub(crate) fn run(args: TestCommand, config: NargoConfig) -> Result<(), CliError
None => FunctionNameMatch::Anything,
};

// Configure a thread pool with a larger stack size to prevent overflowing stack in large programs.
// Default is 2MB.
let pool = rayon::ThreadPoolBuilder::new().stack_size(4 * 1024 * 1024).build().unwrap();
let test_reports: Vec<Vec<(String, TestStatus)>> = pool.install(|| {
workspace
.into_iter()
.par_bridge()
.map(|package| {
run_tests::<Bn254BlackBoxSolver>(
&workspace_file_manager,
&parsed_files,
package,
pattern,
args.show_output,
args.oracle_resolver.as_deref(),
Some(workspace.root_dir.clone()),
Some(package.name.to_string()),
&args.compile_options,
)
})
.collect::<Result<_, _>>()
})?;
let test_report: Vec<(String, TestStatus)> = test_reports.into_iter().flatten().collect();
let num_threads = args
.test_threads
.unwrap_or_else(|| std::thread::available_parallelism().ok().map(Into::into).unwrap_or(1));
let mut test_reports = Vec::new();

for package in workspace.into_iter() {
let test_report = run_package_tests(
package,
&workspace_file_manager,
&parsed_files,
pattern,
&args,
&workspace,
num_threads,
)?;
test_reports.extend(test_report);
}

if test_report.is_empty() {
if test_reports.is_empty() {
match &pattern {
FunctionNameMatch::Exact(pattern) => {
return Err(CliError::Generic(
@@ -116,54 +128,120 @@ pub(crate) fn run(args: TestCommand, config: NargoConfig) -> Result<(), CliError
};
}

if test_report.iter().any(|(_, status)| status.failed()) {
if test_reports.iter().any(|(_, status)| status.failed()) {
Err(CliError::Generic(String::new()))
} else {
Ok(())
}
}

#[allow(clippy::too_many_arguments)]
fn run_tests<S: BlackBoxFunctionSolver<FieldElement> + Default>(
file_manager: &FileManager,
parsed_files: &ParsedFiles,
fn run_package_tests(
package: &Package,
file_manager: &FileManager,
parsed_files: &HashMap<FileId, (ParsedModule, Vec<ParserError>)>,
pattern: FunctionNameMatch<'_>,
args: &TestCommand,
workspace: &nargo::workspace::Workspace,
num_threads: usize,
) -> Result<Vec<(String, TestStatus)>, CliError> {
let package_name = package.name.to_string();
let tests = collect_package_tests::<Bn254BlackBoxSolver>(
file_manager,
parsed_files,
package,
pattern,
args.show_output,
args.oracle_resolver.as_deref(),
Some(workspace.root_dir.clone()),
package_name.clone(),
&args.compile_options,
)?;
let num_tests = tests.len();

let plural = if num_tests == 1 { "" } else { "s" };
println!("[{}] Running {num_tests} test function{plural}", package.name);

let mut test_report = Vec::new();

let (sender, receiver) = mpsc::channel();
let iter = Mutex::new(tests.into_iter());
thread::scope(|scope| {
// Start worker threads
for _ in 0..num_threads {
// Specify a larger-than-default stack size to prevent overflowing stack in large programs.
// (the default is 2MB)
thread::Builder::new()
.stack_size(4 * 1024 * 1024)
.spawn_scoped(scope, || loop {
// Get next test to process from the iterator.
let Some(test) = iter.lock().unwrap().next() else {
break;
};
let test_status = (test.runner)();

// It's fine to ignore the result of sending.
// If the receiver has hung up, everything will wind down soon anyway.
let _ = sender.send((test.name, test_status));
})
.unwrap();
}

for (test_name, test_status) in receiver.iter().take(num_tests) {
display_test_status(
&test_name,
&package_name,
&test_status,
file_manager,
&args.compile_options,
);
test_report.push((test_name, test_status));
}
});

display_test_report(&package_name, &test_report)?;

Ok(test_report)
}

#[allow(clippy::too_many_arguments)]
fn collect_package_tests<'a, S: BlackBoxFunctionSolver<FieldElement> + Default>(
file_manager: &'a FileManager,
parsed_files: &'a ParsedFiles,
package: &'a Package,
fn_name: FunctionNameMatch,
show_output: bool,
foreign_call_resolver_url: Option<&str>,
foreign_call_resolver_url: Option<&'a str>,
root_path: Option<PathBuf>,
package_name: Option<String>,
compile_options: &CompileOptions,
) -> Result<Vec<(String, TestStatus)>, CliError> {
package_name: String,
compile_options: &'a CompileOptions,
) -> Result<Vec<Test<'a>>, CliError> {
let test_functions =
get_tests_in_package(file_manager, parsed_files, package, fn_name, compile_options)?;

let count_all = test_functions.len();

let plural = if count_all == 1 { "" } else { "s" };
println!("[{}] Running {count_all} test function{plural}", package.name);

let test_report: Vec<(String, TestStatus)> = test_functions
.into_par_iter()
let tests: Vec<Test> = test_functions
.into_iter()
.map(|test_name| {
let status = run_test::<S>(
file_manager,
parsed_files,
package,
&test_name,
show_output,
foreign_call_resolver_url,
root_path.clone(),
package_name.clone(),
compile_options,
);

(test_name, status)
let test_name_copy = test_name.clone();
let root_path = root_path.clone();
let package_name_clone = package_name.clone();
let runner = Box::new(move || {
run_test::<S>(
file_manager,
parsed_files,
package,
&test_name,
show_output,
foreign_call_resolver_url,
root_path,
package_name_clone.clone(),
compile_options,
)
});
Test { name: test_name_copy, runner }
})
.collect();

display_test_report(file_manager, package, compile_options, &test_report)?;
Ok(test_report)
Ok(tests)
}

#[allow(clippy::too_many_arguments)]
@@ -175,7 +253,7 @@ fn run_test<S: BlackBoxFunctionSolver<FieldElement> + Default>(
show_output: bool,
foreign_call_resolver_url: Option<&str>,
root_path: Option<PathBuf>,
package_name: Option<String>,
package_name: String,
compile_options: &CompileOptions,
) -> TestStatus {
// This is really hacky but we can't share `Context` or `S` across threads.
@@ -198,7 +276,7 @@ fn run_test<S: BlackBoxFunctionSolver<FieldElement> + Default>(
show_output,
foreign_call_resolver_url,
root_path,
package_name,
Some(package_name),
compile_options,
)
}
@@ -220,60 +298,81 @@ fn get_tests_in_package(
.collect())
}

fn display_test_report(
fn display_test_status(
test_name: &String,
package_name: &String,
test_status: &TestStatus,
file_manager: &FileManager,
package: &Package,
compile_options: &CompileOptions,
test_report: &[(String, TestStatus)],
) -> Result<(), CliError> {
) {
let writer = StandardStream::stderr(ColorChoice::Always);
let mut writer = writer.lock();

for (test_name, test_status) in test_report {
write!(writer, "[{}] Testing {test_name}... ", package.name)
.expect("Failed to write to stderr");
writer.flush().expect("Failed to flush writer");

match &test_status {
TestStatus::Pass { .. } => {
writer
.set_color(ColorSpec::new().set_fg(Some(Color::Green)))
.expect("Failed to set color");
writeln!(writer, "ok").expect("Failed to write to stderr");
}
TestStatus::Fail { message, error_diagnostic } => {
writer
.set_color(ColorSpec::new().set_fg(Some(Color::Red)))
.expect("Failed to set color");
writeln!(writer, "FAIL\n{message}\n").expect("Failed to write to stderr");
if let Some(diag) = error_diagnostic {
noirc_errors::reporter::report_all(
file_manager.as_file_map(),
&[diag.clone()],
compile_options.deny_warnings,
compile_options.silence_warnings,
);
}
}
TestStatus::Skipped { .. } => {
writer
.set_color(ColorSpec::new().set_fg(Some(Color::Yellow)))
.expect("Failed to set color");
writeln!(writer, "skipped").expect("Failed to write to stderr");
}
TestStatus::CompileError(err) => {
write!(writer, "[{}] Testing {test_name}... ", package_name)
.expect("Failed to write to stderr");
writer.flush().expect("Failed to flush writer");

match &test_status {
TestStatus::Pass { .. } => {
writer
.set_color(ColorSpec::new().set_fg(Some(Color::Green)))
.expect("Failed to set color");
writeln!(writer, "ok").expect("Failed to write to stderr");
}
TestStatus::Fail { message, error_diagnostic } => {
writer
.set_color(ColorSpec::new().set_fg(Some(Color::Red)))
.expect("Failed to set color");
writeln!(writer, "FAIL\n{message}\n").expect("Failed to write to stderr");
if let Some(diag) = error_diagnostic {
noirc_errors::reporter::report_all(
file_manager.as_file_map(),
&[err.clone()],
&[diag.clone()],
compile_options.deny_warnings,
compile_options.silence_warnings,
);
}
}
writer.reset().expect("Failed to reset writer");
TestStatus::Skipped { .. } => {
writer
.set_color(ColorSpec::new().set_fg(Some(Color::Yellow)))
.expect("Failed to set color");
writeln!(writer, "skipped").expect("Failed to write to stderr");
}
TestStatus::CompileError(err) => {
noirc_errors::reporter::report_all(
file_manager.as_file_map(),
&[err.clone()],
compile_options.deny_warnings,
compile_options.silence_warnings,
);
}
}
writer.reset().expect("Failed to reset writer");
}

fn display_test_report(
package_name: &String,
test_report: &[(String, TestStatus)],
) -> Result<(), CliError> {
let writer = StandardStream::stderr(ColorChoice::Always);
let mut writer = writer.lock();

let failed_tests: Vec<_> = test_report
.iter()
.filter_map(|(name, status)| if status.failed() { Some(name) } else { None })
.collect();

if !failed_tests.is_empty() {
writeln!(writer).expect("Failed to write to stderr");
writeln!(writer, "[{}] Failures:", package_name).expect("Failed to write to stderr");
for failed_test in failed_tests {
writeln!(writer, " {}", failed_test).expect("Failed to write to stderr");
}
writeln!(writer).expect("Failed to write to stderr");
}

write!(writer, "[{}] ", package.name).expect("Failed to write to stderr");
write!(writer, "[{}] ", package_name).expect("Failed to write to stderr");

let count_all = test_report.len();
let count_failed = test_report.iter().filter(|(_, status)| status.failed()).count();
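Two later commits in the list (8e40161 "Use `catch_unwind`" and 7967f33, not included in this 8-commit view of the diff) wrap each test runner in `std::panic::catch_unwind` so a panicking runner is reported as a failure rather than unwinding into its worker thread. A rough sketch of that handling, assuming a `String`-producing runner — the function name and failure format here are illustrative, not the PR's:

```rust
use std::panic::{self, AssertUnwindSafe};

// Run a closure, converting a panic into a failure message instead of
// letting it unwind into the worker thread. `run_guarded` is an illustrative name.
fn run_guarded<F: FnOnce() -> String>(f: F) -> String {
    match panic::catch_unwind(AssertUnwindSafe(f)) {
        Ok(output) => output,
        Err(payload) => {
            // `panic!("...")` yields a `&str` payload; formatted panics yield a
            // `String`. Anything else stays an opaque `Box<dyn Any + Send>`.
            let message = if let Some(s) = payload.downcast_ref::<&str>() {
                (*s).to_string()
            } else if let Some(s) = payload.downcast_ref::<String>() {
                s.clone()
            } else {
                "test panicked".to_string()
            };
            format!("FAIL: {message}")
        }
    }
}

fn main() {
    // Silence the default panic hook so the caught panic doesn't print to stderr.
    panic::set_hook(Box::new(|_| {}));
    assert_eq!(run_guarded(|| "ok".to_string()), "ok");
    assert_eq!(run_guarded(|| panic!("boom")), "FAIL: boom");
}
```

Downcasting the payload to both `&str` and `String` is what "Handle `panic!(\"...\")` in catch_unwind handling" refers to: the two forms of `panic!` carry different payload types, and both should surface as readable failure messages.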