Stderr output from tracing running tests #1069
Hmmm... I think that having the
@da2ce7 I would add a new config option to set the tracing level for the tests in the TOML file, and still have the

Regarding having tracing on tests, I think it can be useful for development, but it is better if we just show an OK or FAIL result when running them. I will do some research on the topic anyway.
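For the config option, something along these lines could work. This is only a sketch: the `[logging]` section and `threshold` key are hypothetical names and would need to match the tracker's actual configuration schema.

```toml
# Hypothetical option: tracing level used when running the test suite.
# Values could mirror tracing's level filters: "off", "error", "warn", "info", "debug", "trace".
[logging]
threshold = "error"
```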
Hi @da2ce7 and @mario-nt, the reason those tests are returning a 500 error is to keep compatibility with the old API. Before migrating to Axum, the previous API returned a 500 for all these cases. I decided not to break the contract in the migration and wait until the new API. These are the cases in tests:
The tests are all from the API:
I think we should do what we agreed on in the meeting. We should:
For example, in this test:

```rust
#[tokio::test]
async fn should_not_authenticate_requests_when_the_token_is_empty() {
    INIT.call_once(|| {
        tracing_stderr_init(LevelFilter::ERROR);
    });

    let env = Started::new(&configuration::ephemeral().into()).await;

    let response = Client::new(env.get_connection_info())
        .get_request_with_query("stats", Query::params([QueryParam::new("token", "")].to_vec()))
        .await;

    assert_token_not_valid(response).await;

    env.stop().await;
}
```

We can change the test to something like:

```rust
use tracing::Level;
use tracing_subscriber::fmt::TestWriter;
use tracing_subscriber::prelude::*;

#[tokio::test]
async fn should_not_authenticate_requests_when_the_token_is_empty() {
    // Set a default subscriber just for this test instead of the global stderr one.
    let log_writer = TestWriter::new();
    let subscriber = tracing_subscriber::fmt()
        .with_writer(log_writer.clone())
        .with_max_level(Level::ERROR)
        .finish();
    let _guard = tracing::subscriber::set_default(subscriber);

    let env = Started::new(&configuration::ephemeral().into()).await;

    let response = Client::new(env.get_connection_info())
        .get_request_with_query("stats", Query::params([QueryParam::new("token", "")].to_vec()))
        .await;

    assert_token_not_valid(response).await;

    env.stop().await;

    // Ensure the log contains the expected error.
    let logs = log_writer.to_string();
    assert!(
        logs.contains("response failed, classification: Status code: 500 Internal Server Error"),
        "Expected error log was not found in the captured logs: {}",
        logs
    );
}
```

We can start with the most straightforward implementation and see later whether we can extract this into a test helper or move the logic to the

I would fix all tests with this pattern to remove the errors. There are only a few of them.
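One caveat worth checking: as far as I know, `tracing_subscriber::fmt::TestWriter` forwards output to libtest's capture mechanism rather than buffering it, so reading the logs back as a string (the `log_writer.to_string()` call above) may need a writer backed by a shared buffer instead. Below is a minimal sketch of such a helper under that assumption; `MemoryWriter` and `contents` are hypothetical names, not part of `tracing_subscriber`.

```rust
use std::io::{self, Write};
use std::sync::{Arc, Mutex};

use tracing::Level;
use tracing_subscriber::fmt::MakeWriter;

/// Hypothetical helper: an in-memory writer whose contents tests can read back.
#[derive(Clone, Default)]
struct MemoryWriter {
    buf: Arc<Mutex<Vec<u8>>>,
}

impl MemoryWriter {
    /// Returns everything written so far as a (lossily decoded) string.
    fn contents(&self) -> String {
        String::from_utf8_lossy(&self.buf.lock().unwrap()).into_owned()
    }
}

impl Write for MemoryWriter {
    fn write(&mut self, data: &[u8]) -> io::Result<usize> {
        self.buf.lock().unwrap().write(data)
    }

    fn flush(&mut self) -> io::Result<()> {
        Ok(())
    }
}

impl<'a> MakeWriter<'a> for MemoryWriter {
    type Writer = MemoryWriter;

    fn make_writer(&'a self) -> Self::Writer {
        // Every log line ends up in the same shared buffer.
        self.clone()
    }
}

#[test]
fn it_should_capture_logs_into_a_string() {
    let writer = MemoryWriter::default();

    let subscriber = tracing_subscriber::fmt()
        .with_writer(writer.clone())
        .with_max_level(Level::ERROR)
        .finish();

    // The subscriber is only the default for this test (thread-local, until the guard is dropped).
    let _guard = tracing::subscriber::set_default(subscriber);

    tracing::error!("boom");

    assert!(writer.contents().contains("boom"));
}
```

With a helper like this, the proposed test would call `writer.contents()` instead of `log_writer.to_string()` when writing the assertion.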
The error is this:
Hi @mario-nt, I'm also working on this. The
The errors come from tower_http's `TraceLayer`, which uses the `DefaultOnFailure` struct. We can customize the error message by calling `on_failure`:

```rust
#[allow(clippy::needless_pass_by_value)]
#[instrument(skip(tracker, access_tokens))]
pub fn router(tracker: Arc<Tracker>, access_tokens: Arc<AccessTokens>) -> Router {
    let router = Router::new();

    let api_url_prefix = "/api";

    let router = v1::routes::add(api_url_prefix, router, tracker.clone());

    let state = State { access_tokens };

    router
        .layer(middleware::from_fn_with_state(state, v1::middlewares::auth::auth))
        .route(&format!("{api_url_prefix}/health_check"), get(health_check_handler))
        .layer(CompressionLayer::new())
        .layer(SetRequestIdLayer::x_request_id(MakeRequestUuid))
        .layer(PropagateHeaderLayer::new(HeaderName::from_static("x-request-id")))
        .layer(
            TraceLayer::new_for_http()
                .make_span_with(DefaultMakeSpan::new().level(Level::INFO))
                .on_request(|request: &Request<axum::body::Body>, _span: &Span| {
                    let method = request.method().to_string();
                    let uri = request.uri().to_string();
                    let request_id = request
                        .headers()
                        .get("x-request-id")
                        .map(|v| v.to_str().unwrap_or_default())
                        .unwrap_or_default();

                    tracing::span!(
                        target: API_LOG_TARGET,
                        tracing::Level::INFO, "request", method = %method, uri = %uri, request_id = %request_id);
                })
                .on_response(|response: &Response, latency: Duration, _span: &Span| {
                    let status_code = response.status();
                    let request_id = response
                        .headers()
                        .get("x-request-id")
                        .map(|v| v.to_str().unwrap_or_default())
                        .unwrap_or_default();
                    let latency_ms = latency.as_millis();

                    tracing::span!(
                        target: API_LOG_TARGET,
                        tracing::Level::INFO, "response", latency = %latency_ms, status = %status_code, request_id = %request_id);
                })
                .on_failure(
                    |failure_classification: ServerErrorsFailureClass, _latency: Duration, _span: &Span| {
                        tracing::error!(target: API_LOG_TARGET, "response failed, classification: {failure_classification}");
                    },
                ),
        )
        .layer(SetRequestIdLayer::x_request_id(MakeRequestUuid))
        .layer(
            ServiceBuilder::new()
                // This middleware goes above `TimeoutLayer` because it will receive
                // errors returned by `TimeoutLayer`.
                .layer(HandleErrorLayer::new(|_: BoxError| async { StatusCode::REQUEST_TIMEOUT }))
                .layer(TimeoutLayer::new(DEFAULT_TIMEOUT)),
        )
}
```

I would override it even if we print the same message, to avoid depending on the third-party error message when writing assertions in tests. It's also easier to find the code when you see the error in the logs. The extra code:

```rust
.on_failure(
    |failure_classification: ServerErrorsFailureClass, _latency: Duration, _span: &Span| {
        tracing::error!(target: API_LOG_TARGET, "response failed, classification: {failure_classification}");
    },
),
```
I've opened a new PR, which is only one step toward fixing this issue.
There are new errors that were not included in the initial list, probably because they were added after opening this issue.
I've decided to create a new issue for adding assertions in tests for logged errors, and to close this issue with this PR, because the errors will not be shown anymore after merging it.
d11ab32 test: capture logs in tests (Jose Celano)

Pull request description:

Relates to:
- #1069
- #1148

This feature will be used in the future to write assertions about logs. It also changes when we show logs running tests. If you run tests with:

```console
cargo test
```

logs won't be shown. If you want to see the logs, you have to execute the tests with:

```console
cargo test -- --nocapture
```

ACKs for top commit:
josecelano: ACK d11ab32

Tree-SHA512: d4a11d899b7c0bd005c5e3b518eec89487ec12eac9535fcb5dca8684f1ea4075e706d51178df9e67d01b359c0221d0926e77159797d9d9c53083f4b56a9ee904
When you run the tests, you see some errors in stderr.

You can run the tests with `command 2>/dev/null`, but we should disable that output in tracing when running the tests.
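For example, assuming the usual cargo workflow, discarding stderr looks like this:

```console
cargo test 2>/dev/null
```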