I at least narrowed down why it fails erroneously. In http_client_asio.cpp, "handle_connect" is eventually called, and during (or perhaps after) that call it is expected to throw a timeout exception (std::errc::timed_out).
However, if the timeout occurs before "handle_connect" runs, the error_code passed to it has already been set to boost::asio::error::operation_aborted, and "Request canceled by user." is thrown instead.
Essentially, what I believe is happening is that the 500 microseconds hard-coded into the test is enough time for the test to be valid most of the time. However, in some cases (high load, or a slow CPU) the http_client "times out" before the point where it would throw the expected exception.
One can easily replicate this by adding a sleep to the "handle_resolve" function and then playing with the hard-coded timeout value in "connections_and_errors.cpp": if the timeout in connections_and_errors is greater than the sleep added to handle_resolve, the tests pass; if the timeout is shorter than the sleep, they fail every time because the operation_aborted exception is thrown.
The connections_and_errors:request_timeout_microsecond test is failing intermittently.
I'm using commit: 91f66c6 (v2.10.10)
If I loop the test 1000 times, it fails anywhere from 5% to 17% of the time on a Raspberry Pi2, and 0% to 0.7% of the time in an Ubuntu VM.
In the meantime, is it possible for me (on my local repo) to somehow mark this test to be ignored? I have a script that won't build the install package if any of the tests fail.