Run Julia tests with resource limits, with worker processes, and collate the results #24
Conversation
1ffffa9 to cfda204
Would bolding the text in the table for those files that somehow fail make sense? It might make it easier to find them in the table and get an overview of the "density" of failures.
Codecov Report

@@          Coverage Diff           @@
##          master     #24   +/-   ##
=====================================
  Coverage      0%      0%
=====================================
  Files          6       6
  Lines       1115    1117     +2
=====================================
- Misses      1115    1117     +2

Continue to review full report at Codecov.
In the long run, I agree, but even if we discount the "aborted" column, a sizable majority of tests have some problem that Base itself does not. So most lines would be bolded. (The only tests that may not need attention are those with zeros in the "Fails" and "Errors" columns.)
OK to merge? Given that I've advertised #13, I'd like to make this runnable by others.
Some tests cause segfaults, so this insulates the rest from a few bad apples. As a nice extra benefit, it allows speedup by parallelism.
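To make the isolation idea concrete, here is a minimal sketch (hypothetical code, not necessarily this package's actual interface) using the Distributed standard library: each test file runs on a freshly spawned worker, so a segfault or hang only kills that worker and the parent simply records the outcome.

```julia
using Distributed

# Hypothetical sketch: run each test file on its own short-lived worker so a
# crash only takes down that worker; the parent collates per-file outcomes.
function run_isolated(testfiles::Vector{String})
    results = Dict{String,Any}()
    for file in testfiles
        pid = addprocs(1)[1]                       # fresh worker per file
        try
            results[file] = remotecall_fetch(Base.include, pid, Main, file)
        catch err
            results[file] = err                    # test failure or dead worker
        finally
            pid in workers() && rmprocs(pid)       # clean up if it survived
        end
    end
    return results
end
```

Driving several such workers concurrently is what buys the parallel speedup.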
I now have the suspicion that anonymous functions are the overwhelming reason for the tests with …
Feel free to merge this and future PRs at will :)
I was just wondering how strongly you felt about the bold text suggestion.
This makes it much more feasible to run "all" of Julia's tests. The key advance is the ability to run a specified maximum number of statements in the interpreter, and abort early when that number is reached. Choosing a number of statements, rather than execution time, should make it easier to compare results on machines with different performance. On my laptop, the default settings run all of Julia's tests in < 5 minutes. This is not a sign that the interpreter is really efficient (it's not), just that it aborts a lot of slow tests.
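As a toy illustration of the statement budget (the names below are invented for this sketch, and the real interpreter counts statements inside function bodies rather than only top-level expressions), the abort logic amounts to incrementing a counter per statement and throwing once the cap is reached:

```julia
# Toy sketch of a statement budget: evaluate top-level statements one by one
# and abort deterministically once a fixed number of them has been executed.
struct BudgetExceeded <: Exception
    limit::Int
end

function eval_with_budget(m::Module, code::AbstractString; max_stmts::Int = 1_000_000)
    executed = 0
    pos = firstindex(code)
    while pos <= lastindex(code)
        stmt, pos = Meta.parse(code, pos)          # parse the next expression
        stmt === nothing && break                  # only trailing trivia left
        executed += 1
        executed > max_stmts && throw(BudgetExceeded(max_stmts))
        Core.eval(m, stmt)
    end
    return executed
end

# e.g. eval_with_budget(Main, read("some_test.jl", String); max_stmts = 10^6)
```

Because the cap is a statement count rather than a wall-clock timeout, the same test aborts at the same point on a fast machine and a slow one.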
Sample output is now visible in #13.