Optimize tests #27
Give either of these a try in a PR and we can discuss improvements. I suspect consolidating files would be the best approach. I split them up initially because I didn't want earlier syntax to influence later syntax; it's hard to test individual pieces if we have to worry about muddying across sections. I suspect your recent changes have helped with that. Maybe the kitchen sink file could be a new test file?
(1) I have some ideas for, and could make them as a PR. (2) would be a bit steep for me, as I have almost zero Rust knowledge at the moment - I've been wanting to learn Rust for years, but haven't had enough of a window to do it (and still don't yet, unfortunately).
Already done 😉 In fact, regarding consolidating test files, I was first going to check whether the addition of ...
Oh, wonderful. I missed that you added it to the test folder. Hell yeah, if you feel like there aren't any overlaps of syntax (places where earlier forms affect the syntax highlighting of the current form), cut out any test files that you think overlap.
On (2), do we know how much of the build time is Vim starting up and how much is the actual conversion? Probably worth trying to use ... If we DO end up having to parallelize it, one option is GNU parallel.
Good point. I'll look into approximate benchmarking of both ways.
... correct me if I'm wrong, but IIRC from one of the times I started trying to learn Rust, isn't parallelism one of the things Rust is particularly good at? So the test runner could itself perform the parallelism without any additional prerequisites?
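For reference, scoped threads have been in the standard library since Rust 1.63, so the runner could fan the conversions out with no extra crates at all. A rough sketch of the idea, where the `convert_case` helper and the hard-coded case list are hypothetical placeholders and the env setup is simplified compared to what the real runner does:

```rust
use std::process::Command;
use std::thread;

// Hypothetical stand-in for the runner's conversion step: run Vim with
// convert-to-html.vim for a single test case (env setup simplified).
fn convert_case(case: &str) {
    let status = Command::new("vim")
        .arg("-S")
        .arg("convert-to-html.vim")
        .env("CASE", case)
        .status()
        .expect("failed to launch vim");
    assert!(status.success(), "Vim failed on {case}: {status}");
}

fn main() {
    // Placeholder list; the real runner discovers its cases under tests/cases/.
    let cases = ["cases/recipes.just", "cases/strings.just"];

    // Scoped threads join automatically: every conversion has finished
    // by the time scope() returns, with no external crates involved.
    thread::scope(|s| {
        for case in &cases {
            s.spawn(move || convert_case(case));
        }
    });
}
```

`std::thread::available_parallelism()` is also in std, so capping the number of concurrent conversions at the (logical) core count wouldn't need a dependency either.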
And in trying to get a baseline benchmark... are we sure ANY of the slowness is due to "the use of vim's built-in ..."? This script:

```python
#!/usr/bin/env python3
# Quick-and-dirty benchmark: run convert-to-html.vim sequentially over every
# .just file given on the command line and report the total wall-clock time.
import os
import subprocess
import sys
import tempfile
import time

tmpdir = tempfile.mkdtemp()
print(tmpdir)

ts = time.time()
for fn in sys.argv[1:]:
    env = {
        'CASE': fn,
        # strip the ".just" extension and write the converted HTML into the temp dir
        'OUTPUT': os.path.join(tmpdir, os.path.basename(fn[:-5]) + '.output.html'),
    }
    subprocess.run(['vim', '-S', 'convert-to-html.vim'],
                   env=os.environ.copy() | env, check=True)
te = time.time()

print(f'Total time: {te - ts} seconds.')
```

... run like this ...

```
cd vim-just/tests
~/benchmark-tohtml.py cases/*.just
```

... is consistently reporting times around 1-2 seconds - for the entire test suite, converted sequentially. 😮 And inspecting the temporary directory, it did in fact convert everything to HTML as expected.

Sticking a bunch of extra ...

And this fixes it?

```diff
diff --git a/tests/src/main.rs b/tests/src/main.rs
index 916f22a..480cb4c 100644
--- a/tests/src/main.rs
+++ b/tests/src/main.rs
@@ -59,9 +59,8 @@ fn main() -> io::Result<()> {
     .env("CASE", case)
     .env("OUTPUT", &output)
     .env("HOME", env::current_dir().unwrap())
-    .output()
-    .unwrap()
-    .status;
+    .status()
+    .unwrap();
   if !status.success() {
     panic!("Vim failed with status: {}", status);
```
It improves performance so much that it alone may be enough to call this issue fixed, IF it's a good answer (I have no actual idea what I'm doing here 😨).
Oooooh interesting! Is `.output()` processing a bunch of text noise from Vim? Might be the reason. If that's reducing the time by that much, open a PR with it!
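That hypothesis is plausible: `Command::output()` redirects the child's stdout and stderr into pipes and buffers everything until the process exits, whereas `Command::status()` lets the child inherit the parent's stdin/stdout/stderr and only reports the exit code. A small illustrative sketch of the difference, using the same `vim -S convert-to-html.vim` invocation purely as an example rather than the runner's actual code:

```rust
use std::io;
use std::process::Command;

fn main() -> io::Result<()> {
    // `output()` waits for the child and collects its stdout/stderr into
    // in-memory buffers; the child's output streams are pipes, not a terminal.
    let out = Command::new("vim")
        .arg("-S")
        .arg("convert-to-html.vim")
        .output()?;
    println!("captured {} bytes of stdout", out.stdout.len());

    // `status()` waits for the child but inherits the parent's stdio, so
    // nothing is captured or buffered; we only get the exit status back.
    let status = Command::new("vim")
        .arg("-S")
        .arg("convert-to-html.vim")
        .status()?;
    println!("vim exited with: {status}");
    Ok(())
}
```

Since the runner only ever looked at the exit status anyway, switching to `.status()` drops the capture without losing anything.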
Improving the performance of the test suite was completed in #29.
This, and consolidating some of the existing test cases, are likely still worth doing - though no longer for performance reasons, so they're best addressed in another issue.
Currently it is rather slow to run the test suite. And it will get even slower as we add more test cases.
Two possible ideas for how we could speed it up:

1. Since each test file seems to take about the same amount of time, consider consolidating some of the existing test cases into fewer files.
2. Change the test runner to perform the `convert-to-html` runs in parallel (with # of parallel jobs = # of physical CPU cores?), and only after all outputs are generated, go through and `colordiff` each output against its expected output. The `colordiff` runs should stay sequential, to avoid any potential for confusion due to out-of-order output.

Maybe worth doing both?