Implement --watch #502
Conversation
👍 That is what I would expect. If I launch with watch and there are initial problems, I should just be able to fix them and save. What is the argument for the behavior in this PR?
I agree it should not be configurable, but is 10ms long enough? What happens in popular IDEs when you have multiple large files open and you save them all at once? The easiest solution might just be to go with 10ms and wait for reported problems.
Maybe try to detect if we are wrapped with NYC and issue a warning in watch mode?
RE: include/exclude patterns: might we just provide a single config option with defaults, with exclusions expressed in the same list?
No particular argument, that's why I brought it up :) Will change.
I don't know either. Note that the current implementation is somewhat naive in that it debounces again for the full 10ms if a change happened in the interval. Increasing the delay would make that hurt more, but some extra code complexity could deal with that. It'd be good if tests rerun within 150ms if possible though, so that it feels instantaneous.
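For illustration, a minimal sketch of the naive debounce described above (`rerunTests` and the wiring are hypothetical, not AVA's actual code):

```js
var DEBOUNCE_DELAY = 10;
var debounceTimer = null;

function onFileChanged(path) {
	// Naive debounce: every change pushes the rerun out by the full delay,
	// so a change arriving 9ms into the window restarts the 10ms wait.
	// This is why increasing the delay would hurt more.
	clearTimeout(debounceTimer);
	debounceTimer = setTimeout(function () {
		debounceTimer = null;
		rerunTests(); // hypothetical: kicks off the actual test run
	}, DEBOUNCE_DELAY);
}
```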
Yea.
I agree with excluding those, but note that you may have watch in one console and run coverage in another. I don't think we should bail if you run watch inside of nyc, or a CI environment for that matter.
Yes that was my thinking too. That's how the files are matched.
Currently the process exits if a test file cannot be run (an `AvaError` is thrown). Getting sensible log output for these scenarios is difficult. Ideally we see as much output as possible, ending with the number of passes, fails, rejections and exceptions, including the `AvaError` messages. I'm proposing to handle these errors without exiting. I'm adding a commit to this effect. Apologies if it's a tad controversial 😄
AvaError was intended to be used to avoid printing stack traces when they were unlikely to be helpful (i.e. the "no tests in file" or "forgot to require AVA" errors). I don't think this change will be that controversial.
```js
.catch(function (err) {
	// Don't swallow exceptions. Note that any expected error should already
	// have been logged.
	process.nextTick(function () {
```
Why in the next tick?
Also, `nextTick` is deprecated since 0.10.0.
Heh, old habits die hard. Will change to `setImmediate()`. It's needed to rethrow as an uncaught exception.
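For context, a minimal sketch of the pattern under discussion (the `api.run()` chain is assumed from the surrounding diff):

```js
api.run(files)
	.catch(function (err) {
		// Rethrowing synchronously here would merely reject this promise again.
		// Deferring the throw escapes the promise chain, so the error surfaces
		// as a genuine uncaught exception.
		setImmediate(function () {
			throw err;
		});
	});
```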
This looks very promising. I'm happy about how non-intrusive this is. Great work @novemberborn :)
How about …

Do you know why …?
The issue with that is that the following is ambiguous: `ava --watch src/*.js test/*.js`. Does it mean watch only `src/*.js`, or enable watch mode and run all those patterns?
That's why we define types here, to make it unambiguous. Also why I much prefer using `=`.
That won't help here. They are both strings.
I agree. But unless we parse args on our own, I don't think we can enforce that (nor do I think we should - it would confuse the people who prefer spaces).
Easier to explain with code:

```js
var minimist = require('minimist');
console.log(minimist(['ava', '--watch', 'src/*.js', 'test/*.js'], {string: ['watch']}));
/*
{
  _: ['ava', 'test/*.js'],
  watch: 'src/*.js'
}
*/
```
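For contrast (an illustrative addition, not from the thread): declaring the flag as a boolean makes minimist leave both globs as positional arguments, though that only works if `--watch` never takes a value:

```js
console.log(minimist(['ava', '--watch', 'src/*.js', 'test/*.js'], {boolean: ['watch']}));
/*
{
  _: ['ava', 'src/*.js', 'test/*.js'],
  watch: true
}
*/
```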
I get that, what I want to be able to do is run `ava --watch test/foo.js`: only run test/foo.js, but do it in watch mode. Having to specify the pattern both as the flag value and as the test argument would be annoying. I am probably going to set up what I want watched in `package.json` anyway. Maybe we add a `--source` flag for the watch patterns?
Ok, good point. Was just trying to not have to introduce yet another flag...
Will investigate.
Yes, I quite like `--source`.
OK I found the introduction of `rs`. It's muscle memory for me but I don't really care either way.
OK, status update. Summary of changes: …

Still to do: …

Future enhancements: …
Added …
Re https://github.com/sindresorhus/ava/pull/502/files#diff-4fdccd335c2e54ebb16284a24389ce67R294, I don't think we should wait for the previous tests to finish if new ones are in a queue. If a developer made a code update, they obviously don't care whether tests fail/pass before that last update. They want to see the immediate test log after their change, right? Impressive work, @novemberborn!
If I modify a test that is currently running perhaps I'd like that to be interrupted and restarted. We could see if there's a strong need for that. Alternatively AVA could simply start a second run of the test but I imagine that would make for some very confusing log output. Right now the API runs multiple test files and retains state during those runs. That state can only be reset after a run has finished, which is why the watcher has to wait. I don't think that's so bad, you'll keep an eye on your test output and wait for it to stop changing.
Thanks! 😄
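A rough sketch of the wait-then-rerun behaviour described above (names are illustrative, not AVA's internals):

```js
var activeRun = null;
var queuedFiles = [];

function scheduleRun(files) {
	if (activeRun) {
		// A run is in progress; remember the changes and rerun once it
		// finishes, after the API state has been reset.
		queuedFiles = queuedFiles.concat(files);
		return;
	}

	activeRun = api.run(files).then(function () {
		activeRun = null;
		if (queuedFiles.length > 0) {
			var next = queuedFiles;
			queuedFiles = [];
			scheduleRun(next);
		}
	});
}
```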
And finished the final to-do. Force-pushed to include some bugfixes. Aside from further feedback this is good to go 🚀
I meant, we could cancel the "old" test run (so that we no longer get results/callbacks/etc from it) and start a new one along with a new output.
Yea that'd be an option. Though that would probably preclude …
We can leave it as a feature request/improvement, so that this PR gets in faster ;) Don't want that amount of hard work to just wait around!
```js
util.inherits(Api, EventEmitter);
module.exports = Api;

Api.prototype._reset = function () {
```
Why not use a new `Api` instance each time instead of resetting?
Very good point.
Always prefix the test title when tests are run from specific files, even if only one file is run. This occurs when the watcher reruns tests. The log should disambiguate between not just the currently running tests but *all* tests that have run previously.
commondir doesn't strip the basename if only a single path is presented. This is an issue now that test titles include the file path even if only a single file is being tested.
I wrote common-path-prefix without knowing of commondir (npm search FTW). It explicitly excludes the base component so might be preferable over workarounds.
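To illustrate the difference (return values as described in the comments above; assumed, not verified against either package):

```js
var commondir = require('commondir');
var commonPathPrefix = require('common-path-prefix');

// commondir keeps the basename when given a single path:
commondir(['/projects/app/test/foo.js']);        // '/projects/app/test/foo.js'

// common-path-prefix excludes the base component:
commonPathPrefix(['/projects/app/test/foo.js']); // '/projects/app/test/'
```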
Continue running other tests and let the reporters log the exceptions. The final results should indicate that failures occurred. It's quite useful when using --watch to see output from other tests, even if a particular test could not be run. AvaErrors don't have stack traces so modify reporters to log them appropriately. Rethrow any remaining errors coming out of Api#run() asynchronously so they become uncaught exceptions. They should be due to AVA bugs anyway so don't have to be pretty.
Inspired by nodemon, rerun all tests when 'rs' is entered on stdin.
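A minimal sketch of that stdin trigger (not AVA's actual implementation; `rerunAllTests` is hypothetical):

```js
process.stdin.setEncoding('utf8');
process.stdin.on('data', function (data) {
	// Mirror nodemon: typing "rs" followed by enter reruns everything.
	if (data.trim() === 'rs') {
		rerunAllTests();
	}
});
```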
Bump tap to get beforeEach and promise support. Use proxyquire to stub chokidar. Install lolex to get a next() method to advance the clock. Sinon 1.17 bundles an older version for its fake timers. These tests yield 100% code coverage for watcher.js!
Users may need to fine-tune the files and sources patterns for efficient watching. This enables an 'ava:watcher' debug mode which reports when files are added, changed or unlinked.
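Presumably this is wired up via the `debug` package; a sketch of what such logging might look like (the event wiring is assumed), enabled by running with `DEBUG=ava:watcher`:

```js
var debug = require('debug')('ava:watcher');

watcher.on('all', function (event, path) {
	// Reports add/change/unlink events so users can tune their patterns.
	debug('detected %s of %s', event, path);
});
```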
--source provides match and ignore patterns for source files. The watcher will rerun tests when these source files change.
Watcher now uses an external package to control which directories it ignores by default.
Match any directory pattern, not just the pattern for the first directory in the path.
Force pushed some updates:
Any comments on these points?
I think you've got a solid public API in place. I think we should merge and open an issue regarding cleaning up the internals.
👍
✨ Amazing work on this @novemberborn. It turned out really good! 🍻
@novemberborn @jamestalmage Can either of you open issues for follow-up things to discuss?
@sindresorhus Would it be interesting to add it to the docs (readme.md)?
@forresst Yes, but undocumented for now as we want to make sure it's good before we get a gadzillion support issues for it.
💥!
Full list of follow-up issues:
(Feel free to open more for any issues I missed)
This is just fantastic work @novemberborn! Well done Sir!
Implement --watch
This PR implements `--watch`, see #70. Yay!

Enable watch mode with `--watch` or `-w`. Enter `rs\n` to rerun all tests. This is useful when you made a change that was not picked up.

There's a fair amount of changes and some unresolved issues, apologies for the length!
Test files can now be run more than once. This means the loggers (reporters) and the API have to be reset between runs. The `mini` reporter retains state, as does the API.

The API modified the `files` list when running. It now acts on a copy of the list. The default include and exclude patterns are now exposed on the API instance so the watcher can use them.

Incidentally these changes should make it possible for a 3rd-party API consumer to rerun tests without reinstantiating the API.
The new CLI flags are left undocumented for now.
Errors that occur during the initial run cause AVA to exit. Errors that occur during subsequent runs are logged but AVA does not exit. This is good as AVA picks up new test files as they are created (and might be empty). A case could be made though that `--watch` should simply never exit (unless `chokidar` is missing).

`--watch` logs an error and exits if `chokidar` is missing, like babel/babel#3307.

There is some debouncing to avoid rerunning tests prematurely. The delay is 10 milliseconds. If further changes are detected the watcher again delays by 10 milliseconds. Currently this delay is not configurable. We should probably keep it that way unless it becomes a problem.
The watcher waits for the previous test run to finish. Even though AVA supports parallel tests, the API does not support parallel test runs.
Before, when running a single test file, the test titles would not be prefixed with the file name. Now, when the watcher reruns specific tests, the test titles are always prefixed. The log should disambiguate between not just the currently running tests but all tests that have run previously. Note though that in the current implementation, when all tests are rerun the watcher doesn't force the prefix. This causes an inconsistency in case there is only a single test file. Maybe that should be changed.
The title prefix change uncovered a bug in `commondir`. It returns the full file path if called with only a single file. I've added a commit with a workaround but then swapped it for `common-path-prefix`, which handles this case fine.
which handles this case fine.chokidar
can be configured with include and exclude patterns. I've defaulted these topackage.json,**/*.js
and.git,node_modules,bower_components,.sass-cache
respectively. The latter pattern is used bynodemon
. We should expand on it a bit and put it in a separate package which can then also be used bynodemon
. Matchingpackage.json
should allow tests to rerun when dependencies are changed.I believe that the exclude patterns should be made configurable. E.g.
source-map-fixtures
has a test which writes to disk. When testing this branch it caused the tests to be rerun continuously. The same goes for the include pattern. Right now it assumes your source files end in*.js
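A sketch of the chokidar setup implied by those defaults (the option names are chokidar's; the exact wiring in this PR is assumed):

```js
var chokidar = require('chokidar');

var watcher = chokidar.watch(['package.json', '**/*.js'], {
	ignored: ['.git', 'node_modules', 'bower_components', '.sass-cache'],
	ignoreInitial: true // only react to changes, not the initial scan
});

watcher.on('all', function (event, path) {
	// Debounce, then rerun the affected tests.
});
```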
Putting patterns in a CLI flag is tricky though given that the `ava` invocation is supposed to end with a pattern. Perhaps we could use `--watch-pattern=""` for this case. The pattern should behave the same as the files pattern.

Similarly we could bikeshed over the input command that causes all tests to be rerun. I used `rs`, which I'm used to from `nodemon`. (And we could quibble over the feature itself as well, of course 😄)

One final observation: the API resolves tests recursively if the pattern matched a directory. This makes it harder to verify whether a changed file actually matches the files pattern. Perhaps we can require patterns to explicitly match files inside directories? I.e. change `test` to `test/**/*.js`.

I still need to update/add tests.
Also it'd be pretty cool to track which files tests depend on, so if one of those files is changed only the applicable tests are rerun. Perhaps after this lands 😉
Thanks for reading this far.