Run linters with configurable concurrency #149
Conversation
I've found SamVerschueren/listr#54 related to this. Should we instead submit a PR to it and use it here?
Depends on whether we want to expose the chunked execution to the user. It might be noisier to see the task being executed in multiple chunks; I would prefer the library to hide that from the user, but I am okay with it either way. Is there any advantage to using listr for concurrency? Also, there is already an open PR for this - SamVerschueren/listr#56
Looks like …
@sudo-suhas cool! Does this mean we can leverage it and simplify the code of this package?
That depends on a couple of things.
One thing is certain: in order not to hit the OS limitation on Windows, we definitely have to chunk the files. So the only changes, if any, would be in …
@okonet I'll try to clear up the terminology so that we are both on the same page.

```json
{
  "lint-staged": {
    "src/**/*.js": [
      "eslint --fix",
      "git add"
    ],
    "test/**/*.js": [
      "eslint --fix",
      "git add"
    ]
  }
}
```

So, like you said, … So why does this PR exist? Let us say in the glob pattern … I hope I got that across. If you want me to remove the …
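To make the chunking motivation above concrete, here is a minimal sketch (the file names, the chunk size, and the eslint command are illustrative, not the PR's actual code): instead of appending every staged path to a single command, which can blow past the OS command-length limit on Windows, the paths are split into chunks and the linter is invoked once per chunk.

```js
const chunk = require('lodash.chunk')
const execa = require('execa')

// Illustrative staged files and chunk size; in the PR the size comes from config.
const stagedPaths = ['src/a.js', 'src/b.js', 'src/c.js', 'src/d.js', 'src/e.js']
const chunkSize = 2

// Split the paths so no single command line grows too long.
const filePathChunks = chunk(stagedPaths, chunkSize)

// Run the linter once per chunk instead of once with all paths appended.
const runs = filePathChunks.map(paths => execa('eslint', ['--fix'].concat(paths)))

Promise.all(runs)
  .then(() => console.log('All chunks linted'))
  .catch(err => console.error(err.message))
```

With these made-up values, eslint is invoked three times with at most two files each instead of once with all five paths.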
I got it! Thanks for the detailed explanation. Would it be possible to have a comparison of the console output when using listr options versus this PR? Or can you explain what the main difference would be between doing this internally (this PR) and using Listr's functionality?
I have already included the console output the PR currently produces in the worst case (errors in multiple chunks). It is at the bottom of the PR description in a collapsed section. Note that if there are no errors, the console output will be exactly the same as the current master version. I'll try to do the changes for using listr to see how that looks as well.
@okonet I am finding it difficult to implement this with …

Console output (collapsed)
If I do delegate, then I can't print the success/error messages correctly.

code:

```js
'use strict'

const execa = require('execa')
const Listr = require('listr')
const findBin = require('./findBin')

module.exports = function runScript(commands, pathsToLint, packageJson, options) {
  const concurrent = options && options.subTaskConcurrency
    ? options.subTaskConcurrency
    : 4
  const renderer = options && options.renderer
    ? options.renderer
    : 'update'
  const lintersArray = Array.isArray(commands) ? commands : [commands]

  return lintersArray.map(linter => ({
    title: linter,
    task: () => {
      try {
        const tasks = findBin(linter, pathsToLint, packageJson, options)
          .map((res, idx) => ({
            title: `Chunk ${ idx }`,
            task: () => {
              const execaOptions = res.bin !== 'npm' && options && options.gitDir
                ? { cwd: options.gitDir } : {}
              return execa(res.bin, res.args, execaOptions)
            }
          }))

        return new Listr(tasks, {
          concurrent,
          renderer,
          exitOnError: false
        })
      } catch (err) {
        throw err
      }
    }
  }))
}
```
My only concern (besides the comments I left) is that findBin looks rather complex now. Do you think it would make more sense to refactor it so that it only returns the bin, and have a second module that is concerned with the arguments?
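As an illustration of that suggestion (module and function names here are hypothetical, not actual lint-staged code), the bin resolution and the argument building could live in two small, separately testable pieces:

```js
// Hypothetical sketch of the suggested split (not actual lint-staged modules).

// findBin: only decides which executable should run a given linter.
function findBin(linter, packageJson) {
  const binName = linter.split(' ')[0]
  const hasScript = packageJson.scripts && packageJson.scripts[binName]
  return hasScript ? 'npm' : binName
}

// buildArgs: only concerned with assembling the argument list.
function buildArgs(linter, pathsToLint, runViaNpm) {
  const parts = linter.split(' ')
  const binName = parts[0]
  const linterArgs = parts.slice(1)
  return runViaNpm
    ? ['run', '--silent', binName, '--'].concat(pathsToLint)
    : linterArgs.concat(pathsToLint)
}

// Example: a plain binary vs. an npm script.
const pkg = { scripts: { lint: 'eslint .' } }
console.log(findBin('eslint --fix', pkg))               // 'eslint'
console.log(buildArgs('eslint --fix', ['a.js'], false)) // ['--fix', 'a.js']
console.log(findBin('lint', pkg))                       // 'npm'
console.log(buildArgs('lint', ['a.js'], true))          // ['run', '--silent', 'lint', '--', 'a.js']
```

Each piece would then be easy to test in isolation, and runScript could keep the chunking concerns to itself.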
src/index.js (outdated)

```diff
@@ -27,10 +27,16 @@ cosmiconfig('lint-staged', {
  // result.config is the parsed configuration object
  // result.filepath is the path to the config file that was found
  const config = result.config

  const readConfigOption = (key, defaultValue) =>
```
I like the approach, but it would be great for this to be in a separate file and have tests. Then it can also be reused here: https://github.com/okonet/lint-staged/pull/149/files#diff-f7efc3e72253568bae00aa123c61fd95R41
src/runScript.js (outdated)

```js
module.exports = function runScript(commands, pathsToLint, packageJson, options) {
  const concurrency = options && options.subTaskConcurrency
```
If `subTaskConcurrency` is 0, it will use the fallback value 4, I believe. Please use the `readConfigOption` function here as well.
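For illustration, a `readConfigOption` along these lines would avoid the falsy-zero trap (a sketch of the idea, not necessarily the exact implementation that landed): the default is used only when the key is actually absent, so an explicit 0 comes through and can then be validated.

```js
// readConfigOption.js — sketch: fall back only when the option is missing,
// not whenever the configured value happens to be falsy.
module.exports = function readConfigOption(config, key, defaultValue) {
  if (config && typeof config[key] !== 'undefined') {
    return config[key]
  }
  return defaultValue
}

// readConfigOption({ subTaskConcurrency: 0 }, 'subTaskConcurrency', 4) -> 0 (not 4)
// readConfigOption({}, 'subTaskConcurrency', 4)                        -> 4
```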
@sudo-suhas okay, let's go with your solution but let's clean up a bit.
@okonet I am doing all the changes you requested (including …). Regarding the …
I'm not sure about the maximum chunk size. I'd leave it to you :)
@okonet If you can help me a little on how to fix the failing tests, we are just about done.
Hmm, I honestly don't know how you should change the test. It seems it has to do with …
The problem is due to the mocks. When I run the tests, I get the following error message multiple times:
This is originating from this line, which means that the mocked execa does not return a Promise when called. I tried looking at the jest docs but am unable to figure out how this should be fixed. Another problem is that the test assertions run before the Promise returned by pMap resolves. I tried waiting for the promise from …

No longer relevant (collapsed)

```js
// apologies for the dirty code
it('should work with multiple commands', async () => {
  const res = runScript(['test', 'test2'], ['test.js'], packageJSON)
  expect(res.length).toBe(2)
  expect(res[0].title).toBe('test')
  expect(res[1].title).toBe('test2')

  expect(res[0].task()).toBeAPromise()
  try { await res[0].task() } catch (ignore) {}
  expect(mockFn.mock.calls.length).toEqual(1)
  expect(mockFn.mock.calls[0]).toEqual(
    ['npm', ['run', '--silent', 'test', '--', 'test.js'], {}]
  )

  expect(res[1].task()).toBeAPromise()
  try { await res[0].task() } catch (ignore) {}
  expect(mockFn.mock.calls.length).toEqual(2)
  expect(mockFn.mock.calls[1]).toEqual(
    ['npm', ['run', '--silent', 'test2', '--', 'test.js'], {}]
  )
})
```
If I log the contents of …

EDIT: So now the tests are not failing, but the mock needs to be fixed. Then I can remove the ugly hack for ignoring the error.
Codecov Report

```diff
@@            Coverage Diff             @@
##           master     #149     +/-   ##
=========================================
+ Coverage   89.74%     100%   +10.25%
=========================================
  Files           3        5        +2
  Lines          39       65       +26
  Branches        5        8        +3
=========================================
+ Hits           35       65       +30
+ Misses          4        0        -4
```

Continue to review full report at Codecov.
Had to set up Babel (for `regeneratorRuntime`/async support on node 4) with either:

```json
{
  "presets": [
    ["env", {
      "targets": {
        "node": "current"
      }
    }]
  ],
  "plugins": [
    ["transform-runtime", {
      "helpers": false,
      "polyfill": false,
      "regenerator": true
    }]
  ]
}
```

or

```json
{
  "presets": ["es2015", "stage-0"],
  "plugins": [
    ["transform-runtime", {
      "helpers": false,
      "polyfill": false,
      "regenerator": true
    }]
  ]
}
```
I've just merged #167 that should actually fix it AFAIK. Can you try to rebase on master and see if it works?
I'm also doing a pretty big refactoring to be able to add more test coverage (using Jest snapshots for the stdout): #141. Do you think it would make sense to merge mine and resolve conflicts in this PR? Or merge yours and leave me with it? ;)
I did the rebase (not yet pushed) but both problems still exist.

npm test ERR output (collapsed)

I am going to defer to your better judgement here. I have only recently started making pull requests and am not entirely sure how we would go about either option. However, I would be more than happy to read up and do whatever is needed, like I did for the rebase.
Let's go with a more bullet proof chunk size calculation then. How can I be of help with execa? Here is the mock for it: https://github.com/okonet/lint-staged/blob/master/test/__mocks__/execa.js — it is just an empty function. You might want to modify it to return a Promise.
(force-pushed from 41fdf18 to 74385f9)
@okonet I had tried modifying the mock before. I tried a few things again; here is the list of things which did not help:

```js
jest.genMockFromModule('execa')
const execaMock = jest.fn()
// These still fail on .catch
// execaMock.mockImplementation(() => Promise.resolve(true))
// execaMock.mockReturnValue(Promise.resolve(true))
module.exports = execaMock // also fails on .catch
```

```js
jest.genMockFromModule('execa')
module.exports = jest.fn(() => Promise.resolve(true))
```
(force-pushed from 74385f9 to c22f6f5)
package.json
- Add lodash.chunk, p-map dependency

calcChunkSize
- Add helper function for bullet proof chunk size calculation
- Add calcChunkSize.spec.js with tests

findBin
- Modify signature, remove paths from input
- Return {bin, args}, file paths should be appended before passing to execa
- Modify tests to match new signature

index
- Use readConfigOption for reading option concurrent
- Pass config to runScript options for reading subTaskConcurrency, chunkSize

readConfigOption
- Add generic helper function for reading config option
- Add readConfigOption.spec.js with tests

runScript
- Use p-map to run runnables returned from findBin with configured concurrency
- Use readConfigOption for reading config, subTaskConcurrency, chunkSize
- Default chunkSize to Number.MAX_SAFE_INTEGER to minimise number of chunks
- Default subTaskConcurrency to 2
- Use lodash.chunk to generate file path chunks
- Cache execaOptions and use Object.assign for cloning
- Combine findBin.args and file paths before passing to execa

- Update runScript.spec.js to always pass files as an array to `runScript`
- Setup babel-preset-env + transform-runtime for async in node 4

Closes lint-staged#147
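The chunk-size idea from the changes above can be sketched roughly like this (an approximation for illustration, not necessarily the exact helper that landed): spread the staged files across the concurrent sub-tasks, but never exceed the configured maximum chunk size.

```js
const chunk = require('lodash.chunk')

// Approximation: pick a chunk size that spreads files evenly across
// subTaskConcurrency sub-tasks, capped at the configured chunkSize.
function calcChunkSize(files, chunkSize, subTaskConcurrency) {
  const evenSpread = Math.ceil(files.length / subTaskConcurrency)
  return Math.max(1, Math.min(evenSpread, chunkSize))
}

const files = ['a.js', 'b.js', 'c.js', 'd.js', 'e.js']

// With the defaults mentioned above (chunkSize = Number.MAX_SAFE_INTEGER,
// subTaskConcurrency = 2) five files end up in two chunks of 3 and 2.
const size = calcChunkSize(files, Number.MAX_SAFE_INTEGER, 2) // -> 3
console.log(chunk(files, size)) // [ ['a.js', 'b.js', 'c.js'], ['d.js', 'e.js'] ]
```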
@okonet I have rebased again and was able to get the tests and mock working thanks to the changes in the previous PR. Can you please review once more? I would really like to do the final changes (if any) and merge this in.
LGTM! It would be great if you could remove the commented line. Otherwise ready to merge! Great work!
test/runScript.spec.js (outdated)

```js
try {
  await taskPromise
} catch (err) {
  // expect(err).toBeInstanceOf(Error)
```
I think this just got forgotten here.
My bad..
```js
}

return pMap(filePathChunks, mapper, { concurrency })
  .catch((err) => {
```
This isn't covered with tests. Should we add one?
I'd have to add a mock right?
@sudo-suhas great work! Thanks for taking the time to implement this.
From the TODO, there are still some documentation bits missing. Could you please add them before I merge?
Is this ok?
@sudo-suhas looks good to me!
@okonet All done!
Amazing work! Thanks!
@sudo-suhas it looks like the concurrency is causing issues when used with … I can't remember all the details, but can the default …
Changelog
- Add `calcChunkSize.js` for bullet proof chunk size calculation
- Modify `findBin` to use `calcChunkSize.js` and return array of runnables
- `index.js`: `subTaskConcurrency` and `chunkSize` to be passed to `runScript`
- `runScript`: …
- Add `calcChunkSize.spec.js` for tests
- `findBin.spec.js`: `findBin` with `{ bin, args }` as returned value
- Update `runScript.spec.js` to always pass files as an array to `runScript`
Why p-map

Although Listr supports a `concurrency` option, it is just a switch. We cannot control the number of concurrent things we want to run at a time, so I have used p-map for this. It also avoids having to include an entire Promise library implementation.
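To illustrate the difference (the chunks and the command below are made up for the example): p-map takes a numeric `concurrency`, so only that many chunk promises are in flight at once, which the boolean switch described above cannot do.

```js
const pMap = require('p-map')
const execa = require('execa')

// Made-up chunks of staged file paths.
const filePathChunks = [
  ['a.js', 'b.js'],
  ['c.js', 'd.js'],
  ['e.js']
]

// The mapper lints one chunk; p-map keeps at most `concurrency`
// of these promises pending at any given time.
const mapper = paths => execa('eslint', ['--fix'].concat(paths))

pMap(filePathChunks, mapper, { concurrency: 2 })
  .then(() => console.log('all chunks linted'))
  .catch(err => console.error(err.message))
```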
TODO

- … `subTaskConcurrency` and `chunkSize`
- … `subTaskConcurrency` in both `index.js` as well as `runScript.js`. Similarly for `chunkSize` in `index.js` and `calcChunkSize.js`. Is there a better way?
- … `chunkSize` and `subTaskConcurrency`
- `regeneratorRuntime` on node 4

Notes
The `pathsToLint` which is passed to `runScript` and subsequently to `findBin` should always be an array, and the tests have been updated for this. I tested locally when only one file was staged, and even in that case it was passed as an array. Is there a situation where it will be passed as a string?

Do we need to validate the option `subTaskConcurrency`? It is expected to be a number greater than or equal to 1.

Since we are splitting the lint task into chunks, if multiple chunks have errors, the error report will be separate. The PR already ensures that we don't terminate on the first rejection, but the error report is slightly non-ideal. Extreme case example (intentionally simulated):
Error report (collapsed)
Closes #147