React test suite addition attempt 2 #18
Conversation
Thinking of adding a babel transform so that we can selectively turn tests on. Would it be useful, @jquense?
    module.exports.clone = function(url, dirname) {
      return new Promise((resolve, reject) => {
        exec(`git clone --depth 1 ${url} ${dirname}`, error =>
          error ? reject(error) : resolve());
      });
    };
i'd use the execa package and fs-extra for promise APIs (easier than rolling our own every time)
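A rough sketch of that suggestion, assuming execa and fs-extra were added as dependencies (neither is part of this PR):

    // Sketch: clone() rewritten on top of execa's promise API instead of a
    // hand-rolled Promise around child_process.exec.
    const execa = require('execa');
    const fs = require('fs-extra');

    module.exports.clone = async function(url, dirname) {
      await fs.ensureDir(dirname); // fs-extra's promise-based fs helpers
      await execa('git', ['clone', '--depth', '1', url, dirname]);
    };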
I usually avoid third-party packages if the requirements are trivial :)
Why maintain the code tho? That's the advantage of the big ecosystem :)
True :)
    const babel = require('babel-core');
    const path = require('path');

    const babelOptions = {
i don't think you need a custom transformer here; the root .babelrc will be picked up and will work. As it is, Jest is only running the files in __tests__, so there isn't a need to filter again in the babel transformer, right?
Actually, there's no filter; L16 is just leftover code. I'll remove it.
    @@ -0,0 +1,12 @@
    const { clone } = require('./utils');

    const repoURL = 'git@github.com:facebook/react.git';
I'd maybe clone based on a tag as well. I don't know that we want to be testing against master all the time; better to test against the tests for a specific version, e.g. 16.2.0.
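For illustration, a hypothetical tag-pinned variant of the clone helper; git clone's --branch flag accepts tags as well as branches, so the clone stays shallow:

    // Sketch only: pin the shallow clone to a release tag instead of master.
    module.exports.cloneTag = function(url, tag, dirname) {
      return new Promise((resolve, reject) => {
        exec(`git clone --branch ${tag} --depth 1 ${url} ${dirname}`, error =>
          error ? reject(error) : resolve());
      });
    };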
I'm worried it'll slow down the cloning. Just clarifying, what does the workflow look like again? How often will we be running this script?
I'd think we'd only run it in CI or as one-offs, probably. I don't think cloning by tag would affect the speed tho (could be wrong tho)
Clone just the latest tag? I would like to know your thoughts on this.
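If "just the latest tag" were the target, discovering it might look like the sketch below; note that git ls-remote lists refs in lexicographic order, so a real implementation would want a proper version sort:

    const { execSync } = require('child_process');

    // Hypothetical helper: list the remote's tags and take the last one.
    function latestTag(url) {
      return execSync(`git ls-remote --tags ${url}`)
        .toString()
        .trim()
        .split('\n')
        .map(line => line.split('refs/tags/')[1])
        .filter(tag => tag && !tag.endsWith('^{}')) // drop peeled tag refs
        .pop();
    }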
sounds cool, though I'm not sure I understand what you mean by using a babel transform. Can you explain a bit what you mean?
So there are about 529 tests (if I've got my filters right) that touch react-dom, and most of them fail due to things like the lack of EventPluginHub etc., which we don't support right now. I thought we would need an explicit way of turning tests from upstream on and off, which I was looking to achieve by writing a babel plugin that filters all the test blocks that haven't been explicitly turned on.
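For concreteness, a rough sketch of the kind of plugin meant here (not code from this PR; allowed-tests.json is a made-up file): rewrite it(...) to it.skip(...) unless the test's name is on an allow list.

    // Hypothetical babel plugin: skip upstream tests unless allow-listed.
    const allowed = new Set(require('./allowed-tests.json'));

    module.exports = function({ types: t }) {
      return {
        visitor: {
          CallExpression(path) {
            if (!path.get('callee').isIdentifier({ name: 'it' })) return;
            const [name] = path.node.arguments;
            if (t.isStringLiteral(name) && !allowed.has(name.value)) {
              // it('...', fn) -> it.skip('...', fn)
              path.node.callee = t.memberExpression(
                t.identifier('it'),
                t.identifier('skip')
              );
            }
          },
        },
      };
    };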
So in my mind, having the failing tests is probably a helpful metric. I'm sort of thinking of these more like code coverage than a real test suite, i.e. a higher percentage of passed tests is clearly better, and maybe if that drops we should see why, but it might not be a CI blocker, for instance. There is a bunch of stuff we'd need to mock or transform still, like warnings. In most cases, the asserts on warnings are going to fail just because we don't throw the same error. The other major case is tests that don't use the public API (e.g. require the event hub); those should be decreasing regularly as the React folks change them over.
    roots: ['<rootDir>'],
    moduleNameMapper,
    testRegex:
      'test/react-test-suite/packages/react-dom/src/__tests__/.*\\.js$',
theoretically there are tests in react-shared as well that are pertinent, but i'm not sure it's worth trying to figure out which ones are.
More to the question, i do think we'll probably want to mark some tests as "skip" or "ignore" manually (maybe via a JSON file with test names?). We might be able to use Jest's options to do that tho, vs a babel plugin.
Exactly what I have in mind, except for the failing part: I would prefer skipped as opposed to failures. I need to ensure that tests we know for sure are supposed to pass don't fail.
This I have avoided with the filter that config.build.js does in the upstream repo.
That's exactly the whole idea behind the babel plugin. So can we do that without the plugin, just using Jest? I didn't know about this, hence I thought a transform would be required.
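One possible way to get the skip list with Jest alone, no babel plugin: a setup file that wraps the global it, wired up through Jest's setupTestFrameworkScriptFile option (the era-appropriate name for the setup-file hook). skipped-tests.json is a made-up file name:

    // Hypothetical setup file: skip any upstream test whose name appears
    // in the JSON skip list, by wrapping Jest's global `it`.
    const skipped = new Set(require('./skipped-tests.json'));

    const originalIt = global.it;
    const wrappedIt = (name, ...rest) =>
      skipped.has(name) ? originalIt.skip(name, ...rest) : originalIt(name, ...rest);

    // Preserve the modifiers upstream tests may use.
    wrappedIt.skip = originalIt.skip;
    wrappedIt.only = originalIt.only;
    global.it = wrappedIt;
    global.test = wrappedIt;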
I think this is an admirable goal; i'm a bit concerned tho that managing that will get out of hand pretty fast. With 500+ tests, keeping track of which should or should not pass is probably not feasible. For stuff that is a "must", it may be more manageable to make sure our own tests touch that functionality. I think trying to get more value out of the react-dom suite beyond "more or less tests pass/fail" is going to be a fairly time-consuming project in its own right. Perhaps we start with something simpler and see what sort of value we get out of it? Like checking a current run against the last run, with a % of tests failed/passed vs past runs?
@jquense In that case, can you run a couple of tests individually and see if they are failing for the right reasons? What you are asking for is almost done. Or rather, I'd like to know what I've missed.
How do we go about this? I'm guessing custom Jest reporters?
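A minimal sketch of that idea, assuming the file is registered through the reporters array in the Jest config (the file name is hypothetical):

    // pass-rate-reporter.js: print the suite's pass rate so successive
    // runs can be compared against past ones.
    class PassRateReporter {
      onRunComplete(contexts, results) {
        const { numTotalTests, numPassedTests } = results;
        const rate = ((numPassedTests / numTotalTests) * 100).toFixed(1);
        console.log(`react-dom suite: ${numPassedTests}/${numTotalTests} passing (${rate}%)`);
      }
    }

    module.exports = PassRateReporter;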
This shouldn't be the case (if you mean lack of internal modules); we explicitly mark test cases with internal modules as internal. I'd expect most to fail due to missing warnings. But we'll be adjusting the way we assert the warnings in a way that makes it easier for you to ignore those (facebook/react#11786).
I'll take a look again and see why else the tests are failing.
I did add them to the ignore path and don't think they are running. Let me see if any other test is invoking such internal modules. @gaearon What do you think about the feasibility of the whole idea? How should we be doing this? The Fiber rewrite would also have faced a similar situation: ensure all tests pass for compatibility, but most of them fail in the beginning. How was that managed? I remember there were two modes for running the tests, one with a different set of feature flags turned on, I think?
To be honest, for now I'd just do this manually. Pick tests that seem important, copy-paste and massage them to work. We can later look at whether we could unify them better.
@jquense Shall we revisit this sometime later? We can close it, maybe.
Closing this for now. We can reopen it if we get better ideas.