
React test suite addition attempt 2 #18

Closed
wants to merge 3 commits

Conversation

ManasJayanth
Collaborator

Based on comment

@ManasJayanth ManasJayanth changed the title React test suite addition attempt 2 [WIP] React test suite addition attempt 2 Dec 26, 2017
@ManasJayanth
Collaborator Author

Thinking of adding a Babel transform so that we can selectively turn tests on. Would it be useful, @jquense?


module.exports.clone = function(url, dirname) {
  return new Promise((resolve, reject) => {
    exec(`git clone --depth 1 ${url} ${dirname}`, error => {
Owner

I'd use the execa package and fs-extra for promise APIs (easier than rolling our own every time).
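For reference, a minimal sketch of what the suggested version might look like (assuming execa's promise-returning API; this is not code from the diff):

```js
// Sketch only: the clone helper rewritten with execa, which returns a
// Promise natively, instead of wrapping child_process.exec by hand.
const execa = require('execa');

module.exports.clone = function(url, dirname) {
  return execa('git', ['clone', '--depth', '1', url, dirname]);
};
```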

Collaborator Author

I usually avoid third-party packages if the requirements are trivial :)

Owner

Why maintain the code, though? That's the advantage of the big ecosystem :)

Collaborator Author

True :)

const babel = require('babel-core');
const path = require('path');

const babelOptions = {
Owner

I don't think you need a custom transformer here; the root .babelrc will be picked up and will work. As it is, Jest is only running the files in __tests__, so there isn't a need to filter again in the Babel transformer, right?

Collaborator Author

Actually, there's no filter. L16 is just leftover code. I'll remove it.

@@ -0,0 +1,12 @@
const { clone } = require('./utils');

const repoURL = 'git@github.com:facebook/react.git';
Owner

I'd maybe clone based on a tag as well. I don't know that we want to be testing against master all the time; better to test against the tests for a specific version, e.g. 16.2.0.

Collaborator Author

I'm worried it'll slow down the cloning. Just clarifying: what does the workflow look like again? How often will we be running this script?

Owner

I'd think we'd only run it in CI or as one-offs, probably. I don't think cloning by tag would affect the speed, though (could be wrong).

Collaborator Author

Clone just the latest tag?
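For what it's worth, git clone accepts a --branch flag that also takes tag names and combines with --depth 1, so cloning a pinned release should stay a shallow, single-ref fetch. A sketch, with the tag defaulting to the version mentioned above:

```js
// Sketch: shallow-clone a specific tag; --branch works with tags as well as
// branches, so this shouldn't be slower than cloning master.
const execa = require('execa');

module.exports.cloneTag = function(url, dirname, tag = 'v16.2.0') {
  return execa('git', ['clone', '--depth', '1', '--branch', tag, url, dirname]);
};
```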

@ManasJayanth
Collaborator Author

> Thinking of adding a Babel transform so that we can selectively turn tests on. Would it be useful, @jquense?

I would like to know your thoughts on this.

@jquense
Owner

jquense commented Jan 1, 2018

Sounds cool, though I'm not sure I understand what you mean by using a Babel transform. Can you explain a bit more what you mean?

@ManasJayanth
Collaborator Author

So there are about 529 tests (if I've got my filters right) that touch react-dom, and most of them fail due to things like the lack of EventPluginHub, etc., which we don't support right now. I thought we would need an explicit way of turning tests from upstream on and off.

I was looking to achieve this by writing a Babel plugin that filters all the it and describe calls against a list of tests we've decided to run (this list could be a list of strings that appear in the tests' descriptions) and transforms the rest into xit and xdescribe (or maybe removes them altogether; we can take a call on this), so that they get skipped and the test results show what percentage of compatibility we have with upstream.
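To make the idea concrete, here is a minimal sketch of such a plugin; the plugin name and the allowedTitles option are hypothetical, not code from this PR:

```js
// Hypothetical Babel plugin: rename any it/describe call whose title does not
// match an allowlist to xit/xdescribe, so Jest reports those tests as skipped.
module.exports = function skipUnlistedTests() {
  return {
    visitor: {
      CallExpression(path, state) {
        const callee = path.get('callee');
        if (!callee.isIdentifier()) return;

        const name = callee.node.name;
        if (name !== 'it' && name !== 'describe') return;

        const [title] = path.node.arguments;
        if (!title || title.type !== 'StringLiteral') return;

        // allowedTitles: assumed plugin option listing substrings that the
        // enabled tests contain in their descriptions.
        const allowed = (state.opts.allowedTitles || []).some(s =>
          title.value.includes(s)
        );
        if (!allowed) callee.node.name = `x${name}`;
      },
    },
  };
};
```

Renaming describe calls as well would let a whole suite be toggled with a single allowlist entry.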

@jquense
Owner

jquense commented Jan 1, 2018

So in my mind, having the failing tests is probably a helpful metric. I'm sort of thinking of these more like code coverage than a real test suite, i.e. a higher percentage of passed tests is clearly better, and maybe if that drops we should see why, but it might not be a CI blocker for instance.

There's a bunch of stuff we'd need to mock or transform still, like warnings. In most cases, the warning asserts are going to fail just because we don't throw the same error. The other major case is tests that don't use the public API (e.g. those that require the event hub). Those should be decreasing regularly as the React folks change them over.

roots: ['<rootDir>'],
moduleNameMapper,
testRegex:
  'test/react-test-suite/packages/react-dom/src/__tests__/.*\\.js$',
Owner

Theoretically there are tests in react-shared as well that are pertinent, but I'm not sure it's worth trying to figure out which ones are.
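If the shared tests ever do look worth running, the regex above could presumably be widened; a hypothetical variant, assuming the upstream layout stays packages/&lt;name&gt;/src/__tests__:

```js
testRegex:
  'test/react-test-suite/packages/(react-dom|shared)/src/__tests__/.*\\.js$',
```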

@jquense
Owner

jquense commented Jan 1, 2018

More to the question: I do think we'll probably want to mark some tests as "skip" or "ignore" manually (maybe via a JSON file with test names?). We might be able to use Jest's options to do that, though, vs. a Babel plugin.
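One Jest-only way to do this, sketched under the assumption of a hypothetical enabled-tests.json allowlist: build a regex from the listed names and pass it to Jest's --testNamePattern flag, which runs matching tests and skips the rest.

```js
// Hypothetical runner script; enabled-tests.json is an assumed file holding
// an array of test names, e.g. ["should render a div", ...].
const { execFileSync } = require('child_process');
const allowed = require('./enabled-tests.json');

// Escape regex metacharacters so test names are matched literally.
const pattern = allowed
  .map(name => name.replace(/[.*+?^${}()|[\]\\]/g, '\\$&'))
  .join('|');

execFileSync('npx', ['jest', '--testNamePattern', pattern], {
  stdio: 'inherit',
});
```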

@ManasJayanth
Collaborator Author

> So in my mind, having the failing tests is probably a helpful metric. I'm sort of thinking of these more like code coverage than a real test suite, i.e. a higher percentage of passed tests is clearly better, and maybe if that drops we should see why, but it might not be a CI blocker for instance.

Exactly what I have in mind, except for the failing part: I would prefer skipped as opposed to failed. I need to ensure that tests we know for sure are supposed to pass don't fail.

> The other major case is tests that don't use the public API (e.g. those that require the event hub). Those should be decreasing regularly as the React folks change them over.

I have avoided this with the same filter that config.build.js applies in the upstream repo.

> More to the question: I do think we'll probably want to mark some tests as "skip" or "ignore" manually (maybe via a JSON file with test names?). We might be able to use Jest's options to do that, though.

That's exactly the idea behind the Babel plugin. So can we do that without the plugin, using just Jest? I didn't know about this; hence I thought a transformation would be required.

@jquense
Owner

jquense commented Jan 2, 2018

> I need to ensure that tests we know for sure are supposed to pass don't fail.

I think this is an admirable goal, but I'm a bit concerned that managing it will get out of hand pretty fast. With 500+ tests, keeping track of which should or should not pass is probably not feasible. For stuff that is a "must", it may be more manageable to make sure our own tests touch that functionality. I think trying to get more value out of the react-dom suite beyond "more or fewer tests pass/fail" is going to be a fairly time-consuming project in its own right. Perhaps we start with something simpler and see what sort of value we get out of it? Like checking the current run against the last run, with a % of tests failed/passed vs. past runs?

@ManasJayanth
Collaborator Author

ManasJayanth commented Jan 2, 2018

@jquense In that case, can you run a couple of tests individually and see if they are failing for the right reasons? What you are asking for is almost done. Or rather, I'd like to know what I've missed.

@ManasJayanth ManasJayanth changed the title [WIP] React test suite addition attempt 2 React test suite addition attempt 2 Jan 2, 2018
@ManasJayanth
Collaborator Author

ManasJayanth commented Jan 2, 2018

> Like checking the current run against the last run, with a % of tests failed/passed vs. past runs?

How do we go about this? I'm guessing custom Jest reporters?
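A custom reporter could work; a rough sketch below, where the .last-run.json baseline file is an assumption rather than anything in this PR:

```js
// Hypothetical Jest reporter: print the pass rate and compare it against the
// previous run's rate, stored in a local baseline file.
const fs = require('fs');

class PassRateReporter {
  onRunComplete(contexts, results) {
    const total = results.numTotalTests;
    const rate = total ? (results.numPassedTests / total) * 100 : 0;

    let previous = null;
    try {
      previous = JSON.parse(fs.readFileSync('.last-run.json', 'utf8')).rate;
    } catch (e) {
      // First run: no baseline yet.
    }

    fs.writeFileSync('.last-run.json', JSON.stringify({ rate }));
    console.log(
      `pass rate: ${rate.toFixed(1)}%` +
        (previous != null ? ` (last run: ${previous.toFixed(1)}%)` : '')
    );
  }
}

module.exports = PassRateReporter;
```

It would be wired in via Jest's reporters option, e.g. reporters: ['default', '&lt;rootDir&gt;/pass-rate-reporter.js'].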

@gaearon

gaearon commented Jan 2, 2018

> So there are about 529 tests (if I've got my filters right) that touch react-dom, and most of them fail due to things like the lack of EventPluginHub, etc., which we don't support right now.

This shouldn't be the case. (If you mean lack of internal modules.)

We explicitly mark test cases with internal modules as -test.internal.js. You shouldn't try to run these. The rest should run against public APIs.

I'd expect most to fail due to missing warnings. But we'll be adjusting the way we assert the warnings in a way that makes it easier for you to ignore those (facebook/react#11786).
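Given that naming convention, the internal-only files can be kept out of the run with Jest's testPathIgnorePatterns; a minimal sketch:

```js
// Skip any test file following the -test.internal.js convention.
module.exports = {
  testPathIgnorePatterns: ['-test\\.internal\\.js$'],
};
```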

@ManasJayanth
Collaborator Author

> This shouldn't be the case. (If you mean lack of internal modules.)

I'll take a look again and see why else the tests are failing.

> We explicitly mark test cases with internal modules as -test.internal.js. You shouldn't try to run these. The rest should run against public APIs.

I did add them to the ignore path, and I don't think they are running. Let me see if any other tests are invoking such internal modules.

@gaearon What do you think about the feasibility of the whole idea? How should we be doing this? The Fiber rewrite would have faced a similar situation: ensure all tests pass for compatibility, while most of them fail in the beginning. How was that managed? I remember there were two modes for running tests, one with a different set of feature flags turned on, I think?

@gaearon

gaearon commented Jan 3, 2018

To be honest, for now I'd just do this manually. Pick tests that seem important, copy-paste them, and massage them to work. We can look later at whether we can unify them better.

@ManasJayanth
Collaborator Author

@jquense Shall we revisit this sometime later? Maybe we can close it for now.

@ManasJayanth
Collaborator Author

Closing this for now. We can maybe reopen it if we get better ideas.
