
Smart Testing annotations for Affected #199

Closed
lordofthejars opened this issue Sep 27, 2017 · 3 comments · Fixed by #258

Comments

@lordofthejars
Member

Issue Overview

Currently the affected strategy only works with black-box testing (or white-box tests if the ShrinkWrap or Vert.x runner is used). The solution would be to use coverage data to detect these cases.

I am not sure when we will be able to research this, and I'm also not sure whether it will be possible to get all this information after all.

So a possible solution might be to create an annotation where the developer can set the classes (or a regexp of classes) that the test uses.

In this way we can detect classes either because they are imported or by evaluating the regexp.
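As a sketch of what this could look like (the annotation name, its attribute, and the matching logic are my assumptions here, not an agreed design), the developer would list patterns of production classes on the test, and the strategy would match the names of changed classes against them:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.util.regex.Pattern;

public class AnnotationSketch {

    // Hypothetical annotation: lists patterns of the production classes
    // this test exercises. Name and attribute are illustrative only.
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.TYPE)
    @interface ComponentUnderTest {
        String[] classes();
    }

    // Example white-box test that does not import the production class directly.
    @ComponentUnderTest(classes = {"org.example.service.*"})
    static class PaymentDeploymentTest {
    }

    public static void main(String[] args) {
        ComponentUnderTest annotation =
                PaymentDeploymentTest.class.getAnnotation(ComponentUnderTest.class);

        // The strategy could turn each entry into a regex and match it
        // against the class names changed in the commit.
        String changedClass = "org.example.service.PaymentService";
        for (String expr : annotation.classes()) {
            Pattern pattern = Pattern.compile(
                    expr.replace(".", "\\.").replace("*", ".*"));
            System.out.println(expr + " matches " + changedClass + ": "
                    + pattern.matcher(changedClass).matches());
            // -> org.example.service.* matches org.example.service.PaymentService: true
        }
    }
}
```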

@lordofthejars
Member Author

Yesterday I was reading about how to fix the problem of affected tests in white-box tests (tests that do not import production classes).

I have read about two different approaches followed by two different tools: a configuration file, or coverage.

In the configuration-file case, the developer creates a key-value file where the key is the test name and the value is a regexp-like expression describing which production classes are under test.
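As a minimal sketch of how such a file could be consumed (the entries, class names, and format are illustrative assumptions, not the actual syntax of any of those tools), the strategy would match each changed class name against the stored patterns:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class MappingFileSketch {

    public static void main(String[] args) {
        // Hypothetical key-value content: test name -> regexp of the
        // production classes under test. Entries are illustrative.
        Map<String, String> mapping = new LinkedHashMap<>();
        mapping.put("org.example.PaymentServiceTest", "org\\.example\\.payment\\..*");
        mapping.put("org.example.UserDeploymentTest", "org\\.example\\.user\\.UserService");

        // A changed production class reported by the SCM diff.
        String changedClass = "org.example.payment.PaymentService";

        // Select every test whose pattern matches the changed class.
        mapping.forEach((test, pattern) -> {
            if (changedClass.matches(pattern)) {
                System.out.println("affected: " + test);
            }
        });
        // -> affected: org.example.PaymentServiceTest
    }
}
```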

The coverage case is more complicated. The first thing you might notice is that coverage reports (and this applies to JaCoCo reports) do not store which test generated which coverage, so the only information you have in a report is the coverage of the classes.
So how do these tools fix this problem? They first run a single test, then fetch the JaCoCo report; since only one test class has run, you know exactly which coverage it generated. Then you repeat the process for all tests. Of course this is done automatically.
At the end you have a key-value pair for every test, and you store these pairs in a file. On subsequent runs you just read this generated key-value file.
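A minimal sketch of that pairing step, assuming the per-test coverage has already been extracted from the individual JaCoCo reports (the test names, class names, and the properties-based storage format are my assumptions for illustration):

```java
import java.io.StringWriter;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Properties;
import java.util.Set;

public class CoveragePairingSketch {

    public static void main(String[] args) throws Exception {
        // One entry per test class run in isolation: test -> classes its
        // JaCoCo report shows as covered. In the real workflow this would
        // be collected by running the suite one test class at a time with
        // the agent attached.
        Map<String, Set<String>> coveragePerTest = new LinkedHashMap<>();
        coveragePerTest.put("org.example.PaymentServiceTest",
                Set.of("org.example.payment.PaymentService"));
        coveragePerTest.put("org.example.CheckoutTest",
                Set.of("org.example.payment.PaymentService", "org.example.cart.Cart"));

        // Store the pairs so later runs only read this file instead of
        // repeating the whole per-test coverage collection.
        Properties pairs = new Properties();
        coveragePerTest.forEach((test, classes) ->
                pairs.setProperty(test, String.join(",", classes)));

        StringWriter out = new StringWriter();
        pairs.store(out, "test -> covered production classes");
        System.out.print(out);
    }
}
```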

Of course the advantages of one are the disadvantages of the other. The coverage approach has several problems. First, if you are running an application server (let's say Spring Boot), you need to start it with the JaCoCo agent and connect remotely to get the data for each test; this is not a blocker, but it requires some manual setup. Second, what happens with tests that run directly against production (for example deployment tests)? It is unlikely that you run tests against production with the JaCoCo agent attached. Third, how often do you update this data file? You need to keep updating it periodically with new classes/tests. The advantage is that once the agent is in place, the rest is calculated automatically.

The advantage of creating this file manually is that it is not intrusive (you don't need to configure any agent), but the disadvantage is that you need to configure every test by hand.

My suggestion would be to implement both approaches, but instead of manually creating a file of key/regexp pairs, do it with an annotation, because I think that would be easier for the developer.

Also, since my plan is to implement both approaches, I'd start with the annotation one because it is easier to implement and could be a really quick win and a nice feature to show.

The next step would be to make this configuration readable from a configuration file, and the step after that would be to generate this configuration file automatically from coverage reports.

@MatousJobanek
Contributor

Just a few comments related to the annotation approach:

  • this will require the test to have the necessary dependencies in compile scope, which I believe could be quite complicated in some cases
  • what if some classes are package-private? In other words, you cannot reference them...?
  • I'm slightly afraid of the state after some larger refactoring changes. Renames or moves of the tested methods: won't that have the same problem as the coverage approach, namely that the test will be executed in cases it shouldn't because the content of the annotation is not up to date?
  • I can imagine this approach being usable for test cases verifying REST calls, where you reference just one REST endpoint. I'm not sure how this would be done in other cases.
  • how many classes can be used as a starting point in the cases you are targeting with this issue (let's say Spring Boot)?

@lordofthejars
Member Author

lordofthejars commented Oct 9, 2017 via email
