Test Coverage Metrics #57
TestCoverageEvaluator was created at the very beginning of the project and has not been used since. I updated the code a bit so it no longer throws errors. It is still not working correctly, but it now shows some (wrong) output when you pass it input.
If this is updated to provide the correct numbers, it could probably handle your use case. The lower the metric numbers, the fewer cases are actually tested in the input source. This class was more of a hack to generate Table 4 in the paper.
It needs some work to get it into good shape and usable. Let me know if working in this direction covers your goal.
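To make the intended semantics concrete, here is a minimal sketch of such a coverage ratio: the score is the fraction of properties occurring in the input source that at least one test case touches, so a lower number means fewer cases are actually tested. The class and method names are illustrative, not RDFUnit API.

```java
// Hypothetical sketch of the coverage metric described above; not RDFUnit code.
import java.util.HashSet;
import java.util.Set;

public class CoverageSketch {

    /**
     * Returns a value in [0, 1]; lower means fewer of the properties that
     * actually occur in the input source are exercised by a test case.
     */
    public static double propertyCoverage(Set<String> propertiesInData,
                                          Set<String> propertiesWithTests) {
        if (propertiesInData.isEmpty()) {
            return 0.0; // nothing to cover
        }
        Set<String> covered = new HashSet<>(propertiesInData);
        covered.retainAll(propertiesWithTests);
        return (double) covered.size() / propertiesInData.size();
    }

    public static void main(String[] args) {
        double score = propertyCoverage(
                Set.of("foaf:name", "foaf:age", "foaf:mbox"),
                Set.of("foaf:name", "foaf:age"));
        System.out.println(score); // 0.666..., i.e. one of three used properties is untested
    }
}
```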
Note that, ideally, the metrics should be identified by doing pattern identification inside the SPARQL queries.
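As an illustration of what such pattern identification could look like, here is a sketch that parses a test's SPARQL query with Apache Jena ARQ and collects the predicates its WHERE clause constrains. This is only a sketch of the idea, not the project's implementation.

```java
// Sketch: extract the concrete predicates a test's SPARQL query constrains,
// using Apache Jena ARQ. Illustrative only; not RDFUnit's implementation.
import org.apache.jena.query.Query;
import org.apache.jena.query.QueryFactory;
import org.apache.jena.sparql.core.TriplePath;
import org.apache.jena.sparql.syntax.ElementPathBlock;
import org.apache.jena.sparql.syntax.ElementVisitorBase;
import org.apache.jena.sparql.syntax.ElementWalker;

import java.util.HashSet;
import java.util.Iterator;
import java.util.Set;

public class QueryPatternSketch {

    /** Collects the concrete predicate URIs mentioned in the WHERE clause. */
    public static Set<String> predicatesIn(String sparql) {
        Query query = QueryFactory.create(sparql);
        Set<String> predicates = new HashSet<>();
        ElementWalker.walk(query.getQueryPattern(), new ElementVisitorBase() {
            @Override
            public void visit(ElementPathBlock block) {
                Iterator<TriplePath> it = block.patternElts();
                while (it.hasNext()) {
                    TriplePath tp = it.next();
                    // getPredicate() is null for complex property paths
                    if (tp.getPredicate() != null && tp.getPredicate().isURI()) {
                        predicates.add(tp.getPredicate().getURI());
                    }
                }
            }
        });
        return predicates;
    }

    public static void main(String[] args) {
        String q = "SELECT ?s WHERE { ?s <http://xmlns.com/foaf/0.1/age> ?age . "
                 + "FILTER(?age < 0) }";
        System.out.println(predicatesIn(q)); // [http://xmlns.com/foaf/0.1/age]
    }
}
```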
We have the rdfunit-junit integration up and running, which gives us a good overview of failing test cases, especially in conjunction with an IDE and/or CI server. If a test is "red", we can trust that something broke.
However, the issue with "green" tests is that we do not actually know why they are green: the data might be valid according to the test case, or there might be no data to validate at all. The latter would decrease the significance of that test (at least in the given context).
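One way to make vacuously green tests visible is sketched below with JUnit 4's Assume: a test whose candidate set is empty is reported as skipped instead of passing. countCandidates() and violations() are hypothetical helpers, not part of rdfunit-junit.

```java
// Sketch: keep a test from going "green" vacuously by treating an empty
// candidate set as a skipped test. Helper methods are hypothetical.
import static org.junit.Assert.assertEquals;
import static org.junit.Assume.assumeTrue;

import org.junit.Test;

public class GreenTestSignificanceSketch {

    @Test
    public void negativeAgeIsReported() {
        long candidates = countCandidates("foaf:age"); // hypothetical helper
        // If there is nothing to validate, mark the test as skipped rather
        // than letting it pass silently.
        assumeTrue("no foaf:age triples in input data", candidates > 0);
        assertEquals(0, violations("foaf:age"));       // hypothetical helper
    }

    private long countCandidates(String property) { /* query the input data */ return 0; }

    private long violations(String property) { /* run the test case */ return 0; }
}
```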
Furthermore, we are missing metrics for how much of the input data is actually covered by the test cases. Looking at TestCoverageEvaluator, this seems usable, though we need some elaboration: it is currently not clear what input is expected.
Request: