#489: Include deviceRegistration in DeviceDeploymentStatus (#488), base: develop
This should be `DeviceDeployed`. 🤔 Why does it work? I'm not a big TDD fan, but there is something to seeing a test fail before making it pass. Seems like, as luck would have it, that's exactly where coverage is missing. 😅 Could you add a preceding commit which provides coverage for `DeploymentService.deviceDeployed()`? And, while at it, verify that all the other endpoints of this service are actually called in the current regression tests.
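Something along these lines could guard against that in general. This is only a sketch, not this repository's actual test setup: the `DeploymentService` import path and the way request names are collected from the generated resources are assumptions.

```kotlin
import dk.cachet.carp.deployments.application.DeploymentService // assumed package path
import kotlin.test.Test
import kotlin.test.assertTrue

// Sketch: fail when an application service endpoint has no corresponding request
// in the generated JSON regression resources, instead of silently lacking coverage.
class RegressionCoverageTest
{
    private fun verifyAllEndpointsCovered( serviceInterface: Class<*>, coveredRequests: Set<String> )
    {
        val endpoints = serviceInterface.declaredMethods.map { it.name }.toSet()
        val missing = endpoints - coveredRequests
        assertTrue( missing.isEmpty(), "No regression request found for: $missing" )
    }

    @Test
    fun all_DeploymentService_endpoints_appear_in_regression_requests()
    {
        // Hypothetical: collect request names from the generated JSON files,
        // e.g., by reading the class discriminator of each serialized request.
        val coveredRequests = setOf( "createStudyDeployment", "deviceDeployed" /* , ... */ )

        verifyAllEndpointsCovered( DeploymentService::class.java, coveredRequests )
    }
}
```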
We only have coverage for `CreateStudyDeployment`. If I remember correctly that was the only failing test, as "actual testing of the status responses should already be covered adequately in StudyDeployment tests", but the regression test cases are generated from `DeploymentServiceTest`.

Do we only want the generated JSON request files, or do we also want those tests in `DeploymentServiceTest`? I think I'll include them first anyway so we can agree on the test cases; they can be removed later if not desired.

I added it to 1.0 because I felt the test case was needed from the start of this major version, not due to some changes introduced in 1.1, and then simply forgot to add them to 1.1 😛. When I generated those JSON files in version 1.0 and saw a list of diffs with the original ones, I wondered what I had broken, until I realized the differences are just the generated UUIDs...
I'd always keep them in sync, so that it's just a matter of copying the output of all of them over.

But, obviously, the comment I left in the code there didn't consider the implications of reduced regression testing of the infrastructure layer (JSON) by not covering all potential input/output. 😅 So, albeit valid (there is plenty of functional coverage in StudyDeploymentTest), it doesn't cover serialization and the JSON schema definitions.
I think it would make sense to write test cases that try to cover as much as possible, rather than writing them as unit tests. These are more or less integration tests after all. So I'd focus on longer scenarios with multiple calls, reflecting a fuller study lifetime, to get more coverage for the infrastructure layer; something along the lines of the sketch below.
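Purely as an illustration of the chaining (the step names below are stand-ins, not the actual `DeploymentService` signatures): one long scenario touches createStudyDeployment, registerDevice, getDeviceDeploymentFor, deviceDeployed, and stop, so a single set of generated JSON exercises much more of the serializers.

```kotlin
// Illustrative stand-in interface; a real scenario would call DeploymentService's
// endpoints directly and write each request/response to the regression resources.
interface LifetimeScenarioSteps
{
    suspend fun createDeployment(): String // returns a deployment id
    suspend fun registerDevice( deploymentId: String, roleName: String )
    suspend fun retrieveDeviceDeployment( deploymentId: String, roleName: String )
    suspend fun markDeviceDeployed( deploymentId: String, roleName: String )
    suspend fun stop( deploymentId: String )
}

// One scenario, many endpoints: more infrastructure (serialization) coverage per test case.
suspend fun runFullLifetime( steps: LifetimeScenarioSteps, roleName: String )
{
    val deploymentId = steps.createDeployment()
    steps.registerDevice( deploymentId, roleName )
    steps.retrieveDeviceDeployment( deploymentId, roleName )
    steps.markDeviceDeployed( deploymentId, roleName )
    steps.stop( deploymentId )
}
```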
I actually spotted that. 🤔 I thought I had injected a more deterministic GUID and timestamp generator for the creation of these tests. I saw it in some test resources, but not in others. I don't remember whether that's expected/normal, or an oversight.
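For reference, the kind of injection I mean; only a generic sketch (the provider names are made up, and the project may hook this up differently): route all IDs and timestamps through fixed providers so regenerated resources only diff when serialization actually changes.

```kotlin
import java.time.Clock
import java.time.Instant
import java.time.ZoneOffset
import java.util.UUID

// Hypothetical seams: test resource generation asks these providers instead of
// calling UUID.randomUUID() or Instant.now() directly.
fun interface UuidProvider { fun next(): UUID }
fun interface NowProvider { fun now(): Instant }

/** Deterministic IDs: ...-000000000001, ...-000000000002, ... */
class IncrementingUuidProvider : UuidProvider
{
    private var counter = 0L
    override fun next(): UUID = UUID( 0L, ++counter )
}

/** A single fixed point in time, so regenerated JSON doesn't diff on timestamps. */
class FixedNowProvider( fixedAt: Instant ) : NowProvider
{
    private val clock = Clock.fixed( fixedAt, ZoneOffset.UTC )
    override fun now(): Instant = Instant.now( clock )
}

fun main()
{
    val ids = IncrementingUuidProvider()
    val now = FixedNowProvider( Instant.parse( "2020-01-01T00:00:00Z" ) )

    // Same output on every run, so only real serialization changes show up in diffs.
    println( ids.next() ) // 00000000-0000-0000-0000-000000000001
    println( ids.next() ) // 00000000-0000-0000-0000-000000000002
    println( now.now() )  // 2020-01-01T00:00:00Z
}
```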