Parameterized tests get executed when no test data is present. #826
@Remco-74 That is how it is, but a wrapper around `It` can enforce that at least one test case is provided:

```powershell
function ItWithMandatoryTestCases {
    [CmdletBinding(DefaultParameterSetName = 'Normal')]
    param(
        [Parameter(Mandatory = $true, Position = 0)]
        [string]$name,

        [Parameter(Position = 1)]
        [ScriptBlock] $test = {},

        [System.Collections.IDictionary[]] $TestCases,

        [Parameter(ParameterSetName = 'Pending')]
        [Switch] $Pending,

        [Parameter(ParameterSetName = 'Skip')]
        [Alias('Ignore')]
        [Switch] $Skip
    )

    if ($null -eq $TestCases -or $TestCases.Count -lt 1)
    {
        throw "Cannot run test, at least one test case is required."
    }

    It @PSBoundParameters
}

Describe 'Defect: Empty Test Set results in executing tests.' {
    BeforeAll {
        # an empty array, i.e. no test cases at all
        # (an empty hashtable @{} would be coerced to a one-element IDictionary[] and defeat the check)
        $testCases = @()
    }

    ItWithMandatoryTestCases 'should ignore a parameterized test when no test data is given' -TestCases $testCases {
        param($testCase)
        $testCase | Should Not Be $null
    }

    ItWithMandatoryTestCases 'should ignore a parameterized test when test data is $null' -TestCases $null {
        param($testCase)
        $testCase | Should Not Be $null
    }
}
```
@dlwyatt the Skip and Pending switches have their own parameter sets, but for this to work TestCases would also need its own parameter set. Then Skip could not be used together with TestCases (see the sketch below), and when I put TestCases in all three sets, PowerShell cannot resolve the parameter set. Am I just doing it wrong? If not, then putting Skip and Pending in the default set should not be a big problem, and I think Skip should win in that case.
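A minimal sketch of the constraint described above (the function name is hypothetical): once `TestCases` gets its own parameter set, no single set contains both `TestCases` and `Skip`, so they cannot be combined.

```powershell
function Invoke-Demo {
    [CmdletBinding(DefaultParameterSetName = 'Normal')]
    param(
        # TestCases is only valid in its own set...
        [Parameter(Mandatory = $true, ParameterSetName = 'TestCases')]
        [System.Collections.IDictionary[]] $TestCases,

        # ...and Skip is only valid in its set
        [Parameter(ParameterSetName = 'Skip')]
        [Switch] $Skip
    )
    $PSCmdlet.ParameterSetName
}

Invoke-Demo -TestCases @{ a = 1 }        # OK: resolves to the 'TestCases' set
Invoke-Demo -Skip                        # OK: resolves to the 'Skip' set
Invoke-Demo -TestCases @{ a = 1 } -Skip  # error: parameter set cannot be resolved
```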
I don't think you want to make this mandatory, based on reading the OP. They just want us to ignore the call to `It` in that case. Right now the relevant code looks roughly like this:

```powershell
if ($null -ne $TestCases -and $TestCases.Count -gt 0)
{
    foreach ($testCase in $TestCases)
    {
        # ....
    }
}
else
{
    # ...
}
```

We could change that to this:

```powershell
if ($PSBoundParameters.ContainsKey('TestCases'))
{
    if ($null -ne $TestCases -and $TestCases.Count -gt 0)
    {
        foreach ($testCase in $TestCases)
        {
            # ....
        }
    }
}
else
{
    # ...
}
```

That way if someone explicitly says (for example) `-TestCases $null`, the `It` is skipped instead of being executed without data.
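A minimal sketch of the distinction this proposal relies on (the function name is hypothetical): `$PSBoundParameters.ContainsKey` can tell an explicitly passed `$null` apart from a parameter that was never passed at all, which a plain `$null` check cannot.

```powershell
function Test-Binding {
    param(
        [System.Collections.IDictionary[]] $TestCases
    )

    if ($PSBoundParameters.ContainsKey('TestCases')) {
        "-TestCases was passed explicitly (its value may still be null or empty)"
    }
    else {
        "-TestCases was not passed at all"
    }
}

Test-Binding -TestCases $null   # passed explicitly
Test-Binding                    # not passed
```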
Seems reasonable to me. Right now it behaves as though you didn't use `-TestCases` at all.
@dlwyatt re-read the OP and I think you are correct that they do want to skip the `It` in that case.

Personally I don't think that's the correct default for Pester. In a unit testing scenario: once you provided the TestCases parameter, you have shown intent to provide test cases, so getting none should fail the test rather than silently skip it.

This only amplifies when you use Pester for environment validation. There you in most cases don't want your test suite to succeed because it lacks data. You want to be warned that the list of backups you are validating is empty, not see a green run.

For the same reason I recommend putting checks on foreaches when you place them around It blocks: you may end up generating no tests, and no tests means a passing test suite (see the sketch below). On the other hand, sometimes you might want to do just that and generate no tests when there are no test cases.

All in all I will remove this from the 4.0 milestone and ask the community what they think is the most useful behavior.
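A minimal sketch of the guard recommended above (the path and names are hypothetical): without the count check, an empty collection silently generates zero tests and the suite passes.

```powershell
Describe 'Backups are valid' {
    $backups = Get-ChildItem -Path '\\server\backups' -Filter '*.bak'

    # guard: fail loudly instead of generating zero tests from an empty list
    if (@($backups).Count -eq 0) {
        throw 'No backups found to validate.'
    }

    foreach ($backup in $backups) {
        It "backup $($backup.Name) is not empty" {
            $backup.Length | Should Not Be 0
        }
    }
}
```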
I would expect the It block not to run when -TestCases was specified but its value is empty. It should run for every test case provided, so when no test case is given, It does not need to run. The way it is now, you need to test the parameters for $null values inside every test, which gets old fast (see the sketch below). Also, how would one determine that a test record has actual $null values, instead of being an empty test case? Testing for completeness (to validate that test cases are present when expected) is a test in itself; it should not be the responsibility of every single test. The case where -TestCases is provided but given a $null value (instead of an empty collection) may be different, as it may indicate an error in collecting test data, or incorrect usage of the -TestCases parameter.
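To illustrate the boilerplate being objected to (`$userTestCases` is a hypothetical data set): without the requested behavior, every parameterized test needs a guard like this at the top.

```powershell
It 'validates a user record' -TestCases $userTestCases {
    param($name, $email)

    # guard repeated in every single test, only to catch empty test data
    $name  | Should Not Be $null
    $email | Should Not Be $null

    # ... the actual assertions follow
}
```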
I understand, but I don't think it is the correct default, because it is a dangerous default.

Test cases are a convenience that lets you move from multiple separate tests that differ only by data to one test with multiple test cases. You move from 1 test to 1 test with 1 test case, from 2 tests to 1 test with 2 test cases, and so on, but you don't move from 0 tests to 1 test with 0 test cases. You could write such a parameterized test, but it would be pointless: without any examples the test does not influence your production code in any way, so it does not matter whether the test exists or not.

What is more likely, if you end up with 1 test with 0 test cases, is that you made a mistake somewhere and failed to provide the data, and this should be highlighted by failing your test. That is what happens now, in a not-so-obvious way, because nulls are provided instead of data. I think it should continue to happen, but with an error message that clearly states that the framework failed the test because there were no test cases.

Up until here I was talking only about unit testing, but I realize that people use Pester for validating environments as well, where you don't know what input data you will get, and it might make sense to skip tests when you get no test cases. I would still keep the default, because it is safer to fail when you made a mistake and, say, failed to load the list of users blacklisted from remotely logging into computers. But I would add a switch that explicitly allows 0 test cases (a sketch is below).

This is one of the places where the needs of unit testing and environment validation differ greatly, and where it sucks that we don't have two different frameworks for that.
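A sketch of what such an opt-in could look like; the `-AllowEmptyTestCases` switch and the `Test-RemoteLogonDisabled` helper are hypothetical, not actual Pester features.

```powershell
# Hypothetical switch: with it, an empty set silently generates zero tests;
# without it, the framework fails the test with a clear "no test cases" error.
It 'user is blocked from remote logon' -TestCases $blacklistedUsers -AllowEmptyTestCases {
    param($name)
    Test-RemoteLogonDisabled -UserName $name | Should Be $true
}
```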
This is working as intended; I still think the current default is how it should behave.
When using parameterized tests, the tests get executed (with missing parameter data) when the test cases are empty or $null.
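A minimal sketch of the reported behavior (test and variable names are illustrative):

```powershell
$testCases = @()   # e.g. read from a CSV file that turned out to be empty

Describe 'Data-driven tests' {
    # Expected: zero executions. Actual: the It body runs once,
    # with every test-case parameter bound to $null.
    It 'validates a record' -TestCases $testCases {
        param($record)
        $record | Should Not Be $null   # fails, because $record is $null
    }
}
```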
Results in:
This is very inconvenient for us. We like to create data-driven tests, using a CSV file or database table. We do not want to have to add null checks to every test, nor do we want the tests to fail unexpectedly when no test data is given.