
Combine test cases into single file #6

Open
Luap99 opened this issue Jan 3, 2021 · 2 comments

@Luap99 commented Jan 3, 2021

Currently the test cases are defined separately for each shell, and most of them are duplicates. This makes them unnecessarily hard to maintain. I think it would make more sense to store them in a single file and run the same cases for each shell. This would also make it clear which cases are failing for each shell.

I think it would make sense to store the test cases in a table-like structure. In the podman project we have a function for such tests: https://github.com/containers/podman/blob/142b4ac966e12559c534be380093a44d0a1d2959/test/system/helpers.bash#L438-L465
You can see it in use here: https://github.com/containers/podman/blob/142b4ac966e12559c534be380093a44d0a1d2959/test/system/065-cp.bats#L25-L51

I think a good structure could be

Name of the test | test command | expected output | (optional) names of the shells where the test is expected to fail

And possibly more columns, depending on what information the tests require.
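
To make this concrete, here is a purely illustrative sketch of what such a table file could look like (the rows below are made-up placeholders; only the column layout reflects the proposal):

# name of the test   | test command | expected output  | (optional) shells expected to fail
static completion    | comp         | completion       |
dynamic completion   | comp sub-    | sub-one sub-two  | fish zsh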

@marckhouzam (Owner) commented

> Currently the test cases are defined separately for each shell, and most of them are duplicates. This makes them unnecessarily hard to maintain. I think it would make more sense to store them in a single file and run the same cases for each shell. This would also make it clear which cases are failing for each shell.

This is how I had written the original version of these tests for Helm. However, I found it difficult to handle different expected results for different shells, and to conditionally run tests, because "if statements" are written differently in fish than in the other shells.
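
To illustrate that syntax difference (the $SHELLTYPE variable and the guarded call are only placeholders, not existing code): conditionally running a test reads like this in bash or zsh,

if [ "$SHELLTYPE" != fish ]; then
        _completionTests_verifyCompletion "testprog comp" "completion"
fi

but has to be written with fish's own keywords (no then, and end instead of fi):

if test "$SHELLTYPE" != fish
        _completionTests_verifyCompletion "testprog comp" "completion"
end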

So this time I chose to split them to give more flexibility at the expense of a higher maintenance burden.

> I think it would make sense to store the test cases in a table-like structure. In the podman project we have a function for such tests

I'll have a look, maybe it addresses my concerns.

@Luap99 (Author) commented Jan 3, 2021

I was thinking of something like this. The testfile would contain rows such as:

static completion | comp | completion | zsh

And in comp-tests.bash this:

while read name command expected shell; do
        _completionTests_verifyCompletion "testprog $command" "$expected"
done < <(parse_table "$(cat testfile)")

$name and $shell can then be used to provide more information when a test fails.
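
A slightly expanded sketch of that loop, assuming the shell under test is available in a variable like $SHELLTYPE and that _completionTests_verifyCompletion returns a non-zero status on failure (both are assumptions, not existing code):

while read name command expected shell; do
        # skip rows whose optional "shell" column names the shell under test
        if [ "$shell" = "$SHELLTYPE" ]; then
                echo "SKIP: $name (known to fail on $SHELLTYPE)"
                continue
        fi
        _completionTests_verifyCompletion "testprog $command" "$expected" \
                || echo "FAIL: $name on $SHELLTYPE"
done < <(parse_table "$(cat testfile)")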
