Add docs for writing test cases #791
Conversation
This needs a link to the schema for the YAML files, so users know where to look for more relevant docs.
We might also consider replicating some of the content from the proto comments here so this page can provide a little more "how to" vs. requiring the user to read comments for every field in the test suite schema.
In particular, we should reiterate the fields in the schema that should not be put into the test case -- like the host name and port, TLS details, and even standard/protocol-oriented request and response headers and trailers (only custom headers belong in the test case).
I think it would also be useful to show a (small-ish) example in full, and maybe also describe some of the basics, like the HTTP versions, protocols, compression algorithms, and stream types.
And in the section mentioning how the test can be constrained with the "relevant ..." fields, you might also describe briefly how permutations of each test case will be computed, based on the features supported by the client or server under test and the relevant properties of the test suite.
Though perhaps the above -- a section on test case permutations -- belongs in the page about configuration, since it's most relevant to someone configuring and running the tests for their client or server implementation. I guess ideally it could be mentioned here and then linked over to more information in the other page. So maybe we can come back to it and that breadcrumb once the target doc is written...
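To make the "small-ish example in full" ask concrete, a sketch might look like the following. This is hypothetical: the field names, enum values, and type URL are assumptions and should be checked against the actual TestSuite schema before use.

```yaml
# Hypothetical sketch of a small test suite file; verify field names
# against the actual TestSuite schema before relying on them.
name: example_suite
relevantProtocols:
  - PROTOCOL_CONNECT
testCases:
  - request:
      testName: unary/success
      streamType: STREAM_TYPE_UNARY
      requestMessages:
        - "@type": type.googleapis.com/connectrpc.conformance.v1.UnaryRequest
          responseDefinition:
            responseData: "dGVzdCByZXNwb25zZQ=="
```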
docs/authoring_test_cases.md
The expected response for a test is auto-generated based on the request details. The conformance runner will determine what the response should be by the values specified in the test suite and individual test cases. However, you have the ability to explicitly specify your own expected response directly in the test definition itself.
I think this could use some clarity on exactly when you need to provide the expected response. It currently sounds like it's up to the user's whim.
I think the advice to give is: do not include an explicit expected response unless it is necessary, as it can make the test YAML overly verbose and harder to read (and to write and maintain). It is not necessary when the expected response simply re-states the response definition that appears in the requests. So if the test induces behavior that prevents the server from sending, or the client from receiving, the full response definition (timeouts, cancellations, and exceeding message size limits are good examples of this), it will be necessary to define the expected response explicitly. Otherwise, omit it.
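As a hypothetical illustration of that advice (field names here are assumptions, not taken from the schema): a timeout test prevents the client from receiving the full response definition, so an explicit expected response is warranted.

```yaml
# Hypothetical sketch: the explicit expectedResponse is needed because
# the timeout prevents the client from receiving the defined response.
- request:
    testName: timeouts/deadline-exceeded
    streamType: STREAM_TYPE_UNARY
    timeoutMs: 1
    requestMessages:
      - "@type": type.googleapis.com/connectrpc.conformance.v1.UnaryRequest
        responseDefinition:
          responseDelayMs: 1000
  expectedResponse:
    error:
      code: CODE_DEADLINE_EXCEEDED
```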
Co-authored-by: Joshua Humphries <[email protected]>
@@ -174,6 +174,18 @@ func (lib *testCaseLibrary) expandSuite(suite *conformancev1.TestSuite, configCa

func (lib *testCaseLibrary) expandCases(cfgCase configCase, namePrefix []string, testCases []*conformancev1.TestCase) error {
	for _, testCase := range testCases {
		if testCase.Request.TestName == "" {
Added a bit more validation so that we could explicitly state these fields were required. Notably, there was no validation at all that a test name be specified. If it was omitted, the tests ran fine. If the tests happened to fail, the output just didn't show a test name, which probably isn't good. For example:
FAILED: TLS Client Certs/HTTPVersion:3/Protocol:PROTOCOL_GRPC_WEB/Codec:CODEC_TEXT/Compression:COMPRESSION_ZSTD/TLS:true:
expecting 0 response messages but instead got 1
Also for the other validation I added, the tests would run the whole suite but then fail with a cryptic error like:
FAILED: TLS Client Certs/HTTPVersion:3/Protocol:PROTOCOL_GRPC_WEB/Codec:CODEC_TEXT/Compression:COMPRESSION_ZSTD:
service name is not a valid service
I figured this validation was better because it catches the problem before any tests are actually run.
You could provide a little bit more context for these errors if you change the loop vars from _, testCase to i, testCase:

return fmt.Errorf("test case #%d: test case has no name", i+1)
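Under the assumption of a simplified TestCase struct (the real type is a protobuf message with the name nested under a Request field), the suggested indexed validation might be sketched like this:

```go
package main

import "fmt"

// TestCase is a simplified stand-in for the real conformance protobuf
// message; only the name field matters for this sketch.
type TestCase struct {
	TestName string
}

// validateCases reports a missing test name up front, including the
// 1-based position of the offending case, before any tests run.
func validateCases(testCases []*TestCase) error {
	for i, testCase := range testCases {
		if testCase.TestName == "" {
			return fmt.Errorf("test case #%d: test case has no name", i+1)
		}
	}
	return nil
}

func main() {
	err := validateCases([]*TestCase{{TestName: "unary/success"}, {TestName: ""}})
	fmt.Println(err) // test case #2: test case has no name
}
```

Reporting the index means the error is actionable even for cases that have no name to show.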
@@ -441,7 +453,7 @@ func populateExpectedResponse(testCase *conformancev1.TestCase) error {
 // message in this situation, where the response data value is some fixed string (such as "no response definition")
 // and whose request info will still be present, but we expect it to indicate zero request messages.
 if len(testCase.Request.RequestMessages) == 0 {
-	return nil
+	return fmt.Errorf("test %s has no request messages specified", testCase.Request.TestName)
This was actually causing a panic because the expected response was nil and we were bypassing all calculation of expected responses. So line 170 in results.go was blowing up on the nil expected.
@jhump expanded the docs a bit. lmk what you think.
@@ -441,6 +453,7 @@ func populateExpectedResponse(testCase *conformancev1.TestCase) error {
 // message in this situation, where the response data value is some fixed string (such as "no response definition")
 // and whose request info will still be present, but we expect it to indicate zero request messages.
 if len(testCase.Request.RequestMessages) == 0 {
+	testCase.ExpectedResponse = &conformancev1.ClientResponseResult{}
This was causing a panic if you omitted requestMessages from a test case. Since ExpectedResponse was nil here, once we returned, the assert function in results.go was blowing up on line ~170.
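The shape of that fix can be sketched with simplified stand-in types (the real ones are protobuf messages; field names here are illustrative): installing an empty, non-nil expected response means later assertions never dereference nil.

```go
package main

import "fmt"

// ClientResponseResult and TestCase are simplified stand-ins for the
// real protobuf messages; only the nil-handling shape matters here.
type ClientResponseResult struct {
	Payloads []string
}

type TestCase struct {
	RequestMessages  []string
	ExpectedResponse *ClientResponseResult
}

// populateExpectedResponse sketches the fix discussed above: when a
// test case has no request messages, install an empty (but non-nil)
// expected response instead of leaving the field nil.
func populateExpectedResponse(tc *TestCase) {
	if len(tc.RequestMessages) == 0 {
		tc.ExpectedResponse = &ClientResponseResult{}
		return
	}
	// ...the normal expected-response computation would happen here...
}

func main() {
	tc := &TestCase{} // requestMessages omitted
	populateExpectedResponse(tc)
	fmt.Println(tc.ExpectedResponse != nil) // true
}
```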
Looks really good. I left a few more comments, but I think this is close.
Co-authored-by: Joshua Humphries <[email protected]>
@jhump Updated w/ latest feedback. Lmk what you think.
LG!
This adds some documentation for writing test cases, including: