
http/2: correctly handle 100-continue #562

Merged: mattklein123 merged 6 commits into master from http2_expect_continue on Mar 16, 2017.
Conversation

mattklein123 (Member):

Previously we did not correctly handle "expect: 100-continue" in the HTTP/2
codec. Because of protocol differences, we handle 100-continue directly in the
codec. This change implements support in the HTTP/2 codec similar to HTTP/1.1,
where we immediately send a continue response.

This commit also adds partial support in the client codec for 1xx responses.
The code will raise decodeHeaders(...) twice in the 1xx response case. This
is good enough for testing, but would probably crash higher layers if an
upstream server actually responded this way. Right now we effectively trust
the servers we connect to, so this can be handled in a follow-up if necessary.

fixes #561
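
As context for reviewers, here is a minimal sketch of the server-side behavior against nghttp2's submit API. This is not the PR's actual code; sendContinueResponse is a hypothetical helper, and the real logic lives inside the codec:

#include <nghttp2/nghttp2.h>
#include <cassert>
#include <cstring>

// Send an informational ":status: 100" header block on the given stream.
static void sendContinueResponse(nghttp2_session* session, int32_t stream_id) {
  nghttp2_nv status = {(uint8_t*)":status", (uint8_t*)"100",
                       strlen(":status"), strlen("100"), NGHTTP2_NV_FLAG_NONE};
  // No END_STREAM flag: the stream stays open for the final response headers
  // that the server sends later.
  int rc = nghttp2_submit_headers(session, NGHTTP2_FLAG_NONE, stream_id, nullptr,
                                  &status, 1, nullptr);
  assert(rc == 0); // mirrors the PR's debug-only assertion style
}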

mattklein123 (Member, Author):

@lyft/network-team @jpinner-lyft @andrewwebber

stream->waiting_for_final_headers_ = false;

// This can only happen in the client case in a response, when we received a 1xx to
// start out with. In this case, raise as headers. nghttp2 message checking guarentees

Contributor:

typo: "guarantees"

mattklein123 (Member, Author):

@htuch could you give this a look also please.

htuch (Member) left a comment:

Do any Envoy use cases motivate forwarding the 100-continue to the backend? It seems that, in general, the original intent of the Expect header was to have the proxy forward it and then let the backend determine whether or not to reply 100.

This might be legacy by now, and it certainly would hurt performance to round-trip unconditionally.

-const Http::HeaderMapPtr Filter::TOO_MANY_REQUESTS_HEADER{new Http::HeaderMapImpl{
-    {Http::Headers::get().Status, std::to_string(enumToInt(Code::TooManyRequests))}}};
+const std::unique_ptr<const Http::HeaderMap> Filter::TOO_MANY_REQUESTS_HEADER{
+    new Http::HeaderMapImpl{

htuch (Member):

The Google C++ style guide doesn't like globals like this, since unjoined threads might observe destruction when main() is done. https://google.github.io/styleguide/cppguide.html#Static_and_Global_Variables.

Depending on whether this is something we want in the Envoy coding style (maybe we can assume we always have sane threading behavior and everything joins before the end?), this could be addressed by making this a function that returns a reference to a static, new-allocated object.

mattklein123 (Member, Author):

We do correctly join all threads before exiting main. I'm not a huge fan of the raw pointer approach because AFAIK this will cause the heap leak detector to fail on standard settings. I don't feel super strongly about this, but we do this in other places also. Thoughts?

htuch (Member):

There's also the concern about non-deterministic ordering of initialization. Not really a concern for simple local objects like this.

I think that when done sanely (e.g. don't initialize static data that depends on other static data), and given we have proper thread joining on exit, it's probably fine.

It would make sense to document this in https://github.com/lyft/envoy/blob/master/CONTRIBUTING.md though.

mattklein123 (Member, Author), Mar 14, 2017:

Yes, for init, we need to be more careful, and we do handle more complex cases in init by having functions that return static variables. I will do a doc update.
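
A minimal sketch of that pattern, using the types from the snippets above (continueHeader is a hypothetical name, not necessarily what the PR uses):

// Function-local static: initialized on first use, so there is no global
// construction-order problem, and nothing for the heap leak detector to flag.
const Http::HeaderMap& continueHeader() {
  static const Http::HeaderMapImpl header{{Http::Headers::get().Status, "100"}};
  return header;
}

The leaked-pointer variant htuch describes (returning a reference to a static, new-allocated object that is never destroyed) additionally survives past exit for unjoined threads, at the cost of tripping leak detectors, which is the trade-off discussed above.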

@@ -28,6 +32,9 @@ bool Utility::reconstituteCrumbledCookies(const HeaderString& key, const HeaderS

ConnectionImpl::Http2Callbacks ConnectionImpl::http2_callbacks_;
ConnectionImpl::Http2Options ConnectionImpl::http2_options_;
const std::unique_ptr<const Http::HeaderMap> ConnectionImpl::CONTINUE_HEADER{

htuch (Member):

Same as above (regarding static non-POD).

StreamImpl::buildHeaders(final_headers, *CONTINUE_HEADER);
int rc = nghttp2_submit_headers(session_, 0, stream->stream_id_, nullptr, &final_headers[0],
final_headers.size(), nullptr);
ASSERT(rc == 0);

htuch (Member):

Is this definitely a failure that can be ignored in non-debug builds?

mattklein123 (Member, Author):

Based on the logic flow, this function should never fail, so I made it a debug-only assert. We do this in a few other places in this file.

htuch (Member):

What about NGHTTP2_ERR_NOMEM?

mattklein123 (Member, Author):

None of the Envoy code is safe against memory allocation failure, so we don't handle such errors anywhere and assume we will shortly crash if OOM. (This is a design philosophy; we should discuss it outside the context of this change if there are concerns.)

htuch (Member):

Sure, we can discuss this elsewhere (I do worry about libraries using ENOMEM in non-OOM scenarios, as opposed to tcmalloc). In any case, could you also include this design philosophy in https://github.com/lyft/envoy/blob/master/CONTRIBUTING.md when updating? Thanks.

mattklein123 (Member, Author):

Yes, that is a valid concern and might necessitate some paranoia checking in certain cases, which would crash us if we receive ENOMEM, etc. I will open a GH issue to track this, as well as update the contributing guide.
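
A hypothetical sketch of such a paranoia check, reusing the identifiers from the quoted diff above (the PR as merged keeps the debug-only ASSERT):

// Treat a non-zero nghttp2 return as fatal even in release builds, in case the
// library reports NGHTTP2_ERR_NOMEM (or another error) in a non-OOM scenario.
int rc = nghttp2_submit_headers(session_, 0, stream->stream_id_, nullptr,
                                &final_headers[0], final_headers.size(), nullptr);
if (rc != 0) {
  abort(); // crash loudly rather than continue in an undefined state
}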

response_encoder->encodeHeaders(response_headers, true);
}

TEST_P(Http2CodecImplTest, ExpectContinueTrailersResponse) {

htuch (Member):

Do you cover the case where the non-100 :status is in the HEADERS immediately following the 100 :status?

mattklein123 (Member, Author), Mar 13, 2017:

What do you mean exactly? Like if Envoy immediately responds with 404 right after sending 100?

htuch (Member):

Yeah, but thinking about it a bit more, I think you have good coverage of the new code paths in these tests, so it's good as is.

mattklein123 (Member, Author):

> Do any Envoy use cases motivate forwarding the 100-continue to the backend? It seems that, in general, the original intent of the Expect header was to have the proxy forward it and then let the backend determine whether or not to reply 100.
>
> This might be legacy by now, and it certainly would hurt performance to round-trip unconditionally.

I don't know of any currently, and we already do a direct response for HTTP/1.1. Being able to correctly proxy the 1xx through from server to client would be a giant change. My preference would be to keep the direct response, and then if we actually find a case that requires 100-continue server support (I really doubt this will happen), we can cover it then.


case NGHTTP2_HCAT_RESPONSE: {
uint64_t response_code = Http::Utility::getResponseStatus(*stream->headers_);
if (CodeUtility::is1xx(response_code)) {
stream->waiting_for_final_headers_ = true;

htuch (Member):

Suggest renaming to waiting_for_non_informational_headers_; "informational" is the terminology used in the RFCs. Although I notice nghttp2 uses the final/non-final terminology; if you want to stick with that, then maybe add a comment where this field is defined.


switch (frame->headers.cat) {
case NGHTTP2_HCAT_RESPONSE: {
uint64_t response_code = Http::Utility::getResponseStatus(*stream->headers_);

htuch (Member):

General preference for const on an intermediate value like this.
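
With the suggested qualifier, the quoted line would read:

const uint64_t response_code = Http::Utility::getResponseStatus(*stream->headers_);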

// proper flow here.
// TODO(mattklein123): Higher layers don't currently deal with a double decodeHeaders()
// call and will probably crash. We do this for testing the server case. We correctly
// handle the server case. In the future, if needed, we can properly handle 1xx in higher

htuch (Member):

Not clear what is meant by the server case here: how does the server receive a 100 response? Isn't it the one generating the 100 response?


MockStreamDecoder request_decoder;
StreamEncoder* response_encoder;
EXPECT_CALL(server_callbacks_, newStream(_))

htuch (Member):

Suggest refactoring this encoder/decoder setup into the Http2CodecImplTest class; the boilerplate is now in quite a few test cases.
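
A sketch of the suggested refactor, with member names taken from the quoted snippet (the actual fixture may differ; this assumes the test file's usual using-declarations for testing::_ and testing::Invoke):

// Hoist the repeated encoder/decoder wiring into the fixture so each test can
// simply call initializeStreams() instead of repeating the EXPECT_CALL setup.
void Http2CodecImplTest::initializeStreams() {
  EXPECT_CALL(server_callbacks_, newStream(_))
      .WillOnce(Invoke([&](StreamEncoder& encoder) -> StreamDecoder& {
        response_encoder_ = &encoder;
        return request_decoder_;
      }));
}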


@@ -77,6 +77,7 @@ TEST_F(OutlierDetectorImplTest, DetectorStaticConfig) {
)EOF";

Json::ObjectPtr custom_config = Json::Factory::LoadFromString(json);
EXPECT_CALL(*interval_timer_, enableTimer(std::chrono::milliseconds(100)));

htuch (Member):

Is this a related change?

mattklein123 (Member, Author):

Just a missing EXPECT_CALL from a previous change. I added it here since it's innocuous. I can move it to a separate PR if you prefer.

htuch (Member):

Sure, sneak it in; I was just curious what was happening.

mattklein123 (Member, Author):

@lyft/network-team @htuch updated

mattklein123 merged commit 1d5035f into master on Mar 16, 2017.
mattklein123 deleted the http2_expect_continue branch on March 16, 2017 at 18:27.
Linked issue: Segmentation Fault - exit code 139 - http2/codec_impl.cc:290 (#561)