Sync 1.24 sec #292

Closed
wants to merge 6 commits
7 changes: 0 additions & 7 deletions .azure-pipelines/pipelines.yml
@@ -415,13 +415,6 @@ stages:
GCS_ARTIFACT_BUCKET: $(GcsArtifactBucket)
condition: eq(variables['isMain'], 'true')

- task: InstallSSHKey@0
inputs:
hostName: "github.com ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCj7ndNxQowgcQnjshcLrqPEiiphnt+VTTvDP6mHBL9j1aNUkY4Ue1gvwnGLVlOhGeYrnZaMgRK6+PKCUXaDbC7qtbW8gIkhL7aGCsOr/C56SJMy/BCZfxd1nWzAOxSDPgVsmerOBYfNqltV9/hWCqBywINIR+5dIg6JTJ72pcEpEjcYgXkE2YEFXV1JHnsKgbLWNlhScqb2UmyRkQyytRLtL+38TGxkxCflmO+5Z8CSSNY7GidjMIZ7Q4zMjA2n1nGrlTDkzwDCsw+wqFPGQA179cnfGWOWRVruj16z6XyvxvjJwbz0wQZ75XK5tKSb7FNyeIEs4TT4jk+S4dhPeAUC5y+bDYirYgM4GC7uEnztnZyaVWQ7B381AK4Qdrwt51ZqExKbQpTUNn+EjqoTwvqNj4kqx5QUCI0ThS/YkOxJCXmPUWZbhjpCg56i+2aB6CmK2JGhn57K5mj0MNdBXA4/WnwH6XoPWJzK5Nyu2zB3nAZp+S5hpQs+p1vN1/wsjk="
sshPublicKey: "$(DocsPublicKey)"
sshPassphrase: "$(SshDeployKeyPassphrase)"
sshKeySecureFile: "$(DocsPrivateKey)"

- script: docs/publish.sh
displayName: "Publish to GitHub"
workingDirectory: $(Build.SourcesDirectory)
2 changes: 1 addition & 1 deletion VERSION.txt
@@ -1 +1 @@
1.24.11-dev
1.24.11
33 changes: 16 additions & 17 deletions changelogs/current.yaml
@@ -1,20 +1,19 @@
date: Pending
date: October 10, 2023

behavior_changes:
# *Changes that are expected to cause an incompatibility if applicable; deployment changes are likely required*
- area: http
  change: |
    Close HTTP/2 and HTTP/3 connections that prematurely reset streams. The runtime key
    ``overload.premature_reset_min_stream_lifetime_seconds`` determines the interval within which a received stream
    reset is considered premature (1 second by default). The runtime key ``overload.premature_reset_total_stream_count``,
    with a default value of 500, determines the number of requests received from a connection before the check for
    premature resets is applied. The connection is disconnected if more than 50% of resets are premature.
    Setting the runtime key ``envoy.restart_features.send_goaway_for_premature_rst_streams`` to ``false`` completely
    disables this check.

minor_behavior_changes:
# *Changes that may cause incompatibilities for some users, but should not for most*

bug_fixes:
# *Changes expected to improve the state of the world and are unlikely to have negative effects*
- area: oauth2
  change: |
    Fixed a bug where, when the passthrough header was matched, Envoy would always remove the authorization header.
    This behavioral change can be temporarily reverted by setting the runtime guard
    ``envoy.reloadable_features.oauth_header_passthrough_fix`` to false.

removed_config_or_runtime:
# *Normally occurs at the end of the* :ref:`deprecation period <deprecated>`

new_features:
- area: http
  change: |
    Added runtime flag ``http.max_requests_per_io_cycle`` for setting a limit on the number of HTTP requests processed
    from a single connection in a single I/O cycle. Requests over this limit are processed in subsequent I/O cycles.
    This mitigates CPU starvation by connections that simultaneously send a high number of requests, by allowing
    requests from other connections to make progress. This runtime value can be set to 1 in the presence of abusive
    HTTP/2 or HTTP/3 connections. By default this limit is disabled.

deprecated:
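Both of the changes above are driven entirely by runtime keys, so operators can tune them without rebuilding Envoy. As a minimal sketch (not part of this PR), assuming the standard ``layered_runtime`` bootstrap field, the thresholds could be overridden from a static runtime layer; the layer name and the numeric values below are illustrative only:

layered_runtime:
  layers:
  - name: static_layer_0
    static_layer:
      # Treat resets of streams younger than 2 seconds as premature, and start
      # checking once 200 streams have been closed on a connection.
      overload.premature_reset_min_stream_lifetime_seconds: 2
      overload.premature_reset_total_stream_count: 200
      # Keep the premature-reset GOAWAY check enabled (set to false to disable it).
      envoy.restart_features.send_goaway_for_premature_rst_streams: true
      # Allow only one request per connection per I/O cycle when under attack.
      http.max_requests_per_io_cycle: 1

Note that ``envoy.restart_features.*`` flags are only read at startup, so toggling the GOAWAY check requires a restart, whereas the ``overload.*`` and ``http.*`` keys are ordinary runtime values.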
Binary file modified docs/inventories/v1.24/objects.inv
Binary file not shown.
36 changes: 0 additions & 36 deletions docs/publish.sh
@@ -11,13 +11,10 @@

set -e

DOCS_DIR=generated/docs
CHECKOUT_DIR=envoy-docs
BUILD_SHA=$(git rev-parse HEAD)

VERSION="$(cat VERSION.txt)"
MAIN_BRANCH="refs/heads/main"
RELEASE_BRANCH_REGEX="^refs/heads/release/v.*"
DEV_VERSION_REGEX="-dev$"

if [[ "$VERSION" =~ $DEV_VERSION_REGEX ]]; then
Expand All @@ -29,37 +26,4 @@ if [[ "$VERSION" =~ $DEV_VERSION_REGEX ]]; then
else
echo "Ignoring docs push"
fi
exit 0
else
PUBLISH_DIR="${CHECKOUT_DIR}/docs/envoy/v${VERSION}"
fi

if [[ "${AZP_BRANCH}" != "${MAIN_BRANCH}" ]] && ! [[ "${AZP_BRANCH}" =~ ${RELEASE_BRANCH_REGEX} ]]; then
# Most likely a tag, do nothing.
echo 'Ignoring non-release branch for docs push.'
exit 0
fi

DOCS_MAIN_BRANCH="main"

echo 'cloning'
git clone [email protected]:envoyproxy/envoy-website "${CHECKOUT_DIR}" -b "${DOCS_MAIN_BRANCH}" --depth 1

if [[ -e "$PUBLISH_DIR" ]]; then
# Defense against the unexpected.
echo 'Docs version already exists, not continuing!.'
exit 1
fi

mkdir -p "$PUBLISH_DIR"
cp -r "$DOCS_DIR"/* "$PUBLISH_DIR"
cd "${CHECKOUT_DIR}"

git config user.name "envoy-docs(Azure Pipelines)"
git config user.email [email protected]

set -x

git add .
git commit -m "docs envoy@$BUILD_SHA"
git push origin "${DOCS_MAIN_BRANCH}"
2 changes: 1 addition & 1 deletion docs/versions.yaml
@@ -17,4 +17,4 @@
"1.21": 1.21.6
"1.22": 1.22.11
"1.23": 1.23.12
"1.24": 1.24.9
"1.24": 1.24.10
1 change: 1 addition & 0 deletions source/common/http/conn_manager_config.h
@@ -63,6 +63,7 @@ namespace Http {
COUNTER(downstream_rq_rejected_via_ip_detection) \
COUNTER(downstream_rq_response_before_rq_complete) \
COUNTER(downstream_rq_rx_reset) \
COUNTER(downstream_rq_too_many_premature_resets) \
COUNTER(downstream_rq_timeout) \
COUNTER(downstream_rq_header_timeout) \
COUNTER(downstream_rq_too_large) \
157 changes: 152 additions & 5 deletions source/common/http/conn_manager_impl.cc
@@ -1,5 +1,6 @@
#include "source/common/http/conn_manager_impl.h"

#include <chrono>
#include <cstdint>
#include <functional>
#include <list>
@@ -54,6 +55,15 @@
namespace Envoy {
namespace Http {

const absl::string_view ConnectionManagerImpl::PrematureResetTotalStreamCountKey =
"overload.premature_reset_total_stream_count";
const absl::string_view ConnectionManagerImpl::PrematureResetMinStreamLifetimeSecondsKey =
"overload.premature_reset_min_stream_lifetime_seconds";
// Runtime key for maximum number of requests that can be processed from a single connection per
// I/O cycle. Requests over this limit are deferred until the next I/O cycle.
const absl::string_view ConnectionManagerImpl::MaxRequestsPerIoCycle =
"http.max_requests_per_io_cycle";

bool requestWasConnect(const RequestHeaderMapPtr& headers, Protocol protocol) {
if (!headers) {
return false;
@@ -106,7 +116,9 @@ ConnectionManagerImpl::ConnectionManagerImpl(ConnectionManagerConfig& config,
time_source_(time_source), proxy_name_(StreamInfo::ProxyStatusUtils::makeProxyName(
/*node_id=*/local_info_.node().id(),
/*server_name=*/config_.serverName(),
/*proxy_status_config=*/config_.proxyStatusConfig())) {}
/*proxy_status_config=*/config_.proxyStatusConfig())),
max_requests_during_dispatch_(runtime_.snapshot().getInteger(
ConnectionManagerImpl::MaxRequestsPerIoCycle, UINT32_MAX)) {}

const ResponseHeaderMap& ConnectionManagerImpl::continueHeader() {
static const auto headers = createHeaderMap<ResponseHeaderMapImpl>(
@@ -116,6 +128,12 @@ const ResponseHeaderMap& ConnectionManagerImpl::continueHeader() {

void ConnectionManagerImpl::initializeReadFilterCallbacks(Network::ReadFilterCallbacks& callbacks) {
read_callbacks_ = &callbacks;
if (max_requests_during_dispatch_ != UINT32_MAX) {
deferred_request_processing_callback_ =
callbacks.connection().dispatcher().createSchedulableCallback(
[this]() -> void { onDeferredRequestProcessing(); });
}

stats_.named_.downstream_cx_total_.inc();
stats_.named_.downstream_cx_active_.inc();
if (read_callbacks_->connection().ssl()) {
@@ -262,6 +280,10 @@ void ConnectionManagerImpl::doEndStream(ActiveStream& stream, bool check_for_def
}

void ConnectionManagerImpl::doDeferredStreamDestroy(ActiveStream& stream) {
++closed_non_internally_destroyed_requests_;
if (isPrematureRstStream(stream)) {
++number_premature_stream_resets_;
}
if (stream.max_stream_duration_timer_) {
stream.max_stream_duration_timer_->disableTimer();
stream.max_stream_duration_timer_ = nullptr;
@@ -294,6 +316,7 @@ void ConnectionManagerImpl::doDeferredStreamDestroy(ActiveStream& stream) {
if (connection_idle_timer_ && streams_.empty()) {
connection_idle_timer_->enableTimer(config_.idleTimeout().value());
}
maybeDrainDueToPrematureResets();
}

RequestDecoder& ConnectionManagerImpl::newStream(ResponseEncoder& response_encoder,
@@ -380,6 +403,7 @@ void ConnectionManagerImpl::createCodec(Buffer::Instance& data) {
}

Network::FilterStatus ConnectionManagerImpl::onData(Buffer::Instance& data, bool) {
requests_during_dispatch_count_ = 0;
if (!codec_) {
// Http3 codec should have been instantiated by now.
createCodec(data);
@@ -534,6 +558,59 @@ void ConnectionManagerImpl::doConnectionClose(
}
}

bool ConnectionManagerImpl::isPrematureRstStream(const ActiveStream& stream) const {
// Check if the request was prematurely reset, by comparing its lifetime to the configured
// threshold.
MonotonicTime current_time = time_source_.monotonicTime();
MonotonicTime request_start_time = stream.filter_manager_.streamInfo().startTimeMonotonic();
std::chrono::nanoseconds duration =
std::chrono::duration_cast<std::chrono::nanoseconds>(current_time - request_start_time);

// Check if request lifetime is longer than the premature reset threshold.
if (duration.count() > 0) {
const uint64_t lifetime = std::chrono::duration_cast<std::chrono::seconds>(duration).count();
const uint64_t min_lifetime = runtime_.snapshot().getInteger(
ConnectionManagerImpl::PrematureResetMinStreamLifetimeSecondsKey, 1);
if (lifetime > min_lifetime) {
return false;
}
}

// If request has completed before configured threshold, also check if the Envoy proxied the
// response from the upstream. Requests without the response status were reset.
// TODO(RyanTheOptimist): Possibly support half_closed_local instead.
return !stream.filter_manager_.streamInfo().responseCode();
}

// Sends a GOAWAY if too many streams have been reset prematurely on this
// connection.
void ConnectionManagerImpl::maybeDrainDueToPrematureResets() {
if (!Runtime::runtimeFeatureEnabled(
"envoy.restart_features.send_goaway_for_premature_rst_streams") ||
closed_non_internally_destroyed_requests_ == 0) {
return;
}

const uint64_t limit =
runtime_.snapshot().getInteger(ConnectionManagerImpl::PrematureResetTotalStreamCountKey, 500);

if (closed_non_internally_destroyed_requests_ < limit) {
return;
}

if (static_cast<double>(number_premature_stream_resets_) /
closed_non_internally_destroyed_requests_ <
.5) {
return;
}

if (drain_state_ == DrainState::NotDraining) {
stats_.named_.downstream_rq_too_many_premature_resets_.inc();
doConnectionClose(Network::ConnectionCloseType::NoFlush, absl::nullopt,
"too_many_premature_resets");
}
}

void ConnectionManagerImpl::onGoAway(GoAwayErrorCode) {
// Currently we do nothing with remote go away frames. In the future we can decide to no longer
// push resources if applicable.
@@ -1151,7 +1228,12 @@ void ConnectionManagerImpl::ActiveStream::decodeHeaders(RequestHeaderMapPtr&& he
traceRequest();
}

filter_manager_.decodeHeaders(*request_headers_, end_stream);
if (!connection_manager_.shouldDeferRequestProxyingToNextIoCycle()) {
filter_manager_.decodeHeaders(*request_headers_, end_stream);
} else {
state_.deferred_to_next_io_iteration_ = true;
state_.deferred_end_stream_ = end_stream;
}

// Reset it here for both global and overridden cases.
resetIdleTimer();
@@ -1219,8 +1301,15 @@ void ConnectionManagerImpl::ActiveStream::decodeData(Buffer::Instance& data, boo
connection_manager_.read_callbacks_->connection().dispatcher());
maybeEndDecode(end_stream);
filter_manager_.streamInfo().addBytesReceived(data.length());

filter_manager_.decodeData(data, end_stream);
if (!state_.deferred_to_next_io_iteration_) {
filter_manager_.decodeData(data, end_stream);
} else {
if (!deferred_data_) {
deferred_data_ = std::make_unique<Buffer::OwnedImpl>();
}
deferred_data_->move(data);
state_.deferred_end_stream_ = end_stream;
}
}

void ConnectionManagerImpl::ActiveStream::decodeTrailers(RequestTrailerMapPtr&& trailers) {
@@ -1231,7 +1320,9 @@ void ConnectionManagerImpl::ActiveStream::decodeTrailers(RequestTrailerMapPtr&&
ASSERT(!request_trailers_);
request_trailers_ = std::move(trailers);
maybeEndDecode(true);
filter_manager_.decodeTrailers(*request_trailers_);
if (!state_.deferred_to_next_io_iteration_) {
filter_manager_.decodeTrailers(*request_trailers_);
}
}

void ConnectionManagerImpl::ActiveStream::decodeMetadata(MetadataMapPtr&& metadata_map) {
@@ -1831,5 +1922,61 @@ void ConnectionManagerImpl::ActiveStream::resetStream(Http::StreamResetReason, a
connection_manager_.doEndStream(*this);
}

bool ConnectionManagerImpl::ActiveStream::onDeferredRequestProcessing() {
// TODO(yanavlasov): Merge this with the filter manager continueIteration() method
if (!state_.deferred_to_next_io_iteration_) {
return false;
}
state_.deferred_to_next_io_iteration_ = false;
bool end_stream =
state_.deferred_end_stream_ && deferred_data_ == nullptr && request_trailers_ == nullptr;
filter_manager_.decodeHeaders(*request_headers_, end_stream);
if (end_stream) {
return true;
}
if (deferred_data_ != nullptr) {
end_stream = state_.deferred_end_stream_ && request_trailers_ == nullptr;
filter_manager_.decodeData(*deferred_data_, end_stream);
}
if (request_trailers_ != nullptr) {
filter_manager_.decodeTrailers(*request_trailers_);
}
return true;
}

bool ConnectionManagerImpl::shouldDeferRequestProxyingToNextIoCycle() {
// Do not defer this stream if stream deferral is disabled
if (deferred_request_processing_callback_ == nullptr) {
return false;
}
// Defer this stream if there are already deferred streams, so they are not
// processed out of order
if (deferred_request_processing_callback_->enabled()) {
return true;
}
++requests_during_dispatch_count_;
bool defer = requests_during_dispatch_count_ > max_requests_during_dispatch_;
if (defer) {
deferred_request_processing_callback_->scheduleCallbackNextIteration();
}
return defer;
}

void ConnectionManagerImpl::onDeferredRequestProcessing() {
requests_during_dispatch_count_ = 1; // 1 stream is always let through
// Streams are inserted at the head of the list. As such process deferred
// streams at the back of the list first.
for (auto reverse_iter = streams_.rbegin(); reverse_iter != streams_.rend();) {
auto& stream_ptr = *reverse_iter;
// Move the iterator to the next item in case the `onDeferredRequestProcessing` call removes the
// stream from the list.
++reverse_iter;
bool was_deferred = stream_ptr->onDeferredRequestProcessing();
if (was_deferred && shouldDeferRequestProxyingToNextIoCycle()) {
break;
}
}
}

} // namespace Http
} // namespace Envoy