
Mitigate Host Header cache-poisoning attacks #1398

Merged 2 commits into master on Oct 13, 2017
Conversation

stevendanna (Contributor):
Previously, a request such as:

    curl -H Host:evil.com http://chef-server.dev

would create a response where the Location was set to the
user-provided Host header:

    Location: https://evil.com/

If this 301 were erroneously cached, it could then be served to another
user, whose traffic would be redirected to an attacker-controlled
server.

This PR changes the nginx configuration so that we only respond to
requests with Host headers that match the configured FQDNs for the
machine. Any requests for an unknown host will get a 404.

To ease the transition burden, we also add any IPs on the machine and
localhost to the list of known server names.

The main disadvantage is that we can't account for IPs that route
to the machine but aren't visible to Ohai.

This adds two new configuration options:

    delivery['nginx']['strict_host_header'] = true
    delivery['nginx']['use_implicit_hosts'] = true

The old behavior can be restored with:

    delivery['nginx']['strict_host_header'] = false
    delivery['nginx']['use_implicit_hosts'] = false

This is similar to the approach taken in chef/automate#1118 (apologies,
that repository is private). A much simpler solution would be to replace
$host with $server_name in the rewrite/return rule. The concern with this
approach in Automate was that users hitting the site with an IP
address would get redirected to a hostname, causing consternation.
This might not be as much of a concern here because we only use $host
for the HTTP->HTTPS rewrite, so the user could avoid such
consternation by using HTTPS from the outset.

Signed-off-by: Steven Danna <[email protected]>
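
As a rough sketch of the resulting nginx shape (server names, ports, and layout here are illustrative, not the exact output of the PR's templates):

```nginx
# Illustrative sketch only -- not the exact config this PR generates.
# With strict_host_header enabled, the main vhost answers only known names:
server {
  listen 80;
  # configured FQDN, plus (with use_implicit_hosts) local IPs and localhost
  server_name chef-server.dev 10.0.0.5 127.0.0.1 localhost;
  # $host (the client-supplied Host header) is now safe to echo,
  # because only whitelisted names reach this block
  return 301 https://$host$request_uri;
}

# Catch-all default server: any request with an unknown Host header
# lands here and gets a 404 instead of a cacheable redirect
server {
  listen 80 default_server;
  server_name _;
  return 404;
}
```

The key point is that the `default_server` block, not the redirect block, handles unrecognized Host headers, so a cache in front of nginx can never store a Location built from attacker-controlled input.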

@stevendanna (Contributor, Author):

Alternative: #1397

@markan (Contributor) left a comment:

This generally looks reasonable. I'm trying to understand if there are any gotchas around users of load balancers in front of chef server frontends that might make us want the defaults for strict_host_header to be false to prevent breakage.

If I understand that use case properly, someone using an LB must have set the server_name to that of the LB, so they won't be broken by requiring this, and we're fine making the default true.


server_name _;
return 404;
}
A Contributor commented:

Minor indentation glitches here.

@ksubrama left a comment:

I'm going to approve this instead of #1397 because I feel more comfortable preserving as much of the original Host header as possible, just to ease the concerns of any existing code blindly matching and parsing against the hostname it just sent a request to. I may also be misunderstanding this, but...

yolo

@stevendanna stevendanna merged commit f072bf3 into master Oct 13, 2017
@stevendanna stevendanna deleted the ssd/SUSTAIN-716-2 branch November 27, 2017 10:16
3 participants