
s3: use pooled transport for http client #2481

Merged: 1 commit from s3-transport into master on Mar 29, 2017
Conversation

@vishalnayak (Member)

No description provided.

@jefferai (Member)

I'm a little hesitant to merge this right before a release if it doesn't solve the original problem that was reported; on the other hand, we have seen issues with the AWS SDK in the past, and I can't think of a reason this change would cause a problem. Maybe let's wait to hear back?

@vishalnayak (Member, Author)

@jefferai That sounds reasonable. There is absolutely no rush on this.

@briankassouf (Contributor) commented on Mar 14, 2017

@vishalnayak If we decide to make this change, the MaxIdleConnsPerHost value will need to be raised. It should match how we do it in the consul backend: https://github.com/hashicorp/vault/blob/master/physical/consul.go#L164

@vishalnayak (Member, Author)

@briankassouf Ah, yes! I'll do that. We need the parallel loading of leases to work properly :-)

@deverton (Contributor)

Just curious if this is a perf issue with parallel loads from S3? We've noticed some very slow starts (~50 minutes) with S3 as a storage backend and had started to wonder if there was something broken with the S3 client. #2466 was the start of trying to normalise the S3 backend with other backends for example.

@jefferai added this to the 0.7.1 milestone on Mar 19, 2017
@jefferai (Member)

Nope -- this doesn't have anything to do with performance. It only ensures the client uses an isolated Go HTTP client, for the reasons described in the readme at https://github.com/hashicorp/go-cleanhttp .

Long startup times are usually due to very high numbers of leases. In 0.6.5+ you can see in the log (might need debug or trace levels, I forget) how many total leases need to be loaded, and it will also display some running progress updates. If your data storage and/or Vault are on AWS, this can be exacerbated by running out of provisioned IOPS even if the storage layer itself is capable of providing leases much more quickly.

If you're having long startup periods due to high numbers of leases, please post on the mailing list and we'll help you figure out how to avoid generating such high numbers. If that's not what's going on, please file a separate issue about it and we can investigate.

@jefferai (Member)

Quick ping to @vishalnayak: let's merge this once the max idle conns value is updated.

@briankassouf (Contributor) left a review comment:

Looks great!

@vishalnayak merged commit 158cce6 into master on Mar 29, 2017
@vishalnayak deleted the s3-transport branch on March 29, 2017 at 17:27