Use config for cache times instead of hardcoding #271

Merged 3 commits into elastic:master from cache-configs on Mar 18, 2020

Conversation

@ruflin (Contributor) commented on Mar 17, 2020

Cache times are now also specified as time.Duration.
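A minimal sketch of what such a config might look like, assuming flag-based settings. The struct fields, flag names, and defaults are illustrative rather than the actual package-registry configuration; the 10-minute default mirrors the value mentioned in the review below.

```go
package main

import (
	"flag"
	"fmt"
	"time"
)

// config carries the cache times as time.Duration values instead of
// hardcoded constants at the call sites. Names here are illustrative.
type config struct {
	CacheTimeSearch time.Duration
	CacheTimeIndex  time.Duration
}

func main() {
	var cfg config
	// flag.DurationVar parses values such as "10m" or "30s" directly
	// into a time.Duration, so no manual unit conversion is needed.
	flag.DurationVar(&cfg.CacheTimeSearch, "cache-time-search", 10*time.Minute, "cache time for the /search endpoint")
	flag.DurationVar(&cfg.CacheTimeIndex, "cache-time-index", 10*time.Minute, "cache time for the index endpoint")
	flag.Parse()

	fmt.Println("search cache time:", cfg.CacheTimeSearch)
}
```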
@ruflin self-assigned this on Mar 17, 2020
@ruflin requested review from @mtojek and @1stvamp, then removed the request for @1stvamp, on Mar 18, 2020 at 08:16
@mtojek (Contributor) left a comment

Not blocking at the moment, but I had to look deeper into the code to see the whole flow. I think there might be a bug around an error path.
Consider the following scenario: the user hits the /search endpoint, but the service is temporarily down (a couple of seconds). If the browser caches the error response, it will keep serving it for 10 minutes, even after the service is back to normal. I believe cache headers should be applied on the happy path only.
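A minimal sketch of the fix the review suggests: set Cache-Control only after the lookup succeeds, so an error response is never cached for the full window. The searchHandler, the search helper, and the cacheTime parameter are hypothetical names, not the package-registry code.

```go
package main

import (
	"errors"
	"fmt"
	"net/http"
	"time"
)

// search is a hypothetical stand-in for the registry's search logic.
func search(query string) ([]byte, error) {
	if query == "" {
		return nil, errors.New("search backend unavailable")
	}
	return []byte(`{"results": []}`), nil
}

// searchHandler sets Cache-Control only on the happy path, so a
// transient outage is never cached by the browser.
func searchHandler(cacheTime time.Duration) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		data, err := search(r.URL.RawQuery)
		if err != nil {
			// Error path: no cache headers are set, so the browser
			// retries instead of serving a cached error for cacheTime.
			http.Error(w, err.Error(), http.StatusInternalServerError)
			return
		}
		// Happy path only: let the client cache the response.
		w.Header().Set("Cache-Control", fmt.Sprintf("max-age=%.0f, public", cacheTime.Seconds()))
		w.Write(data)
	}
}

func main() {
	http.Handle("/search", searchHandler(10*time.Minute))
	http.ListenAndServe(":8080", nil)
}
```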

@ruflin merged commit 5ea6192 into elastic:master on Mar 18, 2020
@ruflin deleted the cache-configs branch on Mar 18, 2020 at 08:50
ruflin added a commit to ruflin/package-registry that referenced this pull request on Mar 18, 2020:
Based on elastic#271 (review), in the error case we should not send any cache headers for the error pages.
ruflin added a commit that referenced this pull request on Mar 18, 2020:
Based on #271 (review), in the error case we should not send any cache headers for the error pages.
2 participants