CachedConfigCollection is written to disk on every page load #34

Just something I spotted while doing some profiling. Not a major problem, but AFAICT it's an unnecessary filesystem operation if nothing in the cache has actually changed:
https://github.com/silverstripe/silverstripe-config/blob/1.0.10/src/Collections/CachedConfigCollection.php#L150-L162

Comments
For users who have OPCache enabled but don't have APCu, this causes wasted memory within OPCache, which eventually leads to either full memory or OPCache resets to free up the wasted memory.

As OPCache is not able to defragment or reuse its invalidated cache items, the only way it can reclaim that space is by resetting itself. As a result, the memory consumption of OPCache constantly increases with every request to the application until it consumes all memory made available to it. If OPCache is configured correctly, it will automatically reset to free up this 'wasted memory' (which by then has consumed the majority of memory) and has to reload the cached items again. If this happens during a high-load event, OPCache may not be able to cache many of these items, resulting in a cache stampede. I'd recommend either removing this write entirely, or only updating the cached information if it has actually changed. (If this only happens during a flush, then we can maybe just remove it entirely?)
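For anyone who wants to confirm this on their own servers, OPCache exposes its wasted-memory counters via opcache_get_status(). A minimal monitoring sketch (not from this issue; just an illustration of where to look):

```php
<?php
// Inspect OPcache memory usage to watch invalidation-driven waste build up.
// opcache_get_status() is provided by the Zend OPcache extension.
$status = opcache_get_status(false); // false: skip the per-script list
if ($status === false) {
    exit("OPcache is not enabled\n");
}

$mem = $status['memory_usage'];
printf(
    "used: %.1f MB, free: %.1f MB, wasted: %.1f MB (%.1f%%)\n",
    $mem['used_memory'] / 1048576,
    $mem['free_memory'] / 1048576,
    $mem['wasted_memory'] / 1048576,
    $mem['current_wasted_percentage']
);

// Once wasted memory crosses opcache.max_wasted_percentage, the next
// request that needs to compile a script triggers a full OPcache reset.
```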
Just found this as well... a ~20ms delay. With APCu it is also an issue, as it causes 100% fragmentation after a while (depending on how many requests you cater for, of course). Once you reach 100%, your cache can no longer write.
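The fragmentation figure that APCu dashboards report can be derived from apcu_sma_info(). A rough sketch of the calculation the stock apc.php script uses (the 5 MB "small block" cutoff is that script's convention, not anything defined by this issue):

```php
<?php
// Estimate APCu shared-memory fragmentation from its free-block lists.
if (!function_exists('apcu_sma_info')) {
    exit("APCu is not enabled\n");
}

$sma = apcu_sma_info();
$smallBlocks = 0; // bytes held in "small" free blocks
$freeTotal   = 0; // total free bytes
$freeBlocks  = 0;

for ($i = 0; $i < $sma['num_seg']; $i++) {
    foreach ($sma['block_lists'][$i] as $block) {
        // Free blocks under 5 MB count towards fragmentation
        if ($block['size'] < 5 * 1024 * 1024) {
            $smallBlocks += $block['size'];
        }
        $freeTotal += $block['size'];
        $freeBlocks++;
    }
}

$fragmentation = $freeBlocks > 1 ? $smallBlocks / $freeTotal : 0;
printf("APCu fragmentation: %.1f%%\n", $fragmentation * 100);
```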
I’ve been playing around with some XHProf benchmarking around this today. The existing write-on-destruct behaviour allows the cache to store computed values (…)
Maybe we can do something like this; the only price to pay is storing a copy of the serialized config. That seems a much lower price than triggering a save every time, and when we have multiple requests incoming (ajax, etc.) it may make quite a difference.

```php
public function __destruct()
{
    // Ensure back-end cache is updated, but only when the config changed
    if ($this->collection && $this->cachedCollection) {
        if (serialize($this->collection) != $this->cachedCollection) {
            $this->cache->set(self::CACHE_KEY, $this->collection);
        }
        // Prevent double-destruct
        $this->collection = null;
    }
}
```
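For that comparison to work, the serialized snapshot has to be captured when the collection is first loaded. A minimal sketch of the read side, assuming a $cachedCollection property alongside the existing $collection one (the names here are illustrative, not the library's actual API):

```php
protected function getCollection()
{
    if ($this->collection) {
        return $this->collection;
    }
    $this->collection = $this->cache->get(self::CACHE_KEY);
    // Snapshot the freshly loaded state; __destruct() compares against
    // this to decide whether a write-back is needed at all.
    $this->cachedCollection = $this->collection ? serialize($this->collection) : null;
    return $this->collection;
}
```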
Seems fine to me; we have a patch doing almost the same in use for several months in several projects. Two small differences: […]
A hash seems like a better way. Did you just md5 the whole serialized object? I'm thinking we can probably even just json_encode the whole thing, since it's only storing "simple" values that can be expressed in YAML.
Yeah, we just did it this way: md5(serialize($this->collection));

Edit: Good point, json_encode might be an alternative here. From my limited testing it is about 10^4 times faster. Although both are quite fast: for about 610K of stored config on disk, md5(serialize(...)) needs 1-2ms.
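Put together, the hash-based variant of the earlier destructor sketch keeps only a 32-character digest in memory instead of the full serialized string. Again a sketch; the $cachedCollectionHash property name is an assumption, and json_encode() could be swapped in for serialize() as discussed above:

```php
public function __destruct()
{
    // Only hit the filesystem when the config actually changed
    if ($this->collection && $this->cachedCollectionHash) {
        if (md5(serialize($this->collection)) !== $this->cachedCollectionHash) {
            $this->cache->set(self::CACHE_KEY, $this->collection);
        }
        // Prevent double-destruct
        $this->collection = null;
    }
}
```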
I've updated the PR.
We’re running sites on servers with APCu enabled and are experiencing the same: cache fragmentation starts out low, but as requests ramp up it trends toward 100%. Once it reaches 100%, it’s no longer possible to write to the cache: krakjoe/apcu#369 (comment)

It’s probably exacerbated by this issue, but I wonder if we should stop defaulting to APCu? It seems it doesn’t cater well to a large number of small cache entries, which is what caches in Silverstripe are primarily used for.
Good thing the solution is almost there then :-)
The linked PR has been merged; it will be released as part of 2.1.0.