Cache may crash if a dependency change before the lazylock object is filled #352
The caching strategy was inspired by this post: http://www.superstarcoders.com/blogs/posts/micro-caching-in-asp-net.aspx, but was refactored to allow for DI support and a cache provider pattern. Basically, the LazyLock is there to ensure that only 1 request makes it through to fetch the data when the cache expires, while minimizing the amount of time the lock is in place. It was put there with the assumption that the ISiteMapNodeProvider implementations would likely contain database access code (or perhaps something even slower, like a remote API call) and that client requests could potentially come in faster than the data can be retrieved, causing multiple data requests to occur. Read the article to get the full picture.

I agree that this is a bug. For the fix, it seems like it would be best to return default(TValue) when the value has not yet been initialized and the activator is null. That would prevent the exception from happening in this case.

```csharp
public sealed class LazyLock
{
    private volatile bool got;
    private object value;

    public TValue Get<TValue>(Func<TValue> activator)
    {
        if (!got)
        {
            // If the value was never loaded and there is no activator
            // (as happens during the item-removed callback), return the
            // default value instead of throwing.
            if (activator == null)
            {
                return default(TValue);
            }
            lock (this)
            {
                // Double-checked locking: only the first request runs the activator.
                if (!got)
                {
                    value = activator();
                    got = true;
                }
            }
        }
        return (TValue)value;
    }
}
```

Technically, it isn't good design practice to cache null objects anyway, because allowing them makes it impossible to check in a reasonable way whether the cache contains an item. So returning default(T) makes sense as an indication of an uninitialized state. We can then cut the ItemRemoved event short in the MicroCache, because nothing was actually removed except for the LazyLock, which doesn't really count as a cached item.

```csharp
private void cacheProvider_ItemRemoved(object sender, MicroCacheItemRemovedEventArgs<T> e)
{
    // Skip the event if the item is null, empty, or otherwise a default value,
    // since nothing was actually put in the cache, so nothing was removed
    if (!EqualityComparer<T>.Default.Equals(e.Item, default(T)))
    {
        // Cascade the event
        OnCacheItemRemoved(e);
    }
}
```

Cutting the event short could be done in the ICacheProvider implementations instead, but that forces more shared behavior into the providers, makes them more difficult for 3rd parties to implement, and means more code to maintain within the project. The ItemRemoved event is no longer used by MvcSiteMapProvider. The original intent was to provide a way to call Clear() on the SiteMap instance to remove circular references, but the theory that the CLR couldn't automatically clean them up turned out to be false; not to mention, calling Clear() before every request was finished with the SiteMap made it throw exceptions. This fix doesn't break the interfaces, seems to cover all of the bases, and could be rolled into a patch release. Thoughts?
Ok, I've tried your fix and it seems good to me as a starting point for resolving the cache issues I've experienced. But it seems that some problems still remain in other parts of the code.

For instance, I've worked around that problem by using an ICacheProvider implementation that stores the sitemap in a request cache before storing it in a more global cache provider like RuntimeCacheProvider. This pattern allows a consistent sitemap instance for the whole request. Here is the code for that request cache provider:
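As a hedged illustration of the pattern described above (the interface, class, and member names below are simplified assumptions for the sketch, not MvcSiteMapProvider's actual ICacheProvider API): a decorator pins the item in per-request storage, so every read within one HTTP request sees the same instance, while the inner provider handles cross-request caching.

```csharp
using System;
using System.Collections;
using System.Web;

// Simplified stand-in for a cache provider contract (hypothetical).
public interface ISimpleCacheProvider<T> where T : class
{
    T Get(string key);
    void Add(string key, T item);
    void Remove(string key);
}

// Decorator that stores the item in HttpContext.Current.Items (per-request
// storage) before delegating to an inner, longer-lived provider.
public class RequestCacheDecorator<T> : ISimpleCacheProvider<T>
    where T : class
{
    private readonly ISimpleCacheProvider<T> inner;

    public RequestCacheDecorator(ISimpleCacheProvider<T> inner)
    {
        if (inner == null) throw new ArgumentNullException("inner");
        this.inner = inner;
    }

    private static IDictionary RequestItems
    {
        get { return HttpContext.Current.Items; }
    }

    public T Get(string key)
    {
        // Serve from the request cache first for a consistent instance.
        var cached = RequestItems[key] as T;
        if (cached != null)
            return cached;

        var item = this.inner.Get(key);
        if (item != null)
            RequestItems[key] = item; // pin for the rest of the request
        return item;
    }

    public void Add(string key, T item)
    {
        RequestItems[key] = item;
        this.inner.Add(key, item);
    }

    public void Remove(string key)
    {
        RequestItems.Remove(key);
        this.inner.Remove(key);
    }
}
```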
The issue where the SiteMap could potentially be loaded multiple times per request was reported as #310, and I have already made the fix. However, it didn't meet the requirements for a patch, so it is waiting in the v4.7 branch in commit 88b1127fcf6993147af2ada4c89711e39815d59a. The SiteMapSpooler request-caches the SiteMap instance so it will remain until all requests are finished with it. However, I don't think that addresses the entire problem, because you can still have 2 SiteMap objects with the same key loaded at the same time. For that, we could potentially go back to using an instance ID (random Guid) that is added as part of the request cache key of SiteMap. I don't recall exactly when the instance id was removed from SiteMapNode, but it was replaced with the SiteMapCacheKey instead of a random Guid so the equality check would pass. That instance id scheme was removed to fix the first part of #310. This definitely doesn't make sense to change back until the SiteMapSpooler is put into place. But perhaps when it is, the fact that the SiteMapCacheKey is used as a fallback for the equality checking would be moot, and we could perhaps use an instance id on each SiteMap object that all of the request caching keys would automatically use. I think some more thought needs to go into exactly how to handle this.
Ok, I'll let you see how to handle all of this. If you can at least release your patch_352 branch, it will already allow me to remove some workaround code from my project.
FYI - This is now in release v4.6.15, which is up on NuGet.org.
Was this error fixed in any version of this library? |
This still seems to occur for dynamic pages in 4.6.26. Is there a way to just disable the cache? |
In AspNetCacheProvider and RuntimeCacheProvider, when an item is removed, the Get method of the LazyLock class is called with a null parameter.

This can crash if the Get method has never been called before on that LazyLock instance.

While it might seem reasonable to assume that the instance has been read at least once before it is deleted, in fact that's not always true.

In particular, this happens if one of the dependencies changes before the first call to that Get method. This first call seems to be performed by the MicroCache class, but nothing guarantees that no dependency changes between the add to the cache and the first call (in our system, with complex cache dependencies, I've experienced the problem multiple times...).
One solution that I tried, and that works, is to call lazy.Get(loadFunction); in the MicroCache class before adding the LazyLock instance to the cache provider.
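A minimal sketch of that workaround, assuming the LazyLock class shown earlier in this thread (the class and member names below are simplified stand-ins, not the real MicroCache internals; a Dictionary stands in for the cache provider): the value is materialized eagerly, before the wrapper is handed to the cache, so an item-removed callback can never observe an unfilled LazyLock.

```csharp
using System;
using System.Collections.Generic;

// Hypothetical, simplified MicroCache-like class illustrating the eager fill.
public class EagerMicroCache<T>
{
    // Stands in for the real ICacheProvider.
    private readonly Dictionary<string, LazyLock> store =
        new Dictionary<string, LazyLock>();

    public T GetOrAdd(string key, Func<T> loadFunction)
    {
        LazyLock lazy;
        lock (this.store)
        {
            if (!this.store.TryGetValue(key, out lazy))
            {
                lazy = new LazyLock();
                // Eager call: fills the value *before* the add, so a later
                // lazy.Get<T>(null) during an item-removed event returns the
                // cached value instead of crashing on an unfilled instance.
                lazy.Get(loadFunction);
                this.store.Add(key, lazy);
            }
        }
        return lazy.Get(loadFunction);
    }
}
```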
Once you do that, using a LazyLock wrapper around the stored object seems useless... However, it's also difficult to completely remove this object from the MvcSiteMap code without making a breaking change to the API as it is exposed by the ICacheProvider interface.

Another solution could be to modify the LazyLock implementation so that it doesn't crash when the activator parameter is null (by setting the value to default(TValue)?), but this doesn't seem to work completely, as I end up with a null when calling SiteMaps.Current later in my application...
Or, once again, choose a solution that breaks your public API: abandon your LazyLock class and replace it with System.Lazy, which supports thread-safe locking and takes the activator method in the constructor, which avoids being unable to retrieve the value during the removed-from-cache events... But I'm not sure that calling the load function provided to the MicroCache class during the cache provider's item-removed event has any real utility. In fact, as I was saying for the first solution, I'm not sure that using a lazy wrapper has any real utility...
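To illustrate the System.Lazy<T> alternative mentioned above: because the factory is captured in the constructor, code that only holds the cached wrapper (such as an item-removed event handler) can still read the value without needing the load function again. The factory delegate here is a placeholder for the real load function.

```csharp
using System;
using System.Threading;

public class LazyExample
{
    public static void Main()
    {
        var lazy = new Lazy<string>(
            () => "loaded", // stands in for the expensive load function
            LazyThreadSafetyMode.ExecutionAndPublication); // only one thread runs the factory

        // First access runs the factory under a lock; later accesses are lock-free
        // and no delegate needs to be passed at read time.
        Console.WriteLine(lazy.IsValueCreated); // False
        Console.WriteLine(lazy.Value);          // loaded
        Console.WriteLine(lazy.IsValueCreated); // True
    }
}
```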
Whatever solution you choose, I'm quite interested, so that I can remove our dirty workaround code...