[MemoryCache] Add possibility to disable linked cache entries #45592
Comments
Tagging subscribers to this area: @eerhardt, @maryamariyan

Issue Details

Background and Motivation

MemoryCache by default tracks linked cache entries to allow for propagating options. In the following example, the expirationToken added for child is also applied to parent:

using (var parent = cache.CreateEntry(key))
{
    parent.SetValue(obj);

    using (var child = cache.CreateEntry(key1))
    {
        child.SetValue(obj);
        child.AddExpirationToken(expirationToken);
    }
}

Tracking internally uses AsyncLocal<T>, which is expensive and adds non-trivial overhead. The cost of using it has shown up in the performance reports that we have received from our customers (#45436). Currently, this feature cannot be disabled.

Proposed API

As suggested by @Tratcher in #45436 (comment), we can extend the existing MemoryCacheOptions with a flag that disables this behaviour. Naming is hard, and the best idea I currently have is TrackLinkedCacheEntries:

namespace Microsoft.Extensions.Caching.Memory
{
    public class MemoryCacheOptions
    {
+       public bool TrackLinkedCacheEntries { get; init; } = true;
    }
}

I am open to suggestions.

Usage Examples

var options = new MemoryCacheOptions
{
    TrackLinkedCacheEntries = false
};
var cache = new MemoryCache(options);

Alternative Designs

As an alternative, we could add a new method to MemoryCache that would allow for creating cache entries without tracking:

namespace Microsoft.Extensions.Caching.Memory
{
    public class MemoryCache
    {
        public ICacheEntry CreateEntry(object key); // existing method
+       public ICacheEntry CreateUnlinkedEntry(object key);
    }
}

But there are a lot of existing extension methods that allow for adding new cache entries to the cache, and we would most probably need to add new overloads for them...

Risks

Introducing this API has no risks as long as we don't change the default settings (enabled by default).

cc @eerhardt @maryamariyan @davidfowl
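To make the behavioural difference concrete, here is a minimal sketch (assuming the flag ships as proposed above; the keys, values, and token are placeholders): with TrackLinkedCacheEntries set to false, a change token added to a nested entry no longer propagates to the outer entry.

using System;
using System.Threading;
using Microsoft.Extensions.Caching.Memory;
using Microsoft.Extensions.Primitives;

var cts = new CancellationTokenSource();

// Placeholder setup: linked-entry tracking disabled via the proposed option.
var cache = new MemoryCache(new MemoryCacheOptions { TrackLinkedCacheEntries = false });

using (var parent = cache.CreateEntry("parent"))
{
    parent.SetValue("outer value");

    using (var child = cache.CreateEntry("child"))
    {
        child.SetValue("inner value");
        child.AddExpirationToken(new CancellationChangeToken(cts.Token));
    }
}

cts.Cancel();

// With tracking enabled (the proposal's default), the token added to "child" would
// also expire "parent". With tracking disabled, only "child" is evicted.
Console.WriteLine(cache.TryGetValue("parent", out _)); // True
Console.WriteLine(cache.TryGetValue("child", out _));  // False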
Note @davidfowl asked for this to be disabled by default, allowing the few people who actually need the feature to opt in with the new setting. Not sure if that meets the compat bar.
For what it's worth, at scale we found a lot of the weight of our cache (somewhere in the 30-40% range) was overhead of the CacheEntry itself:

// static
private static readonly Action<object> ExpirationCallback = ExpirationTokensExpired;
// per entry
private readonly object _lock = new object();
private readonly MemoryCache _cache;
private IList<IDisposable> _expirationTokenRegistrations;
private IList<PostEvictionCallbackRegistration> _postEvictionCallbacks;
private IList<IChangeToken> _expirationTokens;
private TimeSpan? _absoluteExpirationRelativeToNow;
private TimeSpan? _slidingExpiration;
private long? _size;
private CacheEntry _previous;
private object _value;
private int _state;
public DateTimeOffset? AbsoluteExpiration { get; set; }

In our slimmed-down version, removing a lot of the callback weight we weren't using anywhere (in the vein of this issue), we narrowed it down to:

// static
private static long s_currentDateIshTicks = DateTime.UtcNow.Ticks;
private static readonly Timer ExpirationTimeUpdater = new Timer(state => s_currentDateIshTicks = DateTime.UtcNow.Ticks, null, 1000, 1000);
// per entry
private long _absoluteExpirationTicks;
private int _accessCount;
private readonly uint _slidingSeconds;
public object Value { get; }

Note: we actually added a bit more on top of this for our own scenarios. When you're storing a lot of things like integers or bools or any small data type, the weight of the reference overhead is quite large. What is the thinking on how we can reduce that? Note that most of it is inherent to the current design.
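To illustrate the pattern in the slimmed-down fields above, here is a minimal hypothetical sketch (not the actual code; the class name, constructor, and the _lastAccessedTicks field are assumptions) of how a coarse shared clock lets expiration checks avoid calling DateTime.UtcNow on every access:

using System;
using System.Threading;

internal sealed class SlimCacheEntry
{
    // A timer refreshes a shared "roughly now" tick value once per second, so reads
    // and expiration checks never have to call DateTime.UtcNow themselves.
    private static long s_currentDateIshTicks = DateTime.UtcNow.Ticks;
    private static readonly Timer ExpirationTimeUpdater =
        new Timer(_ => Volatile.Write(ref s_currentDateIshTicks, DateTime.UtcNow.Ticks), null, 1000, 1000);

    private readonly long _absoluteExpirationTicks; // 0 means no absolute expiration
    private readonly uint _slidingSeconds;          // 0 means no sliding expiration
    private long _lastAccessedTicks;                // assumption: needed to evaluate sliding expiration

    public SlimCacheEntry(object value, TimeSpan? absolute, TimeSpan? sliding)
    {
        Value = value;
        long now = Volatile.Read(ref s_currentDateIshTicks);
        _absoluteExpirationTicks = absolute is { } a ? now + a.Ticks : 0;
        _slidingSeconds = sliding is { } s ? (uint)s.TotalSeconds : 0;
        _lastAccessedTicks = now;
    }

    public object Value { get; }

    public bool IsExpired()
    {
        long now = Volatile.Read(ref s_currentDateIshTicks);

        if (_absoluteExpirationTicks != 0 && now >= _absoluteExpirationTicks)
            return true;

        return _slidingSeconds != 0 &&
               now - Volatile.Read(ref _lastAccessedTicks) >= _slidingSeconds * TimeSpan.TicksPerSecond;
    }

    // Called on every cache hit; only touches the coarse clock value.
    public void MarkAccessed() =>
        Volatile.Write(ref _lastAccessedTicks, Volatile.Read(ref s_currentDateIshTicks));
}

The trade-off is a clock that is only accurate to about a second, which is usually acceptable for cache expiration.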
@NickCraver thanks for providing very valuable feedback! I've already removed 4 fields from CacheEntry (#45410), but it looks like we still have work to do here. I'll try to come up with more ideas.
@NickCraver I've sent #45962 to minimize the overhead (280 -> 224 bytes). I'll try to wrap my head around whether it would be possible to replace the nullable date-related fields with a single field.
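For example, one possible shape (a sketch only, not what the linked PR does) keeps the public nullable property but backs it with one long, using long.MaxValue as the "never expires" sentinel:

using System;

internal sealed class CacheEntrySketch
{
    // Hypothetical layout: a single 8-byte field in place of a padded Nullable<DateTimeOffset>,
    // with long.MaxValue meaning "no absolute expiration has been set".
    private long _absoluteExpirationTicks = long.MaxValue;

    public DateTimeOffset? AbsoluteExpiration
    {
        get => _absoluteExpirationTicks == long.MaxValue
            ? null
            : new DateTimeOffset(_absoluteExpirationTicks, TimeSpan.Zero);
        set => _absoluteExpirationTicks = value?.UtcTicks ?? long.MaxValue;
    }
}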
Since this is proposing a breaking change (making the default be non-tracked), going with one centralized option seems the most sensible:

namespace Microsoft.Extensions.Caching.Memory
{
    partial class MemoryCacheOptions
    {
        public bool TrackLinkedCacheEntries { get; set; } = false;
    }
}
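Under that shape (tracking off by default), the few callers that rely on linked entries would opt back in explicitly, for example:

using Microsoft.Extensions.Caching.Memory;

// Assuming the flag ships with tracking disabled by default, apps that depend on
// linked cache entries would turn it back on when constructing the cache.
var cache = new MemoryCache(new MemoryCacheOptions
{
    TrackLinkedCacheEntries = true
});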