[FEATURE] Utilize Microsoft.IO.RecyclableMemoryStream instead of creating new MemoryStream instances when serializing #241
Comments
If you have any free time, here is a link explaining the allocation improvements: https://medium.com/@matias.paulo84/recyclablememorystream-vs-memorystream-c4aa341324a9 |
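[Editor's note: for context, a minimal sketch of the core idea, assuming System.Text.Json as the serializer and a throwaway anonymous payload:]

```csharp
using System.IO;
using System.Text.Json;
using Microsoft.IO;

// One manager per process: it owns the pools of reusable buffers.
var manager = new RecyclableMemoryStreamManager();

// Classic approach: every operation allocates a fresh, growable backing buffer.
using (var stream = new MemoryStream())
{
	JsonSerializer.Serialize(stream, new { Name = "example" });
}

// Pooled approach: blocks are rented from the manager and returned on Dispose.
using (var stream = manager.GetStream())
{
	JsonSerializer.Serialize(stream, new { Name = "example" });
}
```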
Hi @viniciusvarzea, thanks for the suggestion! I'm looking into it and doing some benchmarking, and it does in fact seem quite good. Will update soon. |
Hello @jodydonetti, thank you for considering it. I have extensive experience using this library; if you need some help, please count on me. |
Hi @viniciusvarzea awesome, thanks! |
Hi @viniciusvarzea, the implementation has been quite easy to make:

```csharp
using System.Text.Json;
using Microsoft.IO;

public class FusionCacheSystemTextJsonSerializer
	: IFusionCacheSerializer
{
	// A single shared manager: it owns the pools of reusable buffers.
	private static readonly RecyclableMemoryStreamManager _manager = new RecyclableMemoryStreamManager();

	private readonly JsonSerializerOptions? _options;

	public FusionCacheSystemTextJsonSerializer(JsonSerializerOptions? options = null)
	{
		_options = options;
	}

	public byte[] Serialize<T>(T? obj)
	{
		return JsonSerializer.SerializeToUtf8Bytes<T?>(obj, _options);
	}

	public T? Deserialize<T>(byte[] data)
	{
		return JsonSerializer.Deserialize<T>(data, _options);
	}

	public async ValueTask<byte[]> SerializeAsync<T>(T? obj)
	{
		// Rent a pooled stream instead of allocating a new MemoryStream
		using var stream = _manager.GetStream();
		await JsonSerializer.SerializeAsync<T?>(stream, obj, _options);
		return stream.ToArray();
	}

	public async ValueTask<T?> DeserializeAsync<T>(byte[] data)
	{
		// Rent a pooled stream wrapping the incoming buffer
		using var stream = _manager.GetStream(data);
		return await JsonSerializer.DeserializeAsync<T>(stream, _options);
	}
}
```

I've created a benchmark like this:

```csharp
[MemoryDiagnoser]
[Config(typeof(Config))]
public class SerializersBenchmark
{
	private static Random MyRandom = new Random(2110);

	private class SampleModel
	{
		public string Name { get; set; }
		public int Age { get; set; }
		public DateTime Date { get; set; }
		public List<int> FavoriteNumbers { get; set; } = [];

		public static SampleModel GenerateRandom()
		{
			var model = new SampleModel
			{
				Name = Guid.NewGuid().ToString("N"),
				Age = MyRandom.Next(1, 100),
				Date = DateTime.UtcNow,
			};
			for (int i = 0; i < 10; i++)
			{
				model.FavoriteNumbers.Add(MyRandom.Next(1, 1000));
			}
			return model;
		}
	}

	private class Config : ManualConfig
	{
		public Config()
		{
			AddColumn(StatisticColumn.P95);
		}
	}

	private FusionCacheSystemTextJsonSerializerOld _Old = new FusionCacheSystemTextJsonSerializerOld();
	private FusionCacheSystemTextJsonSerializer _New = new FusionCacheSystemTextJsonSerializer();
	private List<SampleModel> _Models = new List<SampleModel>();
	private byte[] _Blob;

	[Params(1, 100, 10_000)]
	public int Size;

	[GlobalSetup]
	public void Setup()
	{
		for (int i = 0; i < Size; i++)
		{
			_Models.Add(SampleModel.GenerateRandom());
		}
		_Blob = _New.Serialize(_Models);
	}

	[Benchmark(Baseline = true)]
	public async Task SerializeAsync_OLD()
	{
		await _Old.SerializeAsync(_Models).ConfigureAwait(false);
	}

	[Benchmark]
	public async Task SerializeAsync_NEW()
	{
		await _New.SerializeAsync(_Models).ConfigureAwait(false);
	}

	[Benchmark]
	public async Task DeserializeAsync_OLD()
	{
		await _Old.DeserializeAsync<List<SampleModel>>(_Blob).ConfigureAwait(false);
	}

	[Benchmark]
	public async Task DeserializeAsync_NEW()
	{
		await _New.DeserializeAsync<List<SampleModel>>(_Blob).ConfigureAwait(false);
	}
}
```

It shows some pretty good old vs new numbers:
Overall it looks good, and I'm inclined to use it for the others too and merge it for the new version. One question: are you aware of any edge cases or strange scenarios I should know of, where it may have issues? Thanks! |
Update: applied the same change to the Protobuf version and:
Nice 😬 |
Update 2: ServiceStack adapter also updated, here are the results:
Looking good! |
Hello @jodydonetti, pretty good man, thank you for considering it. :) One observation from the library documentation and from my experience: "Important!: If you do not set MaximumFreeLargePoolBytes and MaximumFreeSmallPoolBytes there is the possibility for unbounded memory growth!" In my experience the default options work very well, except for these 2 parameters: setting them helps to avoid some memory leaks in heavy-utilization scenarios. |
Thanks for the tip, any suggestion for reasonable values or a rationale to follow? |
@jodydonetti Since Redis does not work well with large values (and I believe most cache entries are not that big), and given the library's default BlockSize of 128KB and LargeBufferMultiple of 1MB, 16MB is a reasonable value for MaximumFreeSmallPoolBytes and 64MB is a reasonable value for MaximumFreeLargePoolBytes. Keep in mind that the library does not pre-allocate any buffers: buffers are allocated as needed, so these limits act only as upper bounds to avoid memory leaks. In a future release, we could also improve the DistributedCache performance by compressing the payloads before sending them to Redis. |
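[Editor's note: for illustration, a sketch of those limits applied, assuming the 2.x API where they are settable properties; in 3.x they move to an options object passed to the constructor:]

```csharp
using Microsoft.IO;

// Upper bounds on the *free* (pooled) memory kept between uses; nothing is
// pre-allocated, so these only cap how much the pools can retain.
var manager = new RecyclableMemoryStreamManager
{
	MaximumFreeSmallPoolBytes = 16 * 1024 * 1024, // 16MB of pooled 128KB blocks
	MaximumFreeLargePoolBytes = 64 * 1024 * 1024  // 64MB of pooled large buffers
};
```

[And the compression idea mentioned at the end could look roughly like this; a sketch only, using GZipStream from the BCL, where SerializeCompressedAsync is a hypothetical helper, not part of the library:]

```csharp
using System.IO.Compression;
using System.Text.Json;
using Microsoft.IO;

async Task<byte[]> SerializeCompressedAsync<T>(T obj, RecyclableMemoryStreamManager manager)
{
	// Serialize into one pooled stream...
	using var raw = manager.GetStream();
	await JsonSerializer.SerializeAsync(raw, obj);
	raw.Position = 0;

	// ...then gzip it into a second pooled stream.
	using var compressed = manager.GetStream();
	using (var gzip = new GZipStream(compressed, CompressionLevel.Fastest, leaveOpen: true))
	{
		await raw.CopyToAsync(gzip);
	}
	// IDistributedCache only accepts byte[], so a final copy is still needed.
	return compressed.ToArray();
}
```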
Btw I'm reading here and there are other considerations to look out for, for example "Most applications should not call ToArray", which concerns me since I am currently calling ToArray(). Furthermore, I'm getting a better understanding of the whole RecyclableMemoryStream approach and its constraints.
Did I get it right? Because given these constraints it feels like it can be a risky move to use it in this way. Did I get it wrong? Suggestions? |
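[Editor's note: to make the ToArray() concern concrete, a hedged sketch of the tradeoff, with field names borrowed from the serializer code above:]

```csharp
public async ValueTask<byte[]> SerializeAsync<T>(T? obj)
{
	using var stream = _manager.GetStream();
	await JsonSerializer.SerializeAsync<T?>(stream, obj, _options);

	// ToArray() always copies into a fresh byte[], so pooling only saves the
	// intermediate growth/resize allocations, not the final array. GetBuffer()
	// would avoid the copy, but it exposes a pooled buffer that may be longer
	// than stream.Length and must not be used after the stream is disposed,
	// which makes it unsafe to hand to IDistributedCache.
	return stream.ToArray();
}
```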
Hi @jodydonetti. Sorry for the delay in responding, I was a little sick. About "Most applications shouldn't call ToArray": using the stream is the way to go as long as you can keep using it up until the point of sending the data to Redis. Due to the use of the IDistributedCache interface (which only has byte[] overloads) I think we cannot use streams in all the sending paths, so at some point ToArray() must be called; but as the benchmarks show, there are still some benefits (on the allocation side) even when calling ToArray(). About "when using it without correctly configuring some options, it will probably cause a memory leak": since Redis itself doesn't handle larger values very well, we can provide default options that fit most scenarios and expose the option to tune it or disable it entirely. |
Hi @viniciusvarzea, I tried to look back at this. The perf boost in my opinion is not so insane as to justify the added risk of using it badly configured (truth be told, no perf boost would justify an added risk). On the other hand, I'd like to give users the option of using it, because there's value in it.
Therefore I'm adding support for it, but turning it off by default (at least for now) and making it opt-in, with the ability of turning it on by passing an instance of RecyclableMemoryStreamManager in the ctor, so the options will be explicit. In this way there will be no surprises for users and, if enabled, it means the user (hopefully) made an informed decision.
This is the current shape:

```csharp
public FusionCacheSystemTextJsonSerializer(JsonSerializerOptions? options = null)
{
	_options = options;
}

public FusionCacheSystemTextJsonSerializer(JsonSerializerOptions? options, RecyclableMemoryStreamManager? streamManager)
	: this(options)
{
	_streamManager = streamManager;
}
```

Thoughts?
PS: I'm thinking of creating an options class for each serializer, to avoid having to create multiple overloads in the future for every new feature I may add, but I'm also thinking about potential DI usage in relation to ctor selection.
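[Editor's note: for completeness, a sketch of what the opt-in would look like from the user's side; the pool limits are the values suggested earlier in the thread, not library defaults:]

```csharp
var streamManager = new RecyclableMemoryStreamManager
{
	MaximumFreeSmallPoolBytes = 16 * 1024 * 1024,
	MaximumFreeLargePoolBytes = 64 * 1024 * 1024
};

// Passing a manager explicitly opts in to pooled streams; omitting it
// (or passing null) keeps the default new-MemoryStream behavior.
var serializer = new FusionCacheSystemTextJsonSerializer(options: null, streamManager: streamManager);
```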
Hello friend, I will take a look at it. |
Hello @jodydonetti, sorry for the delay in responding. I think giving the user this option is a perfect solution: this way the user can tune the RecyclableMemoryStreamManager instance to fit the workload. Thank you for considering it. |
Hi @viniciusvarzea, the feature has been developed and tested, and is ready to be released, probably tomorrow. |
Hi, v1.4.0 has been released 🥳 |
Problem
The implemented serializers create a new MemoryStream instance every time they have to serialize or deserialize a cache entry (in the async overloads), which can lead to high allocation rates (and time spent in GC).
Solution
How about using the pooled streams from the Microsoft.IO.RecyclableMemoryStream library in the serializers?
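[Editor's note: a minimal sketch of the idea, assuming a single shared manager; names are illustrative:]

```csharp
using System.IO;
using Microsoft.IO;

public static class StreamPool
{
	// Shared manager that owns the reusable buffer pools.
	private static readonly RecyclableMemoryStreamManager Manager = new RecyclableMemoryStreamManager();

	// Rent a pooled stream instead of doing `new MemoryStream()`;
	// disposing it returns its blocks to the pool.
	public static MemoryStream Rent() => Manager.GetStream();
}
```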