
SettingManagementStore not refreshing cache within the same UnitOfWork #4040

Closed
javiercampos opened this issue May 21, 2020 · 4 comments · Fixed by #4523
@javiercampos (Contributor) commented May 21, 2020

I wrote the following test for SettingManagementStore (in its test file), and it fails:

/* _unitOfWorkManager (IUnitOfWorkManager) and _settingManagementStore are resolved in the test class constructor */
[Fact]
public async Task SetThenGetOnUnitOfWorkAsync()
{
    using (_unitOfWorkManager.Begin())
    {
        var value = await _settingManagementStore.GetOrNullAsync("MySetting1",
            GlobalSettingValueProvider.ProviderName, null);
        value.ShouldBe("42");

        await _settingManagementStore.SetAsync("MySetting1", "43", GlobalSettingValueProvider.ProviderName,
            null);

        var valueAfterSet =
            await _settingManagementStore.GetOrNullAsync("MySetting1", GlobalSettingValueProvider.ProviderName,
                null);
        valueAfterSet.ShouldBe("43");
    }
}

Test result:

Shouldly.ShouldAssertException : valueAfterSet
    should be
"43"
    but was
"42"

That's because the cache is invalidated only when the UnitOfWork has finished (the invalidation happens in an ILocalEventHandler, SettingCacheItemInvalidator).
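To illustrate the timing problem in a language-neutral way, here is a minimal Python sketch of the behavior (all names are invented for illustration; ABP's actual implementation is C# and event-driven):

```python
# Minimal simulation of the reported behavior: cache invalidation is deferred
# to UnitOfWork completion, so a read-after-write inside the same UoW is stale.
# All names here are invented for illustration; they are not ABP APIs.

class UnitOfWork:
    def __init__(self):
        self.completed_handlers = []  # stands in for ILocalEventHandler subscribers

    def complete(self):
        for handler in self.completed_handlers:
            handler()

class SettingStore:
    def __init__(self):
        self.db = {"MySetting1": "42"}   # pretend database
        self.cache = {}                  # pretend distributed cache

    def get(self, name):
        if name not in self.cache:
            self.cache[name] = self.db[name]   # read-through
        return self.cache[name]

    def set(self, name, value, uow):
        self.db[name] = value
        # invalidation only runs when the UoW completes (the reported problem):
        uow.completed_handlers.append(lambda: self.cache.pop(name, None))

uow = UnitOfWork()
store = SettingStore()
store.get("MySetting1")              # caches "42"
store.set("MySetting1", "43", uow)
stale = store.get("MySetting1")      # still "42": cache not invalidated yet
uow.complete()
fresh = store.get("MySetting1")      # now "43"
print(stale, fresh)                  # -> 42 43
```

The `stale` read is exactly the `valueAfterSet.ShouldBe("43")` assertion that fails in the test above.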

The scenario above is pretty common: if you have an API endpoint that updates a setting, it's very common to want to read the new value from another service (the one consuming the setting), potentially in the same request, called from the same app service (i.e. within the same UnitOfWork).

The solution (which we apply in our projects) is to invalidate the cache on the update path of SettingManagementStore (updating the cache instead is also possible, although that could become inconsistent if the UnitOfWork transaction fails to commit). This wouldn't cover changes made directly in the database, but that should be discouraged in the documentation once it's written. So, basically, this:

public virtual async Task SetAsync(string name, string value, string providerName, string providerKey)
{
    var setting = await SettingRepository.FindAsync(name, providerName, providerKey);
    if (setting == null)
    {
        setting = new Setting(GuidGenerator.Create(), name, value, providerName, providerKey);
        await SettingRepository.InsertAsync(setting);
    }
    else
    {
        setting.Value = value;
        await SettingRepository.UpdateAsync(setting);
        
        /* Invalidate cache */
        var cacheKey = CalculateCacheKey(name, providerName, providerKey);
        await Cache.RemoveAsync(cacheKey);
    }
}

This makes the test above pass flawlessly.

If you find this solution acceptable, I can provide a PR for it (including the test, if you want it).

@javiercampos javiercampos changed the title ISettingManager not refreshing cache within the same UnitOfWork SettingManager not refreshing cache within the same UnitOfWork May 21, 2020
@javiercampos javiercampos changed the title SettingManager not refreshing cache within the same UnitOfWork SettingManagementStore not refreshing cache within the same UnitOfWork May 21, 2020
@hikalkan (Member) commented:

What if we are using a distributed Redis cache: you invalidate the cache, but then your UOW fails on transaction commit? In that case you've updated a shared cache without actually changing the data in the database.

Actually, the reverse is also a problem (the UOW saves, but the cache can't be updated).

Your solution seems fine to me. Can you send a PR?

@hikalkan hikalkan added this to the 2.9 milestone May 21, 2020
@hikalkan (Member) commented:

A solution would be overriding a cached value via a temporary dictionary stored in the current request (HttpContext.Items) or somewhere similar.

Let me think about that.

@javiercampos (Contributor, Author) commented:

You could maybe add a new local event handler that gets called when saving changes or committing the UOW transaction throws an exception, keep track of which "keys" changed during that UOW, and invalidate them all when the UOW fails to complete.
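That tracking idea could be sketched like this (a minimal Python simulation; the class and method names are invented for illustration, not ABP APIs):

```python
# Sketch of "track changed keys during the UoW, invalidate them on failure".
# All names are invented for illustration; they are not ABP APIs.

class UnitOfWork:
    def __init__(self, cache):
        self.cache = cache            # shared (e.g. distributed) cache
        self.changed_keys = set()

    def track(self, key):
        # called from the setting store whenever it writes a cached key
        self.changed_keys.add(key)

    def complete(self, save):
        try:
            save()                    # commit the transaction
        except Exception:
            # saving failed: evict every key touched in this UoW so no
            # uncommitted value survives in the cache
            for key in self.changed_keys:
                self.cache.pop(key, None)
            raise

cache = {"MySetting1": "43"}          # updated optimistically during the UoW
uow = UnitOfWork(cache)
uow.track("MySetting1")

def failing_save():
    raise RuntimeError("commit failed")

try:
    uow.complete(failing_save)
except RuntimeError:
    pass

print(cache)  # -> {} : the uncommitted value was evicted
```

Note this only repairs the local view; with a shared distributed cache, other services may still have read the uncommitted value before the eviction, which is the problem discussed below.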

I wouldn't tie that to the HTTP request in any way... that would prevent you from using it in BackgroundWorkers, BackgroundJobs, or any other kind of hosted service outside HTTP requests.

You know, there are two hard things in programming: naming things and cache invalidation ;-)

@javiercampos (Contributor, Author) commented May 22, 2020

I've submitted the pull request because I think it's an improvement over the previous behavior; however, there are still some problems with this approach. Mainly: if we do get (populates cache) -> set (invalidates cache) -> get (re-populates cache with the updated setting) -> the unit of work fails to save, the cache is left in an inconsistent state.

I believe the "right" approach would be tracking the cache changes during a UnitOfWork and rolling them back or invalidating them if the UnitOfWork fails to commit/save changes. This, however, would also pose problems in a "microservices" scenario, since cached-but-uncommitted changes would spread to other services (with a distributed cache like Redis) while they are still uncommitted.

The current scenario (with or without my pull request, which only solves a simple case) has many possible consistency problems, the biggest of which is:

  • If we don't have a cached setting, and we do a set before a get (during a unit of work transaction), the get caches the uncommitted value. This has two problems:
    • On a distributed system, the distributed cache (e.g. Redis) stores an uncommitted value: other services (or instances of the same service) see a value for that setting that is not yet committed to the database. This is immensely problematic.
    • Without a distributed system, if the unit of work fails, the cache still holds the uncommitted value (because it's never invalidated on failure). Invalidating it on the local system would be easy (just track the keys), but then we get problem number 1.

So in my opinion, the cache during a unit of work should not be distributed and should be kept local to the process (or to the actual unit of work); the distributed cache could then be updated once the transaction commits successfully at the end of the UOW.

This is again problematic because ABP is mostly designed to run a UOW for every request, so this basically defeats an "initial distributed cache" (since the first get during a UOW would always query the database, not the distributed cache).

I see a way of solving this:

  • Have a local cache of settings on the UOW (or mapped to the UOW transaction) that gets populated from the distributed cache (instead of starting empty) on the first get or set of a setting during that UOW
  • Copy all those keys from the local cache to the distributed cache upon successful UOW commit
  • Destroy that local cache (invalidate all) upon UOW commit failure/rollback
  • Ignore this local cache when we are not in a UOW transaction and keep working with just the distributed cache (anything outside a UOW works just fine with the current approach)
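The four steps above could be sketched like this (a Python simulation of the idea; the class and method names are invented for illustration, not ABP APIs):

```python
# Sketch of the UoW-local cache proposed above: seeded from the distributed
# cache on first access, flushed to it on commit, discarded on rollback.
# All names are invented for illustration; they are not ABP APIs.

class UowScopedSettingCache:
    def __init__(self, distributed_cache):
        self.distributed = distributed_cache
        self.local = {}                  # visible only to this UoW

    def get(self, key, load_from_db):
        if key in self.local:
            return self.local[key]
        # first access: seed from the distributed cache, falling back to the db
        value = self.distributed.get(key)
        if value is None:
            value = load_from_db(key)
        self.local[key] = value
        return value

    def set(self, key, value):
        self.local[key] = value          # uncommitted values never leave this UoW

    def commit(self):
        self.distributed.update(self.local)  # publish only after a successful commit
        self.local = {}

    def rollback(self):
        self.local = {}                  # just drop everything

distributed = {"MySetting1": "42"}
uow_cache = UowScopedSettingCache(distributed)
uow_cache.set("MySetting1", "43")
assert distributed["MySetting1"] == "42"   # other services still see "42"
uow_cache.commit()
print(distributed["MySetting1"])           # -> 43, only after commit
```

Seeding `local` from the distributed cache on first access is what keeps the first get of a UOW from always hitting the database, which addresses the concern about defeating the initial distributed cache.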

But maybe this needs more brainstorming.

Also, I'd offer optional setting stores with no cache at all: depending on usage, hitting the database every time a setting is requested may be the best option (rather than going through a distributed cache, considering that queries for settings are extremely simple and return very little data). Of course, since this is scenario-dependent, it should be an option: keep the current SettingManagementStore as the default, but add an extension method on the service collection to configure a different one (something like UncachedManagementStore).

If I can be of any help, just let me know.

@hikalkan hikalkan modified the milestones: 2.9, 3.0 May 29, 2020
@maliming maliming assigned maliming and unassigned hikalkan Jun 17, 2020
maliming added a commit that referenced this issue Jun 17, 2020
maliming added a commit that referenced this issue Jun 29, 2020
@hikalkan hikalkan modified the milestones: 3.0, 3.1 Jun 30, 2020