
Atomically compute @CacheResult methods #45

Closed · ben-manes opened this issue Jul 25, 2015 · 2 comments

@ben-manes
Currently the implementation is not atomic: multiple calls to the annotated method may be performed while computing the cache value, because the interceptor uses a racy sequence of get, compute, and put operations. This may well be intentional, as the JavaDoc of @CacheResult alludes to this implementation:

When a method annotated with {@link CacheResult} is invoked a {@link GeneratedCacheKey} will be generated and {@link Cache#get(Object)} is called before the annotated method actually executes. If a value is found in the cache it is returned and the annotated method is never actually executed. If no value is found the annotated method is invoked and the returned value is stored in the cache with the generated key.
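The race in the get → compute → put sequence can be sketched as follows. This is a hypothetical illustration, not the RI's actual interceptor code: a ConcurrentHashMap stands in for the JCache Cache so the example runs without a caching provider, and a latch deterministically holds both threads past the cache miss to expose the window.

```java
import java.util.Map;
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch of the racy check-then-act flow described above.
public class RacyCacheResultDemo {
    static final Map<String, String> cache = new ConcurrentHashMap<>();
    static final AtomicInteger methodCalls = new AtomicInteger();
    // Holds both threads past the get() miss, forcing the race window open.
    static final CountDownLatch bothMissed = new CountDownLatch(2);

    // Stands in for the @CacheResult-annotated method body.
    static String annotatedMethod(String key) {
        methodCalls.incrementAndGet();
        return "value-for-" + key;
    }

    // The interceptor's non-atomic get -> compute -> put sequence.
    static String interceptor(String key) throws InterruptedException {
        String value = cache.get(key);      // step 1: get (both threads miss)
        if (value != null) {
            return value;
        }
        bothMissed.countDown();             // both threads reach this point
        bothMissed.await();
        value = annotatedMethod(key);       // step 2: compute (runs twice!)
        cache.put(key, value);              // step 3: put
        return value;
    }

    public static int run() throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        Future<String> a = pool.submit(() -> interceptor("k"));
        Future<String> b = pool.submit(() -> interceptor("k"));
        a.get();
        b.get();
        pool.shutdown();
        return methodCalls.get();           // 2: the annotated method ran twice
    }

    public static void main(String[] args) throws Exception {
        System.out.println("annotated method invocations: " + run());
    }
}
```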

The annotation never guarantees that the operation is performed atomically within the cache. This may surprise users, however, who assume it is atomic out of familiarity with self-populating caches.

The proposal is to use an EntryProcessor with Cache.invoke() to implement the three steps as a single atomic operation.
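A minimal sketch of the proposed atomic flow, again using a ConcurrentHashMap as a stand-in so the example runs without a caching provider: computeIfAbsent plays the role of Cache.invoke() with an EntryProcessor, performing the lookup, the call to the annotated method, and the store as one atomic per-key operation.

```java
import java.util.Map;
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch of the proposed atomic interception.
public class AtomicCacheResultDemo {
    static final Map<String, String> cache = new ConcurrentHashMap<>();
    static final AtomicInteger methodCalls = new AtomicInteger();

    // Stands in for the @CacheResult-annotated method body.
    static String annotatedMethod(String key) {
        methodCalls.incrementAndGet();
        return "value-for-" + key;
    }

    // Analogue of an EntryProcessor passed to Cache.invoke(): if the entry
    // is absent, compute and store it atomically; a concurrent caller for
    // the same key blocks until the single computation completes.
    static String interceptor(String key) {
        return cache.computeIfAbsent(key, AtomicCacheResultDemo::annotatedMethod);
    }

    public static int run() throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        Future<String> a = pool.submit(() -> interceptor("k"));
        Future<String> b = pool.submit(() -> interceptor("k"));
        a.get();
        b.get();
        pool.shutdown();
        return methodCalls.get();  // 1: the annotated method ran exactly once
    }

    public static void main(String[] args) throws Exception {
        System.out.println("annotated method invocations: " + run());
    }
}
```

With the real JCache API the same shape would be an EntryProcessor whose process(MutableEntry, ...) checks entry.exists(), invokes the method on a miss, and calls entry.setValue(), passed to Cache.invoke(key, processor).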

@cruftex (Member) commented Jul 25, 2015

.... who make an incorrect assumption due to familiarity with self-populating caches.

The standard does not specify that self-populating caches do atomic calls to the CacheLoader.

cruftex self-assigned this May 15, 2016
@gregrluck (Member) commented

There is no intention to make this atomic.

Ehcache has a self-populating cache based on a blocking cache, so that multiple threads hitting an annotated method for the same key would block while only one executed the method body and put the value in the cache. All the other threads would then see a cache entry.

The spec does not require this behaviour but an implementation can be configured that way.

We do not want the spec to always behave this way.

So I don't think we should change the reference implementation (RI) here.
