TTLCache is a simple key/value cache in Go with the following features:
- Expiration of items based on time, or on a custom function
- A loader function can be provided to retrieve missing keys. Additional Get calls on the same key block while a fetch is in progress (groupcache style)
- Individual per-item expiration times or a global expiration time, your choice
- Auto-extending expiration on Get, or DNS-style TTL; see SkipTTLExtensionOnHit(bool)
- Callbacks can be triggered on key expiration
- Resources are cleaned up by calling Close() at the end of the cache's lifecycle
- Thread-safe, with a comprehensive test suite

This code is in production at bol.com on critical systems.
Note (issue #25): for historic reasons the TTL is reset on each cache hit by default; you need to explicitly configure the cache if you want a TTL that does not get extended.
go get github.com/skailhq/ttlcache/v2
Each snippet can be copied as a full standalone demo program. The first shows basic usage; the second exercises more of the cache's options.
Basic:
package main

import (
	"fmt"
	"time"

	"github.com/skailhq/ttlcache/v2"
)

var notFound = ttlcache.ErrNotFound

func main() {
	var cache ttlcache.SimpleCache = ttlcache.NewCache()

	cache.SetTTL(time.Duration(10 * time.Second))
	cache.Set("MyKey", "MyValue")
	cache.Set("MyNumber", 1000)

	if val, err := cache.Get("MyKey"); err != notFound {
		fmt.Printf("Got it: %s\n", val)
	}

	cache.Remove("MyNumber")
	cache.Purge()
	cache.Close()
}
Advanced:
package main

import (
	"fmt"
	"time"

	"github.com/skailhq/ttlcache/v2"
)

var (
	notFound = ttlcache.ErrNotFound
	isClosed = ttlcache.ErrClosed
)

func main() {
	newItemCallback := func(key string, value interface{}) {
		fmt.Printf("New key(%s) added\n", key)
	}
	checkExpirationCallback := func(key string, value interface{}) bool {
		if key == "key1" {
			// if the key equals "key1", the value
			// will not be allowed to expire
			return false
		}
		// all other values are allowed to expire
		return true
	}
	expirationCallback := func(key string, reason ttlcache.EvictionReason, value interface{}) {
		fmt.Printf("This key(%s) has expired because of %s\n", key, reason)
	}
	loaderFunction := func(key string) (data interface{}, ttl time.Duration, err error) {
		ttl = time.Second * 300
		data, err = getFromNetwork(key)
		return data, ttl, err
	}

	cache := ttlcache.NewCache()
	cache.SetTTL(time.Duration(10 * time.Second))
	cache.SetExpirationReasonCallback(expirationCallback)
	cache.SetLoaderFunction(loaderFunction)
	cache.SetNewItemCallback(newItemCallback)
	cache.SetCheckExpirationCallback(checkExpirationCallback)
	cache.SetCacheSizeLimit(2)

	cache.Set("key", "value")
	cache.SetWithTTL("keyWithTTL", "value", 10*time.Second)

	if value, exists := cache.Get("key"); exists == nil {
		fmt.Printf("Got value: %v\n", value)
	}
	count := cache.Count()
	if result := cache.Remove("keyNNN"); result == notFound {
		fmt.Printf("Not found, %d items left\n", count)
	}

	cache.Set("key6", "value")
	cache.Set("key7", "value")
	metrics := cache.GetMetrics()
	fmt.Printf("Total inserted: %d\n", metrics.Inserted)

	cache.Close()
}

func getFromNetwork(key string) (string, error) {
	time.Sleep(time.Millisecond * 30)
	return "value", nil
}
- The complexity of the current cache is already quite high, so not every request can be implemented in a straightforward manner.
- Locking should be done only in the exported functions and in startExpirationProcessing of the Cache struct. Otherwise data races can occur or recursive locks are needed, both of which are unwanted.
- I prefer correct functionality over fast tests. It's OK for new tests to take seconds to prove something.
TTLCache was forked from wunderlist/ttlcache to add extra functions not available in the original scope. The main differences are:
- An item can store any kind of object; previously, only strings could be saved
- Optionally, you can add callbacks: to check whether a value should expire, to be notified when a value expires, and to be notified when new values are added to the cache
- The expiration can be either global or per item
- Items can exist without an expiration time (time.Zero)
- Expirations and callbacks are realtime. There is no polling interval to check expirations anymore; it is now done with a heap
- A cache item-count limit can be set