Simplify policy/hook APIs inside LRW handling
whitfin committed Jan 10, 2023
1 parent 93894e4 commit 809396f
Showing 6 changed files with 66 additions and 80 deletions.
4 changes: 4 additions & 0 deletions lib/cachex/hook.ex
@@ -91,6 +91,10 @@ defmodule Cachex.Hook do
def init(args),
do: {:ok, args}

+ @doc false
+ def child_spec(args),
+   do: super(args)

# allow overriding of init
defoverridable init: 1

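For context, the default `init/1` together with the new `child_spec/1` above means a module that does `use Cachex.Hook` gets its GenServer plumbing for free and can sit under a supervisor without extra boilerplate. Below is a minimal sketch of a custom hook relying on those defaults; the module name and the logging body are illustrative only, with the `handle_notify/3` shape taken from the hooks elsewhere in this commit.

defmodule MyApp.AuditHook do
  use Cachex.Hook

  # hypothetical hook: log every broadcast cache action and keep state unchanged
  def handle_notify(action, _result, state) do
    IO.inspect(action, label: "cache action")
    {:ok, state}
  end
end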
73 changes: 51 additions & 22 deletions lib/cachex/policy/lrw.ex
@@ -7,15 +7,36 @@ defmodule Cachex.Policy.LRW do
is determined by the touched time inside each cache record, which means that we
don't have to store any additional tables to keep track of access time.
- There are several policies implemented using this algorithm:
-
-   * `Cachex.Policy.LRW.Evented`
-   * `Cachex.Policy.LRW.Scheduled`
-
- Although the functions in this module are public, the way they function internally
- should be treated as private and subject to change at any point.
+ There are several options recognised by this policy which can be passed inside the
+ limit structure when configuring your cache at startup:
+
+   * `:batch_size`
+
+     The batch size to use when paginating the cache to evict records. This defaults
+     to 100, which is typically fine for most cases, but the option is exposed in case
+     there is a need to customize it.
+
+   * `:frequency`
+
+     When this policy operates in scheduled mode, this option controls the frequency
+     with which bounds will be checked. This is specified in milliseconds, and defaults
+     to once per second (1000). Feel free to tune this based on how strictly you wish
+     to enforce your cache limits.
+
+   * `:immediate`
+
+     Sets this policy to enforce bounds reactively. If this option is set to `true`,
+     bounds will be checked immediately when a write is made to the cache, rather than
+     on a timed schedule. This keeps the cache size much more accurate, but carries
+     higher overhead due to listening on cache writes. Setting this to `true` disables
+     the scheduled checks, so the `:frequency` option is ignored in this case.
+
+ While the overall behaviour of this policy should always result in the same outcome,
+ the way it operates internally may change. As such, the internals of this module
+ should not be relied upon and should not be considered part of the public API.
"""
use Cachex.Hook
use Cachex.Policy

# import macros
@@ -32,27 +53,35 @@ defmodule Cachex.Policy.LRW do
# Policy Behaviour #
####################

- @doc false
- # Backwards compatibility with < v3.5.x defaults
- defdelegate hooks(limit), to: __MODULE__.Scheduled
+ @doc """
+ Configures hooks required to back this policy.
+ """
+ def hooks(limit(options: options) = limit),
+   do: [
+     hook(
+       state: limit,
+       module:
+         case Keyword.get(options, :immediate) do
+           true -> __MODULE__.Evented
+           _not -> __MODULE__.Scheduled
+         end
+     )
+   ]

#############
# Algorithm #
#############

- @doc """
- Enforces cache bounds based on the provided limit.
-
- This function will enforce cache bounds using a least recently written (LRW)
- eviction policy. It will trigger a Janitor purge to clear expired records
- before attempting to trim older cache entries.
-
- The `:batch_size` option can be set in the limit options to dictate how many
- entries should be removed at once by this policy. This will default to a batch
- size of 100 entries at a time.
- """
- @spec enforce(Spec.cache(), Spec.limit()) :: :ok
- def enforce(cache() = cache, limit() = limit) do
+ @doc false
+ # Enforces cache bounds based on the provided limit.
+ #
+ # This function will enforce cache bounds using a least recently written (LRW)
+ # eviction policy. It will trigger a Janitor purge to clear expired records
+ # before attempting to trim older cache entries.
+ #
+ # Please see module documentation for options available inside the limits.
+ @spec apply_limit(Spec.cache(), Spec.limit()) :: :ok
+ def apply_limit(cache() = cache, limit() = limit) do
limit(size: max_size, reclaim: reclaim, options: options) = limit

batch_size =
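Taken together with the options documented in the new moduledoc above, configuring the unified policy might look like the sketch below. The cache name and numbers are illustrative, and the `Cachex.start_link/2` call with a `:limit` option reflects typical Cachex v3 usage rather than anything introduced in this commit.

import Cachex.Spec

limit =
  limit(
    size: 500,
    policy: Cachex.Policy.LRW,
    reclaim: 0.5,
    # check bounds on every write; `:frequency` is ignored once `:immediate` is set
    options: [batch_size: 100, immediate: true]
  )

Cachex.start_link(:my_cache, limit: limit)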
25 changes: 1 addition & 24 deletions lib/cachex/policy/lrw/evented.ex
@@ -7,17 +7,8 @@ defmodule Cachex.Policy.LRW.Evented do
way. This policy enforces cache bounds and limits far more accurately than other
scheduled implementations, but comes at a higher memory cost (due to the message
passing between hooks).
- The `:batch_size` option can be set in the limit options to dictate how many
- entries should be removed at once by this policy. This will default to a batch
- size of 100 entries at a time.
-
- This eviction is relatively fast, and should keep the cache below bounds at most
- times. Note that many writes in a very short amount of time can flood the cache,
- but it should recover given a few seconds.
"""
use Cachex.Hook
- use Cachex.Policy

# import macros
import Cachex.Spec
@@ -28,26 +19,12 @@ defmodule Cachex.Policy.LRW.Evented do
# actions which didn't trigger a write
@ignored [:error, :ignored]

- ####################
- # Policy Behaviour #
- ####################
-
- @doc """
- Retrieves a list of hooks required to run against this policy.
- """
- @spec hooks(Spec.limit()) :: [Spec.hook()]
- def hooks(limit),
-   do: [hook(module: __MODULE__, state: limit)]

######################
# Hook Configuration #
######################

@doc """
Returns the actions this policy should listen on.
- This returns as a `MapSet` to optimize the lookups
- on actions to O(n) in the broadcasting algorithm.
"""
@spec actions :: [atom]
def actions,
@@ -87,7 +64,7 @@ defmodule Cachex.Policy.LRW.Evented do
# able to cause a net gain in cache size (so removals are also ignored).
def handle_notify(_message, {status, _value}, {cache, limit} = opts)
when status not in @ignored,
- do: LRW.enforce(cache, limit) && {:ok, opts}
+ do: LRW.apply_limit(cache, limit) && {:ok, opts}

def handle_notify(_message, _result, opts),
do: {:ok, opts}
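As a usage-level sketch of the evented mode described above (cache name and sizes are illustrative, and the `:limit` start option reflects Cachex v3 usage), flooding a bounded cache with writes should see its size trimmed back under the limit shortly afterwards:

import Cachex.Spec

{:ok, _pid} =
  Cachex.start_link(:bounded_cache,
    limit: limit(size: 100, policy: Cachex.Policy.LRW, reclaim: 0.5, options: [immediate: true])
  )

# write past the bound; each write notifies the hook, which calls LRW.apply_limit/2
for i <- 1..150, do: Cachex.put(:bounded_cache, i, i)

# the oldest entries should have been reclaimed, bringing the size back under 100
Cachex.size(:bounded_cache)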
32 changes: 4 additions & 28 deletions lib/cachex/policy/lrw/scheduled.ex
@@ -1,45 +1,21 @@
defmodule Cachex.Policy.LRW.Scheduled do
@moduledoc """
- Schedule least recently written eviction policy for Cachex.
+ Scheduled least recently written eviction policy for Cachex.
- This module implements an evented LRW eviction policy for Cachex, using a basic
- timer to trigger bound enforcement in a scheduled way. This has the same bound
+ This module implements a scheduled LRW eviction policy for Cachex, using a basic
+ timer to trigger bound enforcement in a repeatable way. This has the same bound
accuracy as `Cachex.Policy.LRW.Evented`, but has potential for some delay. The
main advantage of this implementation is a far lower memory cost due to not
using hook messages.
- The `:batch_size` option can be set in the limit options to dictate how many
- entries should be removed at once by this policy. This will default to a batch
- size of 100 entries at a time.
-
- The `:frequency` option can also be set in the limit options to specify how
- frequently this policy will fire. This defaults to every few seconds (but may
- change at any point).
-
- This eviction is relatively fast, and should keep the cache below bounds at most
- times. Note that many writes in a very short amount of time can flood the cache,
- but it should recover given a few seconds.
"""
use Cachex.Hook
- use Cachex.Policy

# import macros
import Cachex.Spec

# add internal aliases
alias Cachex.Policy.LRW

- ####################
- # Policy Behaviour #
- ####################
-
- @doc """
- Retrieves a list of hooks required to run against this policy.
- """
- @spec hooks(Spec.limit()) :: [Spec.hook()]
- def hooks(limit),
-   do: [hook(module: __MODULE__, state: limit)]

######################
# Hook Configuration #
######################
@@ -73,7 +49,7 @@ defmodule Cachex.Policy.LRW.Scheduled do
# This will execute a bounds check on a cache and schedule a new check.
def handle_info(:policy_check, {cache, limit} = opts) do
unless is_nil(cache) do
- LRW.enforce(cache, limit)
+ LRW.apply_limit(cache, limit)
end

schedule(limit) && {:noreply, opts}
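With the hooks removed from this module, the choice between the scheduled and evented variants now happens in `Cachex.Policy.LRW.hooks/1` shown earlier. As a sketch (values illustrative), a limit configured without `:immediate` should resolve to the scheduled hook:

import Cachex.Spec

limit =
  limit(
    size: 500,
    policy: Cachex.Policy.LRW,
    reclaim: 0.5,
    # run the bounds check every 5 seconds instead of on every write
    options: [frequency: 5_000]
  )

# per the hooks/1 implementation in this commit, this should return a single
# hook record backed by Cachex.Policy.LRW.Scheduled with the limit as its state
Cachex.Policy.LRW.hooks(limit)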
8 changes: 4 additions & 4 deletions test/cachex/policy/lrw/evented_test.exs
@@ -36,9 +36,9 @@ defmodule Cachex.Policy.LRW.EventedTest do
limit =
limit(
size: 100,
- policy: Cachex.Policy.LRW.Evented,
+ policy: Cachex.Policy.LRW,
reclaim: 0.75,
- options: [batch_size: 25]
+ options: [batch_size: 25, immediate: true]
)

# create a cache with a max size
@@ -112,9 +112,9 @@ defmodule Cachex.Policy.LRW.EventedTest do
limit =
limit(
size: 100,
- policy: Cachex.Policy.LRW.Evented,
+ policy: Cachex.Policy.LRW,
reclaim: 0.3,
- options: [batch_size: -1]
+ options: [batch_size: -1, immediate: true]
)

# create a cache with a max size
4 changes: 2 additions & 2 deletions test/cachex/policy/lrw/scheduled_test.exs
@@ -36,7 +36,7 @@ defmodule Cachex.Policy.LRW.ScheduledTest do
limit =
limit(
size: 100,
- policy: Cachex.Policy.LRW.Scheduled,
+ policy: Cachex.Policy.LRW,
reclaim: 0.75,
options: [batch_size: 25, frequency: 100]
)
@@ -112,7 +112,7 @@ defmodule Cachex.Policy.LRW.ScheduledTest do
limit =
limit(
size: 100,
- policy: Cachex.Policy.LRW.Scheduled,
+ policy: Cachex.Policy.LRW,
reclaim: 0.3,
options: [batch_size: -1, frequency: 100]
)
