
lru

Stores key/value pairs in an LRU in-memory cache. This cache is therefore reset every time the service restarts.

# Common config fields, showing default values
label: ""
lru:
  cap: 1000
  init_values: {}
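
For reference, the following sketch gathers every field documented on this page into one block, using the default values stated below; the placement of the advanced fields inside the lru block is assumed rather than taken from a generated config.

# All config fields documented below, showing their stated defaults (sketch)
label: ""
lru:
  cap: 1000
  init_values: {}
  algorithm: standard
  two_queues_recent_ratio: 0.25
  two_queues_ghost_ratio: 0.5
  optimistic: false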

This cache is built on the lru package, which implements a fixed-size, thread-safe LRU cache.

The field init_values can be used to pre-populate the in-memory cache with any number of key/value pairs:

cache_resources:
  - label: foocache
    lru:
      cap: 1024
      init_values:
        foo: bar

These values can be overridden during execution.
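
As an illustration of how such a pre-populated resource might be read, here is a minimal pipeline sketch referencing the foocache resource from the example above. It assumes the separately documented cache processor with its resource, operator, and key fields, which are not described on this page.

pipeline:
  processors:
    # Look up the value stored under the "foo" key in the foocache resource.
    - cache:
        resource: foocache
        operator: get
        key: foo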

Fields

cap

The maximum capacity of the cache, in number of entries.

Type: int
Default: 1000

init_values

A table of key/value pairs that should be present in the cache on initialization. This can be used to create static lookup tables.

Type: object
Default: {}

# Examples

init_values:
  Nickelback: "1995"
  Spice Girls: "1994"
  The Human League: "1977"

algorithm

The LRU cache implementation to use.

Type: string
Default: "standard"

Option: Summary

arc: An adaptive replacement cache (ARC). It tracks recent evictions as well as recent usage in both the frequent and recent caches. Its computational overhead is comparable to two_queues, but the memory overhead is linear with the size of the cache. ARC has been patented by IBM.

standard: A simple LRU cache, based on the LRU implementation in groupcache.

two_queues: Tracks frequently used and recently used entries separately. This avoids a burst of accesses from taking out frequently used entries, at the cost of about 2x computational overhead and some extra bookkeeping.
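
As a quick sketch, selecting one of the alternative implementations is just a matter of setting the algorithm field on the resource; the label and cap values here are illustrative.

cache_resources:
  - label: foocache
    lru:
      cap: 1024
      # Use the two-queues implementation instead of the default standard LRU.
      algorithm: two_queues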

two_queues_recent_ratio

The ratio of the two_queues cache dedicated to recently added entries that have only been accessed once.

Type: float
Default: 0.25

two_queues_ghost_ratio

The ratio of ghost entries kept to track entries recently evicted from the two_queues cache.

Type: float
Default: 0.5
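
To make the ratios concrete, here is an assumed tuning sketch: with cap: 1000, a recent ratio of 0.25 dedicates roughly 250 entries to recently added, once-accessed items, and a ghost ratio of 0.5 tracks roughly 500 recently evicted keys.

cache_resources:
  - label: foocache
    lru:
      cap: 1000
      algorithm: two_queues
      # Roughly 250 of the 1000 entries are reserved for recently added items.
      two_queues_recent_ratio: 0.25
      # Roughly 500 ghost entries track keys that were recently evicted.
      two_queues_ghost_ratio: 0.5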

optimistic

If true, the cache does not lock on read/write events. The underlying lru package is thread-safe, however the add operation is not atomic.

Type: bool
Default: false
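
For example, a resource that is only ever read after start-up, such as a static lookup table populated via init_values, could plausibly enable this; treat the following as a sketch rather than a recommendation.

cache_resources:
  - label: lookup_table
    lru:
      cap: 1024
      init_values:
        foo: bar
      # Skip locking on read/write events; acceptable here only because the
      # table is static and is never written to after initialization.
      optimistic: true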