API#
Region#
- class dogpile.cache.region.CacheRegion(name: str | None = None, function_key_generator: ~typing.Callable[[...], ~typing.Callable[[...], str]] = <function function_key_generator>, function_multi_key_generator: ~typing.Callable[[...], ~typing.Callable[[...], ~typing.Sequence[str]]] = <function function_multi_key_generator>, key_mangler: ~typing.Callable[[str], str] | None = None, serializer: ~typing.Callable[[~typing.Any], bytes] | None = None, deserializer: ~typing.Callable[[bytes], ~typing.Any] | None = None, async_creation_runner: ~typing.Callable[[~dogpile.cache.region.CacheRegion, str, ~typing.Callable[[], ~typing.Any], ~dogpile.cache.api.CacheMutex], None] | None = None)#
Bases: object
A front end to a particular cache backend.
- Parameters:
name¶ – Optional, a string name for the region. This isn't used internally but can be accessed via the .name parameter, helpful for configuring a region from a config file.
function_key_generator¶ –
Optional. A function that will produce a "cache key" given a data creation function and arguments, when using the CacheRegion.cache_on_arguments() method. The structure of this function should be two levels: given the data creation function, return a new function that generates the key based on the given arguments. Such as:

```python
def my_key_generator(namespace, fn, **kw):
    fname = fn.__name__

    def generate_key(*arg):
        return namespace + "_" + fname + "_".join(str(s) for s in arg)

    return generate_key


region = make_region(
    function_key_generator=my_key_generator
).configure(
    "dogpile.cache.dbm",
    expiration_time=300,
    arguments={"filename": "file.dbm"},
)
```
The namespace is that passed to CacheRegion.cache_on_arguments(). It's not consulted outside this function, so in fact can be of any form. For example, it can be passed as a tuple, used to specify arguments to pluck from **kw:

```python
def my_key_generator(namespace, fn):
    def generate_key(*arg, **kw):
        return ":".join(
            [kw[k] for k in namespace] + [str(x) for x in arg]
        )

    return generate_key
```

Where the decorator might be used as:

```python
@my_region.cache_on_arguments(namespace=("x", "y"))
def my_function(a, b, **kw):
    return my_data()
```
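To see what keys such a generator produces, it can be invoked directly, outside of any region. This standalone sketch reuses the tuple-namespace generator from above; passing None for the function argument is only for illustration, since this particular generator never consults it:

```python
def my_key_generator(namespace, fn):
    # namespace is a tuple of keyword-argument names to include in the key
    def generate_key(*arg, **kw):
        return ":".join(
            [kw[k] for k in namespace] + [str(x) for x in arg]
        )

    return generate_key


# Build the key function as the decorator would, then generate a key:
gen = my_key_generator(("x", "y"), None)
key = gen(1, 2, x="a", y="b")
# key == "a:b:1:2"
```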
See also
function_key_generator() - default key generator
kwarg_function_key_generator() - optional generator that also uses keyword arguments
function_multi_key_generator¶ –
Optional. Similar to the function_key_generator parameter, but used by CacheRegion.cache_multi_on_arguments(). The generated function should return a list of keys. For example:

```python
def my_multi_key_generator(namespace, fn, **kw):
    namespace = fn.__name__ + (namespace or "")

    def generate_keys(*args):
        return [namespace + ":" + str(a) for a in args]

    return generate_keys
```
key_mangler¶ – Function which will be used on all incoming keys before passing to the backend. Defaults to None, in which case the key mangling function recommended by the cache backend will be used. A typical mangler is the SHA1 mangler found at sha1_mangle_key(), which coerces keys into a SHA1 hash so that the string length is fixed. To disable all key mangling, set to False. Another typical mangler is the built-in Python function str, which can be used to convert non-string or Unicode keys to bytestrings, which is needed when using a backend such as bsddb or dbm under Python 2.x in conjunction with Unicode keys.
serializer¶ –
Function which will be applied to all values before passing to the backend. Defaults to None, in which case the serializer recommended by the backend will be used. Typical serializers include pickle.dumps and json.dumps.
Added in version 1.1.0.
deserializer¶ –
Function which will be applied to all values returned by the backend. Defaults to None, in which case the deserializer recommended by the backend will be used. Typical deserializers include pickle.loads and json.loads.
Deserializers can raise a api.CantDeserializeException if they are unable to deserialize the value from the backend, indicating deserialization failed and that caching should proceed to re-generate a value. This allows an application that has been updated to gracefully re-cache old items which were persisted by a previous version of the application and can no longer be successfully deserialized.
Added in version 1.1.0: added "deserializer" parameter
Added in version 1.2.0: added support for api.CantDeserializeException
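As a sketch of this contract, a JSON deserializer might translate decode failures into the exception so the region regenerates the value. A stand-in exception class is used here so the snippet is self-contained; real code would import CantDeserializeException from dogpile.cache.api:

```python
import json


class CantDeserializeException(Exception):
    """Stand-in for dogpile.cache.api.CantDeserializeException."""


def json_deserializer(raw: bytes):
    # On failure, signal the region to treat the entry as a cache miss
    # and regenerate the value, rather than propagating a decode error.
    try:
        return json.loads(raw)
    except ValueError as err:
        raise CantDeserializeException() from err


json_deserializer(b'{"a": 1}')  # → {'a': 1}
```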
async_creation_runner¶ –
A callable that, when specified, will be passed to and called by dogpile.lock when there is a stale value present in the cache. It will be passed the mutex and is responsible for releasing that mutex when finished. This can be used to defer the computation of expensive creator functions to later points in the future by way of, for example, a background thread, a long-running queue, or a task manager system like Celery.
For a specific example using async_creation_runner, new values can be created in a background thread like so:

```python
import threading


def async_creation_runner(cache, somekey, creator, mutex):
    """Used by dogpile.core:Lock when appropriate"""

    def runner():
        try:
            value = creator()
            cache.set(somekey, value)
        finally:
            mutex.release()

    thread = threading.Thread(target=runner)
    thread.start()


region = make_region(
    async_creation_runner=async_creation_runner,
).configure(
    "dogpile.cache.memcached",
    expiration_time=5,
    arguments={
        "url": "127.0.0.1:11211",
        "distributed_lock": True,
    },
)
```

Remember that the first request for a key with no associated value will always block; async_creator will not be invoked. However, subsequent requests for cached-but-expired values will still return promptly. They will be refreshed by whatever asynchronous means the provided async_creation_runner callable implements.
By default the async_creation_runner is disabled and is set to None.
Added in version 0.4.2: added the async_creation_runner feature.
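The SHA1-style mangler mentioned under key_mangler can be sketched as follows. This is an illustrative re-implementation, not dogpile's exact sha1_mangle_key() code; it shows the essential idea of reducing every incoming key to a fixed-length hex digest before it reaches the backend:

```python
import hashlib


def sha1_style_mangler(key: str) -> str:
    # Coerce any string key to a fixed 40-character SHA1 hex digest.
    return hashlib.sha1(key.encode("utf-8")).hexdigest()


mangled = sha1_style_mangler("user:12345:profile")
# mangled is always 40 hex characters, regardless of the key's length
```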
- property actual_backend#
Return the ultimate backend underneath any proxies.
The backend might be the result of one or more proxy.wrap applications. If so, derive the actual underlying backend.
Added in version 0.6.6.
- cache_multi_on_arguments(namespace: str | None = None, expiration_time: float | ~typing.Callable[[], float] | None = None, should_cache_fn: ~typing.Callable[[~typing.Any], bool] | None = None, asdict: bool = False, to_str: ~typing.Callable[[~typing.Any], str] = <class 'str'>, function_multi_key_generator: ~typing.Callable[[...], ~typing.Callable[[...], ~typing.Sequence[str]]] | None = None) Callable[[Callable[[...], Sequence[Any]]], Callable[[...], Sequence[Any] | Mapping[str, Any]]] #
A function decorator that will cache multiple return values from the function using a sequence of keys derived from the function itself and the arguments passed to it.
This method is the "multiple key" analogue to the CacheRegion.cache_on_arguments() method.
Example:

```python
@someregion.cache_multi_on_arguments()
def generate_something(*keys):
    return [somedatabase.query(key) for key in keys]
```
The decorated function can be called normally. The decorator will produce a list of cache keys using a mechanism similar to that of CacheRegion.cache_on_arguments(), combining the name of the function with the optional namespace and with the string form of each key. It will then consult the cache using the same mechanism as that of CacheRegion.get_multi() to retrieve all current values; the originally passed keys corresponding to those values which aren't generated or need regeneration will be assembled into a new argument list, and the decorated function is then called with that subset of arguments.
The returned result is a list:

```python
result = generate_something("key1", "key2", "key3")
```
The decorator internally makes use of the CacheRegion.get_or_create_multi() method to access the cache and conditionally call the function. See that method for additional behavioral details.
Unlike the CacheRegion.cache_on_arguments() method, CacheRegion.cache_multi_on_arguments() works only with a single function signature, one which takes a simple list of keys as arguments.
Like CacheRegion.cache_on_arguments(), the decorated function is also provided with a set() method, which here accepts a mapping of keys and values to set in the cache:

```python
generate_something.set({"k1": "value1", "k2": "value2", "k3": "value3"})
```

…an invalidate() method, which has the effect of deleting the given sequence of keys using the same mechanism as that of CacheRegion.delete_multi():

```python
generate_something.invalidate("k1", "k2", "k3")
```

…a refresh() method, which will call the creation function, cache the new values, and return them:

```python
values = generate_something.refresh("k1", "k2", "k3")
```

…and a get() method, which will return values based on the given arguments:

```python
values = generate_something.get("k1", "k2", "k3")
```

Added in version 0.5.3: Added get() method to decorated function.
Parameters passed to CacheRegion.cache_multi_on_arguments() have the same meaning as those passed to CacheRegion.cache_on_arguments().
- Parameters:
namespace¶ – optional string argument which will be established as part of each cache key.
expiration_time¶ – if not None, will override the normal expiration time. May be passed as an integer or a callable.
should_cache_fn¶ – passed to CacheRegion.get_or_create_multi(). This function is given a value as returned by the creator, and only if it returns True will that value be placed in the cache.
asdict¶ –
If True, the decorated function should return its result as a dictionary of keys->values, and the final result of calling the decorated function will also be a dictionary. If left at its default value of False, the decorated function should return its result as a list of values, and the final result of calling the decorated function will also be a list.
When asdict==True, if the dictionary returned by the decorated function is missing keys, those keys will not be cached.
to_str¶ – callable, will be called on each function argument in order to convert to a string. Defaults to str(). If the function accepts non-ascii unicode arguments on Python 2.x, the unicode() builtin can be substituted, but note this will produce unicode cache keys which may require key mangling before reaching the cache.
Added in version 0.5.0.
- Parameters:
function_multi_key_generator¶ –
A function that will produce a list of keys. This function will supersede the one configured on the CacheRegion itself.
Added in version 0.5.5.
- cache_on_arguments(namespace: str | None = None, expiration_time: float | ~typing.Callable[[], float] | None = None, should_cache_fn: ~typing.Callable[[~typing.Any], bool] | None = None, to_str: ~typing.Callable[[~typing.Any], str] = <class 'str'>, function_key_generator: ~typing.Callable[[...], ~typing.Callable[[...], str]] | None = None) Callable[[Callable[[...], Any]], Callable[[...], Any]] #
A function decorator that will cache the return value of the function using a key derived from the function itself and its arguments.
The decorator internally makes use of the CacheRegion.get_or_create() method to access the cache and conditionally call the function. See that method for additional behavioral details.
E.g.:

```python
@someregion.cache_on_arguments()
def generate_something(x, y):
    return somedatabase.query(x, y)
```
The decorated function can then be called normally, where data will be pulled from the cache region unless a new value is needed:

```python
result = generate_something(5, 6)
```
The function is also given an attribute invalidate(), which provides for invalidation of the value. Pass to invalidate() the same arguments you'd pass to the function itself to represent a particular value:

```python
generate_something.invalidate(5, 6)
```

Another attribute set() is added to provide extra caching possibilities relative to the function. This is a convenience method for CacheRegion.set() which will store a given value directly without calling the decorated function. The value to be cached is passed as the first argument, and the arguments which would normally be passed to the function should follow:

```python
generate_something.set(3, 5, 6)
```

The above example is equivalent to calling generate_something(5, 6), if the function were to produce the value 3 as the value to be cached.
Added in version 0.4.1: Added set() method to decorated function.
Similar to set() is refresh(). This attribute will invoke the decorated function and populate a new value into the cache with the new value, as well as returning that value:

```python
newvalue = generate_something.refresh(5, 6)
```

Added in version 0.5.0: Added refresh() method to decorated function.
original(), on the other hand, will invoke the decorated function without any caching:

```python
newvalue = generate_something.original(5, 6)
```

Added in version 0.6.0: Added original() method to decorated function.
Lastly, the get() method returns either the value cached for the given key, or the token NO_VALUE if no such key exists:

```python
value = generate_something.get(5, 6)
```

Added in version 0.5.3: Added get() method to decorated function.
The default key generation will use the name of the function, the module name for the function, the arguments passed, as well as an optional "namespace" parameter in order to generate a cache key.
Given a function one inside the module myapp.tools:

```python
@region.cache_on_arguments(namespace="foo")
def one(a, b):
    return a + b
```

Above, calling one(3, 4) will produce a cache key as follows:

```
myapp.tools:one|foo|3 4
```
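The shape of that key can be sketched with a simplified stand-in for the default key generator. This is illustrative only; the real logic lives in function_key_generator():

```python
def default_style_key(module, fname, namespace, *args):
    # module-qualified function name, optional namespace,
    # then the space-joined string form of the arguments
    prefix = module + ":" + fname
    if namespace is not None:
        prefix += "|" + namespace
    return prefix + "|" + " ".join(str(a) for a in args)


key = default_style_key("myapp.tools", "one", "foo", 3, 4)
# key == "myapp.tools:one|foo|3 4"
```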
The key generator will ignore an initial argument of self or cls, making the decorator suitable (with caveats) for use with instance or class methods. Given the example:

```python
class MyClass:
    @region.cache_on_arguments(namespace="foo")
    def one(self, a, b):
        return a + b
```

The cache key above for MyClass().one(3, 4) will again produce the same cache key of myapp.tools:one|foo|3 4 - the name self is skipped.
The namespace parameter is optional, and is used normally to disambiguate two functions of the same name within the same module, as can occur when decorating instance or class methods as below:

```python
class MyClass:
    @region.cache_on_arguments(namespace="MC")
    def somemethod(self, x, y):
        ""


class MyOtherClass:
    @region.cache_on_arguments(namespace="MOC")
    def somemethod(self, x, y):
        ""
```
Above, the namespace parameter disambiguates between somemethod on MyClass and MyOtherClass. Python class declaration mechanics otherwise prevent the decorator from having awareness of the MyClass and MyOtherClass names, as the function is received by the decorator before it becomes an instance method.
The function key generation can be entirely replaced on a per-region basis using the function_key_generator argument present on make_region() and CacheRegion. It defaults to function_key_generator().
- Parameters:
namespace¶ – optional string argument which will be established as part of the cache key. This may be needed to disambiguate functions of the same name within the same source file, such as those associated with classes - note that the decorator itself can’t see the parent class on a function as the class is being declared.
expiration_time¶ –
If not None, will override the normal expiration time.
May be specified as a callable, taking no arguments, that returns a value to be used as the expiration_time. This callable will be called whenever the decorated function itself is called, in caching or retrieving. Thus, this can be used to determine a dynamic expiration time for the cached function result. Example use cases include "cache the result until the end of the day, week or time period" and "cache until a certain date or time passes".
should_cache_fn¶ – passed to CacheRegion.get_or_create().
to_str¶ – callable, will be called on each function argument in order to convert to a string. Defaults to str(). If the function accepts non-ascii unicode arguments on Python 2.x, the unicode() builtin can be substituted, but note this will produce unicode cache keys which may require key mangling before reaching the cache.
function_key_generator¶ – a function that will produce a "cache key". This function will supersede the one configured on the CacheRegion itself.
- configure(backend: str, expiration_time: float | timedelta | None = None, arguments: Mapping[str, Any] | None = None, _config_argument_dict: Mapping[str, Any] | None = None, _config_prefix: str | None = None, wrap: Sequence[ProxyBackend | Type[ProxyBackend]] = (), replace_existing_backend: bool = False, region_invalidator: RegionInvalidationStrategy | None = None) Self #
Configure a CacheRegion.
The CacheRegion itself is returned.
- Parameters:
backend¶ – Required. This is the name of the CacheBackend to use, and is resolved by loading the class from the dogpile.cache entrypoint.
expiration_time¶ –
Optional. The expiration time passed to the dogpile system. May be passed as an integer number of seconds, or as a datetime.timedelta value.
The CacheRegion.get_or_create() method as well as the CacheRegion.cache_on_arguments() decorator (though note: not the CacheRegion.get() method) will call upon the value creation function after this time period has passed since the last generation.
arguments¶ – Optional. The structure here is passed directly to the constructor of the CacheBackend in use, though is typically a dictionary.
wrap¶ –
Optional. A list of ProxyBackend classes and/or instances, each of which will be applied in a chain to ultimately wrap the original backend, so that custom functionality augmentation can be applied.
Added in version 0.5.0.
See also
replace_existing_backend¶ –
If True, the existing cache backend will be replaced. Without this flag, an exception is raised if a backend is already configured.
Added in version 0.5.7.
region_invalidator¶ –
Optional. Override the default invalidation strategy with a custom implementation of RegionInvalidationStrategy.
Added in version 0.6.2.
- configure_from_config(config_dict, prefix)#
Configure from a configuration dictionary and a prefix.
Example:

```python
local_region = make_region()
memcached_region = make_region()

# regions are ready to use for function
# decorators, but not yet for actual caching

# later, when config is available
myconfig = {
    "cache.local.backend": "dogpile.cache.dbm",
    "cache.local.arguments.filename": "/path/to/dbmfile.dbm",
    "cache.memcached.backend": "dogpile.cache.pylibmc",
    "cache.memcached.arguments.url": "127.0.0.1, 10.0.0.1",
}
local_region.configure_from_config(myconfig, "cache.local.")
memcached_region.configure_from_config(myconfig, "cache.memcached.")
```
- delete(key: str) None #
Remove a value from the cache.
This operation is idempotent (can be called multiple times, or on a non-existent key, safely)
- delete_multi(keys: Sequence[str]) None #
Remove multiple values from the cache.
This operation is idempotent (can be called multiple times, or on a non-existent key, safely)
Added in version 0.5.0.
- get(key: str, expiration_time: float | None = None, ignore_expiration: bool = False) Any | Literal[NoValue.NO_VALUE] #
Return a value from the cache, based on the given key.
If the value is not present, the method returns the token api.NO_VALUE. api.NO_VALUE evaluates to False, but is separate from None to distinguish between a cached value of None.
By default, the configured expiration time of the CacheRegion, or alternatively the expiration time supplied by the expiration_time argument, is tested against the creation time of the retrieved value versus the current time (as reported by time.time()). If stale, the cached value is ignored and the api.NO_VALUE token is returned. Passing the flag ignore_expiration=True bypasses the expiration time check.
Changed in version 0.3.0: CacheRegion.get() now checks the value's creation time against the expiration time, rather than returning the value unconditionally.
The method also interprets the cached value in terms of the current "invalidation" time as set by the invalidate() method. If a value is present, but its creation time is older than the current invalidation time, the api.NO_VALUE token is returned. Passing the flag ignore_expiration=True bypasses the invalidation time check.
Added in version 0.3.0: Support for the CacheRegion.invalidate() method.
- Parameters:
key¶ – Key to be retrieved. While it’s typical for a key to be a string, it is ultimately passed directly down to the cache backend, before being optionally processed by the key_mangler function, so can be of any type recognized by the backend or by the key_mangler function, if present.
expiration_time¶ –
Optional expiration time value which will supersede that configured on the CacheRegion itself.
Note
The CacheRegion.get.expiration_time argument is not persisted in the cache and is relevant only to this specific cache retrieval operation, relative to the creation time stored with the existing cached value. Subsequent calls to CacheRegion.get() are not affected by this value.
Added in version 0.3.0.
ignore_expiration¶ –
If True, the value is returned from the cache if present, regardless of configured expiration times or whether or not invalidate() was called.
Added in version 0.3.0.
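The sentinel pattern behind NO_VALUE can be illustrated standalone. This is a sketch, not dogpile's actual NoValue class: a falsy token distinct from None lets a cached None be told apart from a missing key:

```python
class NoValue:
    """Falsy sentinel distinct from None (sketch of dogpile's token)."""

    def __bool__(self):
        return False


NO_VALUE = NoValue()

cache = {"k": None}  # None was explicitly cached under "k"

missing = cache.get("absent", NO_VALUE)
# missing is NO_VALUE (key absent), and bool(missing) is False,
# while cache.get("k", NO_VALUE) is None: a real cached None
```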
- get_multi(keys, expiration_time=None, ignore_expiration=False)#
Return multiple values from the cache, based on the given keys.
Returns values as a list matching the keys given.
E.g.:

```python
values = region.get_multi(["one", "two", "three"])
```

To convert values to a dictionary, use zip():

```python
keys = ["one", "two", "three"]
values = region.get_multi(keys)
dictionary = dict(zip(keys, values))
```
Keys which aren't present in the list are returned as the NO_VALUE token. NO_VALUE evaluates to False, but is separate from None to distinguish between a cached value of None.
By default, the configured expiration time of the CacheRegion, or alternatively the expiration time supplied by the expiration_time argument, is tested against the creation time of the retrieved value versus the current time (as reported by time.time()). If stale, the cached value is ignored and the NO_VALUE token is returned. Passing the flag ignore_expiration=True bypasses the expiration time check.
Added in version 0.5.0.
- get_or_create(key: str, creator: Callable[[...], Any], expiration_time: float | None = None, should_cache_fn: Callable[[Any], bool] | None = None, creator_args: Tuple[Any, Mapping[str, Any]] | None = None) Any #
Return a cached value based on the given key.
If the value does not exist or is considered to be expired based on its creation time, the given creation function may or may not be used to recreate the value and persist the newly generated value in the cache.
Whether or not the function is used depends on if the dogpile lock can be acquired or not. If it can't, it means a different thread or process is already running a creation function for this key against the cache. When the dogpile lock cannot be acquired, the method will block if no previous value is available, until the lock is released and a new value is available. If a previous value is available, that value is returned immediately without blocking.
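Ignoring the locking machinery, the freshness decision can be sketched in plain, single-threaded Python (illustrative names only, not dogpile internals):

```python
import time


def get_or_create(cache, key, creator, expiration_time):
    entry = cache.get(key)  # entry is (value, created_at) or None
    if entry is not None:
        value, created_at = entry
        if time.time() - created_at < expiration_time:
            return value  # fresh value: creator is not called
    value = creator()  # missing or expired: regenerate
    cache[key] = (value, time.time())
    return value


cache = {}
first = get_or_create(cache, "k", lambda: 42, expiration_time=10)   # creator runs
second = get_or_create(cache, "k", lambda: 99, expiration_time=10)  # cached 42
```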
If the invalidate() method has been called, and the retrieved value's timestamp is older than the invalidation timestamp, the value is unconditionally prevented from being returned. The method will attempt to acquire the dogpile lock to generate a new value, or will wait until the lock is released to return the new value.
Changed in version 0.3.0: The value is unconditionally regenerated if the creation time is older than the last call to invalidate().
- Parameters:
key¶ – Key to be retrieved. While it’s typical for a key to be a string, it is ultimately passed directly down to the cache backend, before being optionally processed by the key_mangler function, so can be of any type recognized by the backend or by the key_mangler function, if present.
creator¶ – function which creates a new value.
creator_args¶ –
optional tuple of (args, kwargs) that will be passed to the creator function if present.
Added in version 0.7.0.
expiration_time¶ –
Optional expiration time which will override the expiration time already configured on this CacheRegion if not None. To set no expiration, use the value -1.
Note
The CacheRegion.get_or_create.expiration_time argument is not persisted in the cache and is relevant only to this specific cache retrieval operation, relative to the creation time stored with the existing cached value. Subsequent calls to CacheRegion.get_or_create() are not affected by this value.
should_cache_fn¶ –
Optional callable function which will receive the value returned by the "creator", and will then return True or False, indicating if the value should actually be cached or not. If it returns False, the value is still returned, but isn't cached. E.g.:

```python
def dont_cache_none(value):
    return value is not None


value = region.get_or_create(
    "some key",
    create_value,
    should_cache_fn=dont_cache_none,
)
```

Above, the function returns the value of create_value() if the cache is invalid, however if the return value is None, it won't be cached.
Added in version 0.4.3.
See also
CacheRegion.cache_on_arguments() - applies get_or_create() to any function using a decorator.
CacheRegion.get_or_create_multi() - multiple key/value version
- get_or_create_multi(keys: Sequence[str], creator: Callable[[], Any], expiration_time: float | None = None, should_cache_fn: Callable[[Any], bool] | None = None) Sequence[Any] #
Return a sequence of cached values based on a sequence of keys.
The behavior for generation of values based on keys corresponds to that of Region.get_or_create(), with the exception that the creator() function may be asked to generate any subset of the given keys. The list of keys to be generated is passed to creator(), and creator() should return the generated values as a sequence corresponding to the order of the keys.
The method uses the same approach as Region.get_multi() and Region.set_multi() to get and set values from the backend.
If you are using a CacheBackend or ProxyBackend that modifies values, take note this function invokes .set_multi() for newly generated values using the same values it returns to the calling function. A correct implementation of .set_multi() will not modify values in-place on the submitted mapping dict.
- Parameters:
keys¶ – Sequence of keys to be retrieved.
creator¶ – function which accepts a sequence of keys and returns a sequence of new values.
expiration_time¶ – optional expiration time which will override the expiration time already configured on this CacheRegion if not None. To set no expiration, use the value -1.
should_cache_fn¶ – optional callable function which will receive each value returned by the "creator", and will then return True or False, indicating if the value should actually be cached or not. If it returns False, the value is still returned, but isn't cached.
Added in version 0.5.0.
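The subset-regeneration contract can be sketched standalone (illustrative code, not dogpile internals): only the missing keys are handed to the creator, and its results are merged back in the original key order:

```python
def get_or_create_multi(cache, keys, creator):
    values = {k: cache[k] for k in keys if k in cache}
    missing = [k for k in keys if k not in values]
    if missing:
        # creator receives only the keys needing generation and must
        # return values in the same order as those keys
        new_values = creator(*missing)
        for k, v in zip(missing, new_values):
            cache[k] = v
            values[k] = v
    return [values[k] for k in keys]


cache = {"a": 1}
result = get_or_create_multi(
    cache, ["a", "b"], lambda *ks: [k.upper() for k in ks]
)
# result == [1, "B"]; only "b" was generated
```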
- get_value_metadata(key: str, expiration_time: float | None = None, ignore_expiration: bool = False) CachedValue | None #
Return the CachedValue object directly from the cache.
This is the enclosing datastructure that includes the value as well as the metadata, including the timestamp when the value was cached. Convenience accessors on CachedValue also provide for common data such as CachedValue.cached_time and CachedValue.age.
Added in version 1.3: Added CacheRegion.get_value_metadata()
- invalidate(hard=True)#
Invalidate this CacheRegion.
The default invalidation system works by setting a current timestamp (using time.time()) representing the "minimum creation time" for a value. Any retrieved value whose creation time is prior to this timestamp is considered to be stale. It does not affect the data in the cache in any way, and is local to this instance of CacheRegion.
Warning
The CacheRegion.invalidate() method's default mode of operation is to set a timestamp local to this CacheRegion in this Python process only. It does not impact other Python processes or regions as the timestamp is only stored locally in memory. To implement invalidation where the timestamp is stored in the cache or similar so that all Python processes can be affected by an invalidation timestamp, implement a custom RegionInvalidationStrategy.
Once set, the invalidation time is honored by the CacheRegion.get_or_create(), CacheRegion.get_or_create_multi() and CacheRegion.get() methods.
The method supports both "hard" and "soft" invalidation options. With "hard" invalidation, CacheRegion.get_or_create() will force an immediate regeneration of the value which all getters will wait for. With "soft" invalidation, subsequent getters will return the "old" value until the new one is available.
Usage of "soft" invalidation requires that the region or the method is given a non-None expiration time.
Added in version 0.3.0.
- Parameters:
hard¶ –
if True, cache values will all require immediate regeneration; dogpile logic won’t be used. If False, the creation time of existing values will be pushed back before the expiration time so that a return+regen will be invoked.
Added in version 0.5.1.
- property is_configured#
Return True if the backend has been configured via the CacheRegion.configure() method already.
Added in version 0.5.1.
- key_is_locked(key: str) bool #
Return True if a particular cache key is currently being generated within the dogpile lock.
Added in version 1.1.2.
- set(key: str, value: Any) None #
Place a new value in the cache under the given key.
- set_multi(mapping: Mapping[str, Any]) None #
Place new values in the cache under the given keys.
- wrap(proxy: ProxyBackend | Type[ProxyBackend]) None #
Takes a ProxyBackend instance or class and wraps the attached backend.
- class dogpile.cache.region.RegionInvalidationStrategy#
Bases: object
Region invalidation strategy interface
Implement this interface and pass an implementation instance to CacheRegion.configure() to override default region invalidation.
Example:

```python
class CustomInvalidationStrategy(RegionInvalidationStrategy):
    def __init__(self):
        self._soft_invalidated = None
        self._hard_invalidated = None

    def invalidate(self, hard=None):
        if hard:
            self._soft_invalidated = None
            self._hard_invalidated = time.time()
        else:
            self._soft_invalidated = time.time()
            self._hard_invalidated = None

    def is_invalidated(self, timestamp):
        return (
            (self._soft_invalidated and timestamp < self._soft_invalidated)
            or (self._hard_invalidated and timestamp < self._hard_invalidated)
        )

    def was_hard_invalidated(self):
        return bool(self._hard_invalidated)

    def is_hard_invalidated(self, timestamp):
        return self._hard_invalidated and timestamp < self._hard_invalidated

    def was_soft_invalidated(self):
        return bool(self._soft_invalidated)

    def is_soft_invalidated(self, timestamp):
        return self._soft_invalidated and timestamp < self._soft_invalidated
```
The custom implementation is injected into a CacheRegion at configure time using the CacheRegion.configure.region_invalidator parameter:

```python
region = CacheRegion()

region = region.configure(region_invalidator=CustomInvalidationStrategy())
```

Invalidation strategies that wish to have access to the CacheRegion itself should construct the invalidator given the region as an argument:

```python
class MyInvalidator(RegionInvalidationStrategy):
    def __init__(self, region):
        self.region = region
        # ...

    # ...


region = CacheRegion()
region = region.configure(region_invalidator=MyInvalidator(region))
```
Added in version 0.6.2.
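The timestamp semantics a strategy implements can be exercised standalone. This minimal, hard-mode-only sketch (not the full interface) shows how values created before an invalidation are reported stale while later ones are not:

```python
import time


class MinimalStrategy:
    """Hard-mode-only sketch of the invalidation timestamp check."""

    def __init__(self):
        self._hard_invalidated = None

    def invalidate(self, hard=True):
        self._hard_invalidated = time.time()

    def is_invalidated(self, timestamp):
        return (
            self._hard_invalidated is not None
            and timestamp < self._hard_invalidated
        )


strategy = MinimalStrategy()
created_at = time.time()   # a value cached "now"
time.sleep(0.01)
strategy.invalidate()      # everything older than this instant is stale
time.sleep(0.01)

strategy.is_invalidated(created_at)   # True: cached before invalidation
strategy.is_invalidated(time.time())  # False: created afterwards
```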
- invalidate(hard: bool = True) None #
Region invalidation.
CacheRegion propagated call. The default invalidation system works by setting a current timestamp (using time.time()) to consider all older timestamps effectively invalidated.
- is_hard_invalidated(timestamp: float) bool #
Check timestamp to determine if it was hard invalidated.
- Returns:
Boolean. True if timestamp is older than the last region invalidation time and region is invalidated in hard mode.
- is_invalidated(timestamp: float) bool #
Check timestamp to determine if it was invalidated.
- Returns:
Boolean. True if timestamp is older than the last region invalidation time.
- is_soft_invalidated(timestamp: float) bool #
Check timestamp to determine if it was soft invalidated.
- Returns:
Boolean. True if timestamp is older than the last region invalidation time and region is invalidated in soft mode.
- was_hard_invalidated() bool #
Indicate the region was invalidated in hard mode.
- Returns:
Boolean. True if region was invalidated in hard mode.
- was_soft_invalidated() bool #
Indicate the region was invalidated in soft mode.
- Returns:
Boolean. True if region was invalidated in soft mode.
- dogpile.cache.region.make_region(*arg: Any, **kw: Any) CacheRegion #
Instantiate a new
CacheRegion
.Currently,
make_region()
is a passthrough toCacheRegion
. See that class for constructor arguments.
- dogpile.cache.region.value_version = 2#
An integer placed in the
CachedValue
so that new versions of dogpile.cache can detect cached values from a previous, backwards-incompatible version.
Backend API#
See the section Creating Backends for details on how to register new backends or Changing Backend Behavior for details on how to alter the behavior of existing backends.
- dogpile.cache.api.BackendFormatted#
Describes the type returned from the
CacheBackend.get()
method.alias of
CachedValue
|Literal
[NO_VALUE
] |bytes
- dogpile.cache.api.BackendSetType#
Describes the value argument passed to the
CacheBackend.set()
method.alias of
CachedValue
|bytes
- class dogpile.cache.api.BytesBackend(arguments: Mapping[str, Any])#
Bases:
DefaultSerialization
,CacheBackend
A cache backend that receives and returns series of bytes.
This backend only supports the “serialized” form of values; subclasses should implement
BytesBackend.get_serialized()
,BytesBackend.get_serialized_multi()
,BytesBackend.set_serialized()
,BytesBackend.set_serialized_multi()
.Added in version 1.1.
- get_serialized(key: str) bytes | Literal[NoValue.NO_VALUE] #
Retrieve a serialized value from the cache.
- Parameters:
key¶ – String key that was passed to the
CacheRegion.get()
method, which will also be processed by the “key mangling” function if one was present.- Returns:
a bytes object, or
NO_VALUE
constant if not present.
Added in version 1.1.
- get_serialized_multi(keys: Sequence[str]) Sequence[bytes | Literal[NoValue.NO_VALUE]] #
Retrieve multiple serialized values from the cache.
- Parameters:
keys¶ – sequence of string keys that was passed to the
CacheRegion.get_multi()
method, which will also be processed by the “key mangling” function if one was present.- Returns:
list of bytes objects
Added in version 1.1.
- set_serialized(key: str, value: bytes) None #
Set a serialized value in the cache.
- Parameters:
key¶ – String key that was passed to the
CacheRegion.set()
method, which will also be processed by the “key mangling” function if one was present.value¶ – a bytes object to be stored.
Added in version 1.1.
- set_serialized_multi(mapping: Mapping[str, bytes]) None #
Set multiple serialized values in the cache.
- Parameters:
mapping¶ – a dict in which the key will be whatever was passed to the
CacheRegion.set_multi()
method, processed by the “key mangling” function, if any.
When implementing a new
CacheBackend
or customizing via ProxyBackend
, be aware that when this method is invoked byRegion.get_or_create_multi()
, themapping
values are the same ones returned to the upstream caller. If the subclass alters the values in any way, it must not do so ‘in-place’ on themapping
dict – that will have the undesirable effect of modifying the returned values as well.Added in version 1.1.
- class dogpile.cache.api.CacheBackend(arguments: Mapping[str, Any])#
Bases:
object
Base class for backend implementations.
Backends which set and get Python object values should subclass this backend. For backends in which the value that’s stored is ultimately a stream of bytes, the
BytesBackend
should be used.- delete(key: str) None #
Delete a value from the cache.
- Parameters:
key¶ – String key that was passed to the
CacheRegion.delete()
method, which will also be processed by the “key mangling” function if one was present.
The behavior here should be idempotent, that is, can be called any number of times regardless of whether or not the key exists.
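The idempotent contract can be sketched with a minimal dictionary-backed backend (an illustration, not dogpile.cache's code): delete() must succeed whether or not the key exists.

```python
class DictBackendSketch:
    """Sketch of a dict-based backend whose delete() is idempotent."""

    def __init__(self):
        self._cache = {}

    def set(self, key, value):
        self._cache[key] = value

    def delete(self, key):
        # dict.pop with a default never raises for a missing key
        self._cache.pop(key, None)

backend = DictBackendSketch()
backend.set("user:1", "payload")
backend.delete("user:1")   # removes the key
backend.delete("user:1")   # second call is a harmless no-op
```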
- delete_multi(keys: Sequence[str]) None #
Delete multiple values from the cache.
- Parameters:
keys¶ – sequence of string keys that was passed to the
CacheRegion.delete_multi()
method, which will also be processed by the “key mangling” function if one was present.
The behavior here should be idempotent, that is, can be called any number of times regardless of whether or not the key exists.
Added in version 0.5.0.
- deserializer: None | Callable[[bytes], Any] = None#
deserializer function that will be used by default if not overridden by the region.
Added in version 1.1.
- get(key: str) CachedValue | Literal[NoValue.NO_VALUE] | bytes #
Retrieve an optionally serialized value from the cache.
- Parameters:
key¶ – String key that was passed to the
CacheRegion.get()
method, which will also be processed by the “key mangling” function if one was present.- Returns:
the Python object that corresponds to what was established via the
CacheBackend.set()
method, or theNO_VALUE
constant if not present.
If a serializer is in use, this method will only be called if the
CacheBackend.get_serialized()
method is not overridden.
- get_multi(keys: Sequence[str]) Sequence[CachedValue | Literal[NoValue.NO_VALUE] | bytes] #
Retrieve multiple optionally serialized values from the cache.
- Parameters:
keys¶ – sequence of string keys that was passed to the
CacheRegion.get_multi()
method, which will also be processed by the “key mangling” function if one was present.
- Returns:
a list of values as would be returned
individually via the
CacheBackend.get()
method, corresponding to the list of keys given.
If a serializer is in use, this method will only be called if the
CacheBackend.get_serialized_multi()
method is not overridden.Added in version 0.5.0.
- get_mutex(key: str) CacheMutex | None #
Return an optional mutexing object for the given key.
This object need only provide an
acquire()
andrelease()
method.May return
None
, in which case the dogpile lock will use a regularthreading.Lock
object to mutex concurrent threads for value creation. The default implementation returnsNone
.Different backends may want to provide various kinds of “mutex” objects, such as those which link to lock files, distributed mutexes, memcached semaphores, etc. Whatever kind of system is best suited for the scope and behavior of the caching backend.
A mutex that takes the key into account will allow multiple regenerate operations across keys to proceed simultaneously, while a mutex that does not will serialize regenerate operations to just one at a time across all keys in the region. The latter approach, or a variant that involves a modulus of the given key’s hash value, can be used as a means of throttling the total number of value recreation operations that may proceed at one time.
- get_serialized(key: str) bytes | Literal[NoValue.NO_VALUE] #
Retrieve a serialized value from the cache.
- Parameters:
key¶ – String key that was passed to the
CacheRegion.get()
method, which will also be processed by the “key mangling” function if one was present.- Returns:
a bytes object, or
NO_VALUE
constant if not present.
The default implementation of this method for
CacheBackend
returns the value of theCacheBackend.get()
method.Added in version 1.1.
See also
- get_serialized_multi(keys: Sequence[str]) Sequence[bytes | Literal[NoValue.NO_VALUE]] #
Retrieve multiple serialized values from the cache.
- Parameters:
keys¶ – sequence of string keys that was passed to the
CacheRegion.get_multi()
method, which will also be processed by the “key mangling” function if one was present.- Returns:
list of bytes objects
The default implementation of this method for
CacheBackend
returns the value of theCacheBackend.get_multi()
method.Added in version 1.1.
See also
- key_mangler: Callable[[str], str] | None = None#
Key mangling function.
May be None, or otherwise declared as an ordinary instance method.
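A typical mangler hashes arbitrary (possibly long or non-ASCII) keys into fixed-length hex digests. dogpile.cache ships a similar helper (sha1_mangle_key in dogpile.cache.util); this standalone sketch just illustrates the idea.

```python
import hashlib

def mangle_key(key: str) -> str:
    # fixed-length, backend-safe representation of any string key
    return hashlib.sha1(key.encode("utf-8")).hexdigest()

mangled = mangle_key("user:42:profile")
```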
- serializer: None | Callable[[Any], bytes] = None#
Serializer function that will be used by default if not overridden by the region.
Added in version 1.1.
- set(key: str, value: CachedValue | bytes) None #
Set an optionally serialized value in the cache.
- Parameters:
key¶ – String key that was passed to the
CacheRegion.set()
method, which will also be processed by the “key mangling” function if one was present.value¶ – The optionally serialized
CachedValue
object. May be an instance ofCachedValue
or a bytes object depending on if a serializer is in use with the region and if theCacheBackend.set_serialized()
method is not overridden.
See also
- set_multi(mapping: Mapping[str, CachedValue | bytes]) None #
Set multiple values in the cache.
- Parameters:
mapping¶ – a dict in which the key will be whatever was passed to the
CacheRegion.set_multi()
method, processed by the “key mangling” function, if any.
When implementing a new
CacheBackend
or customizing via ProxyBackend
, be aware that when this method is invoked byRegion.get_or_create_multi()
, themapping
values are the same ones returned to the upstream caller. If the subclass alters the values in any way, it must not do so ‘in-place’ on themapping
dict – that will have the undesirable effect of modifying the returned values as well.If a serializer is in use, this method will only be called if the
CacheBackend.set_serialized_multi()
method is not overridden.Added in version 0.5.0.
- set_serialized(key: str, value: bytes) None #
Set a serialized value in the cache.
- Parameters:
key¶ – String key that was passed to the
CacheRegion.set()
method, which will also be processed by the “key mangling” function if one was present.value¶ – a bytes object to be stored.
The default implementation of this method for
CacheBackend
calls upon theCacheBackend.set()
method.Added in version 1.1.
See also
- set_serialized_multi(mapping: Mapping[str, bytes]) None #
Set multiple serialized values in the cache.
- Parameters:
mapping¶ – a dict in which the key will be whatever was passed to the
CacheRegion.set_multi()
method, processed by the “key mangling” function, if any.
When implementing a new
CacheBackend
or customizing via ProxyBackend
, be aware that when this method is invoked byRegion.get_or_create_multi()
, themapping
values are the same ones returned to the upstream caller. If the subclass alters the values in any way, it must not do so ‘in-place’ on themapping
dict – that will have the undesirable effect of modifying the returned values as well.Added in version 1.1.
The default implementation of this method for
CacheBackend
calls upon theCacheBackend.set_multi()
method.See also
- class dogpile.cache.api.CacheMutex#
Bases:
ABC
Describes a mutexing object with acquire and release methods.
This is an abstract base class; any object that has acquire/release methods may be used.
Added in version 1.1.
See also
CacheBackend.get_mutex()
- the backend method that optionally returns this locking object.- abstract acquire(wait: bool = True) bool #
Acquire the mutex.
- Parameters:
wait¶ – if True, block until available, else return True/False immediately.
- Returns:
True if the lock succeeded.
- abstract locked() bool #
Check if the mutex was acquired.
- Returns:
True if the lock is acquired.
Added in version 1.1.2.
- abstract release() None #
Release the mutex.
- dogpile.cache.api.CacheReturnType#
The non-serialized form of what may be returned from a backend get method.
alias of
CachedValue
|Literal
[NO_VALUE
]
- class dogpile.cache.api.CachedValue(payload: ValuePayload, metadata: MetaDataType)#
Bases:
NamedTuple
Represent a value stored in the cache.
CachedValue
is a two-tuple of(payload, metadata)
, wheremetadata
is dogpile.cache’s tracking information ( currently the creation time).- property age: float#
Returns the elapsed time in seconds as a float since the insertion of the value in the cache.
This value is computed dynamically by subtracting the cached floating point epoch value from the value of
time.time()
.Added in version 1.3.
- property cached_time: float#
The epoch (floating point time value) stored when this payload was cached.
Added in version 1.3.
- metadata: Mapping[str, Any]#
Metadata dictionary for the cached value.
Prefer using accessors such as
CachedValue.cached_time
rather than accessing this mapping directly.
- payload: Any#
the actual cached value.
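The two-tuple structure and the age computation can be sketched standalone. The "ct" metadata key mirrors where dogpile.cache keeps the creation time, but treat that key name, and the class itself, as an illustration rather than the library's actual definition.

```python
import time
from typing import Any, Mapping, NamedTuple

class CachedValueSketch(NamedTuple):
    payload: Any
    metadata: Mapping[str, Any]

    @property
    def cached_time(self) -> float:
        # epoch recorded at insertion time ("ct" key is an assumption)
        return self.metadata["ct"]

    @property
    def age(self) -> float:
        # elapsed seconds since insertion, computed dynamically
        return time.time() - self.cached_time

# a value that was cached five seconds ago
value = CachedValueSketch("hello", {"ct": time.time() - 5.0})
```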
- exception dogpile.cache.api.CantDeserializeException#
Bases:
Exception
Exception indicating that deserialization failed, and that caching should proceed to re-generate a value.
Added in version 1.2.0.
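The pattern looks like the following sketch: a deserializer raises to signal "treat this as a cache miss and regenerate". With dogpile.cache installed you would raise dogpile.cache.api.CantDeserializeException; the local exception class here only keeps the example self-contained.

```python
import json

class CantDeserializeSketch(Exception):
    """Stand-in for dogpile.cache.api.CantDeserializeException."""

def json_deserializer(raw: bytes):
    try:
        return json.loads(raw)
    except ValueError:
        # e.g. bytes written by an older, incompatible serializer
        raise CantDeserializeSketch() from None
```

When the region catches this exception, the cached entry is treated as absent and the creation function runs again.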
- dogpile.cache.api.KeyType#
A cache key.
- dogpile.cache.api.NO_VALUE = <dogpile.cache.api.NoValue object>#
Value returned from
CacheRegion.get()
that describes a key not present.
- class dogpile.cache.api.NoValue(value)#
Bases:
Enum
Describe a missing cache value.
The
NO_VALUE
constant should be used.
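The reason for a sentinel rather than None: a cached value may legitimately be None, so "not present" needs a distinct object compared by identity. This sketch mirrors the single-member Enum pattern that NoValue implements; in real code, use the NO_VALUE constant from dogpile.cache.api.

```python
import enum

class NoValueSketch(enum.Enum):
    """Single-member Enum whose sole instance acts as a sentinel."""
    NO_VALUE = "NoValue.NO_VALUE"

NO_VALUE = NoValueSketch.NO_VALUE

cache = {"present-but-none": None}
hit = cache.get("present-but-none", NO_VALUE)   # None, a real cached value
miss = cache.get("absent", NO_VALUE)            # the sentinel: key not present
```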
- dogpile.cache.api.SerializedReturnType#
The serialized form of what may be returned from a backend get method.
alias of
bytes
|Literal
[NO_VALUE
]
- dogpile.cache.api.ValuePayload = typing.Any#
An object to be placed in the cache against a key.
Backends#
Memory Backends#
Provides simple dictionary-based backends.
The two backends are MemoryBackend
and MemoryPickleBackend
;
the latter applies a serialization step to cached values while the former
places the value as given into the dictionary.
- class dogpile.cache.backends.memory.MemoryBackend(arguments)#
Bases:
CacheBackend
A backend that uses a plain dictionary.
There is no size management, and values which are placed into the dictionary will remain until explicitly removed. Note that Dogpile’s expiration of items is based on timestamps and does not remove them from the cache.
E.g.:
from dogpile.cache import make_region

region = make_region().configure(
    'dogpile.cache.memory'
)
To use a Python dictionary of your choosing, it can be passed in with the
cache_dict
argument:

my_dictionary = {}
region = make_region().configure(
    'dogpile.cache.memory',
    arguments={
        "cache_dict": my_dictionary
    }
)
- class dogpile.cache.backends.memory.MemoryPickleBackend(arguments)#
Bases:
DefaultSerialization
,MemoryBackend
A backend that uses a plain dictionary, but serializes objects on
MemoryBackend.set()
and deserializesMemoryBackend.get()
.E.g.:
from dogpile.cache import make_region

region = make_region().configure(
    'dogpile.cache.memory_pickle'
)
The usage of pickle to serialize cached values allows an object as placed in the cache to be a copy of the original given object, so that any subsequent changes to the given object aren’t reflected in the cached value, thus making the backend behave the same way as other backends which make use of serialization.
The serialization is performed via pickle, and incurs the same performance hit in doing so as that of other backends; in this way the
MemoryPickleBackend
performance is somewhere in between that of the pureMemoryBackend
and the remote server oriented backends such as that of Memcached or Redis.Pickle behavior here is the same as that of the Redis backend, using either
cPickle
orpickle
and specifyingHIGHEST_PROTOCOL
upon serialize.Added in version 0.5.3.
Memcached Backends#
Provides backends for talking to memcached.
- class dogpile.cache.backends.memcached.BMemcachedBackend(arguments)#
Bases:
GenericMemcachedBackend
A backend for the python-binary-memcached memcached client.
This is a pure Python memcached client which includes security features like SASL and SSL/TLS.
SASL is a standard for adding authentication mechanisms to protocols in a way that is protocol independent.
A typical configuration using username/password:
from dogpile.cache import make_region

region = make_region().configure(
    'dogpile.cache.bmemcached',
    expiration_time=3600,
    arguments={
        'url': ["127.0.0.1"],
        'username': 'scott',
        'password': 'tiger'
    }
)
A typical configuration using tls_context:
import ssl
from dogpile.cache import make_region

ctx = ssl.create_default_context(cafile="/path/to/my-ca.pem")

region = make_region().configure(
    'dogpile.cache.bmemcached',
    expiration_time=3600,
    arguments={
        'url': ["127.0.0.1"],
        'tls_context': ctx,
    }
)
For advanced TLS configuration using a more complex tls_context, see https://docs.python.org/3/library/ssl.html
Arguments which can be passed to the
arguments
dictionary include:- Parameters:
- delete_multi(keys)#
The python-binary-memcached API does not implement delete_multi.
- class dogpile.cache.backends.memcached.GenericMemcachedBackend(arguments)#
Bases:
CacheBackend
Base class for memcached backends.
This base class accepts a number of parameters common to all backends.
- Parameters:
url¶ – the string URL to connect to. Can be a single string or a list of strings. This is the only argument that’s required.
distributed_lock¶ – boolean, when True, will use a memcached-lock as the dogpile lock (see
MemcachedLock
). Use this when multiple processes will be talking to the same memcached instance. When left at False, dogpile will coordinate on a regular threading mutex.lock_timeout¶ –
integer, number of seconds after acquiring a lock that memcached should expire it. This argument is only valid when
distributed_lock
isTrue
.Added in version 0.5.7.
The
GenericMemcachedBackend
uses athreading.local()
object to store individual client objects per thread, as most modern memcached clients do not appear to be inherently threadsafe.In particular,
threading.local()
has the advantage over pylibmc’s built-in thread pool in that it automatically discards objects associated with a particular thread when that thread ends.- property client#
Return the memcached client.
This uses a threading.local by default as it appears most modern memcached libs aren’t inherently threadsafe.
- set_arguments: Mapping[str, Any] = {}#
Additional arguments which will be passed to the
set()
method.
- class dogpile.cache.backends.memcached.MemcachedBackend(arguments)#
Bases:
MemcacheArgs
,GenericMemcachedBackend
A backend using the standard Python-memcached library.
Example:
from dogpile.cache import make_region

region = make_region().configure(
    'dogpile.cache.memcached',
    expiration_time=3600,
    arguments={
        'url': "127.0.0.1:11211"
    }
)
- Parameters:
dead_retry¶ –
Number of seconds memcached server is considered dead before it is tried again. Will be passed to
memcache.Client
as thedead_retry
parameter.Changed in version 1.1.8: Moved the
dead_retry
argument which was erroneously added to “set_parameters” to be part of the Memcached connection arguments.socket_timeout¶ –
Timeout in seconds for every call to a server. Will be passed to
memcache.Client
as thesocket_timeout
parameter.Changed in version 1.1.8: Moved the
socket_timeout
argument which was erroneously added to “set_parameters” to be part of the Memcached connection arguments.
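The two connection arguments above can be combined in a configuration such as the following sketch; the values are illustrative and assume a local memcached server.

```python
from dogpile.cache import make_region

region = make_region().configure(
    'dogpile.cache.memcached',
    expiration_time=3600,
    arguments={
        'url': "127.0.0.1:11211",
        'dead_retry': 30,     # seconds before a dead server is retried
        'socket_timeout': 3,  # per-call socket timeout, in seconds
    }
)
```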
- class dogpile.cache.backends.memcached.MemcachedLock(client_fn, key, timeout=0)#
Bases:
object
Simple distributed lock using memcached.
- class dogpile.cache.backends.memcached.PyMemcacheBackend(arguments)#
Bases:
GenericMemcachedBackend
A backend for the pymemcache memcached client.
A comprehensive, fast, pure Python memcached client
Added in version 1.1.2.
pymemcache supports the following features:
Complete implementation of the memcached text protocol.
Configurable timeouts for socket connect and send/recv calls.
Access to the “noreply” flag, which can significantly increase the speed of writes.
Flexible, simple approach to serialization and deserialization.
The (optional) ability to treat network and memcached errors as cache misses.
dogpile.cache uses the
HashClient
from pymemcache in order to reduce API differences when compared to other memcached client drivers. This allows the user to provide a single server or a list of memcached servers.Arguments which can be passed to the
arguments
dictionary include:- Parameters:
tls_context¶ –
optional TLS context, will be used for TLS connections.
A typical configuration using tls_context:
import ssl
from dogpile.cache import make_region

ctx = ssl.create_default_context(cafile="/path/to/my-ca.pem")

region = make_region().configure(
    'dogpile.cache.pymemcache',
    expiration_time=3600,
    arguments={
        'url': ["127.0.0.1"],
        'tls_context': ctx,
    }
)
See also
https://docs.python.org/3/library/ssl.html - additional TLS documentation.
serde¶ – optional “serde”. Defaults to
pymemcache.serde.pickle_serde
.default_noreply¶ – defaults to False. When set to True this flag enables the pymemcache “noreply” feature. See the pymemcache documentation for further details.
socket_keepalive¶ –
optional socket keepalive, will be used for TCP keepalive configuration. Use of this parameter requires pymemcache 3.5.0 or greater. This parameter accepts a pymemcache.client.base.KeepAliveOpts object.
A typical configuration using
socket_keepalive
:

from pymemcache import KeepaliveOpts
from dogpile.cache import make_region

# Using the default keepalive configuration
socket_keepalive = KeepaliveOpts()

region = make_region().configure(
    'dogpile.cache.pymemcache',
    expiration_time=3600,
    arguments={
        'url': ["127.0.0.1"],
        'socket_keepalive': socket_keepalive
    }
)
Added in version 1.1.4: - added support for
socket_keepalive
.enable_retry_client¶ –
optional flag to enable retry client mechanisms to handle failure. Defaults to False. When set to
True
, thePyMemcacheBackend.retry_attempts
parameter must also be set, along with optional parametersPyMemcacheBackend.retry_delay
.PyMemcacheBackend.retry_for
,PyMemcacheBackend.do_not_retry_for
.See also
https://pymemcache.readthedocs.io/en/latest/getting_started.html#using-the-built-in-retrying-mechanism - in the pymemcache documentation
Added in version 1.1.4.
retry_attempts¶ –
how many times to attempt an action with pymemcache’s retrying wrapper before failing. Must be 1 or above. Defaults to None.
Added in version 1.1.4.
retry_delay¶ –
optional int|float, how many seconds to sleep between each attempt. Used by the retry wrapper. Defaults to None.
Added in version 1.1.4.
retry_for¶ –
optional None|tuple|set|list, what exceptions to allow retries for. Will allow retries for all exceptions if None. Example:
(MemcacheClientError, MemcacheUnexpectedCloseError)
Accepts any class that is a subclass of Exception. Defaults to None.Added in version 1.1.4.
do_not_retry_for¶ –
optional None|tuple|set|list, what exceptions should be retried. Will not block retries for any Exception if None. Example:
(IOError, MemcacheIllegalInputError)
Accepts any class that is a subclass of Exception. Defaults to None.Added in version 1.1.4.
hashclient_retry_attempts¶ –
Amount of times a client should be tried before it is marked dead and removed from the pool in the HashClient’s internal mechanisms.
Added in version 1.1.5.
hashclient_retry_timeout¶ –
Time in seconds that should pass between retry attempts in the HashClient’s internal mechanisms.
Added in version 1.1.5.
dead_timeout¶ –
Time in seconds before attempting to add a node back in the pool in the HashClient’s internal mechanisms.
Added in version 1.1.5.
memcached_expire_time¶ –
integer, when present will be passed as the
time
parameter to theset
method. This is used to set the memcached expiry time for a value.Note
This parameter is different from Dogpile’s own
expiration_time
, which is the number of seconds after which Dogpile will consider the value to be expired. When Dogpile considers a value to be expired, it continues to use the value until generation of a new value is complete, when usingCacheRegion.get_or_create()
. Therefore, if you are settingmemcached_expire_time
, you’ll want to make sure it is greater thanexpiration_time
by at least enough seconds for new values to be generated, else the value won’t be available during a regeneration, forcing all threads to wait for a regeneration each time a value expires.Added in version 1.3.3.
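For example, the following sketch leaves 300 seconds of headroom between dogpile's own expiration_time and the memcached-side expiry, so a stale value remains servable while a new one is generated; the figures are illustrative.

```python
from dogpile.cache import make_region

region = make_region().configure(
    'dogpile.cache.pymemcache',
    expiration_time=3600,               # dogpile treats values as stale here
    arguments={
        'url': ["127.0.0.1"],
        'memcached_expire_time': 3900,  # memcached drops them 300s later
    }
)
```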
- class dogpile.cache.backends.memcached.PylibmcBackend(arguments)#
Bases:
MemcacheArgs
,GenericMemcachedBackend
A backend for the pylibmc memcached client.
A configuration illustrating several of the optional arguments described in the pylibmc documentation:
from dogpile.cache import make_region

region = make_region().configure(
    'dogpile.cache.pylibmc',
    expiration_time=3600,
    arguments={
        'url': ["127.0.0.1"],
        'binary': True,
        'behaviors': {"tcp_nodelay": True, "ketama": True}
    }
)
Arguments accepted here include those of
GenericMemcachedBackend
, as well as those below.
Redis Backends#
Provides backends for talking to Redis.
- class dogpile.cache.backends.redis.RedisBackend(arguments)#
Bases:
BytesBackend
A Redis backend, using the redis-py driver.
Example configuration:
from dogpile.cache import make_region

region = make_region().configure(
    'dogpile.cache.redis',
    arguments={
        'host': 'localhost',
        'port': 6379,
        'db': 0,
        'redis_expiration_time': 60*60*2,   # 2 hours
        'distributed_lock': True,
        'thread_local_lock': False
    }
)
Arguments accepted in the arguments dictionary:
- Parameters:
url¶ – string. If provided, will override separate host/username/password/port/db params. The format is that accepted by
StrictRedis.from_url()
.host¶ – string, default is
localhost
.username¶ –
string, default is no username.
Added in version 1.3.1.
password¶ – string, default is no password.
port¶ – integer, default is
6379
.db¶ – integer, default is
0
.redis_expiration_time¶ – integer, number of seconds after setting a value that Redis should expire it. This should be larger than dogpile’s cache expiration. By default no expiration is set.
distributed_lock¶ – boolean, when True, will use a redis-lock as the dogpile lock. Use this when multiple processes will be talking to the same redis instance. When left at False, dogpile will coordinate on a regular threading mutex.
lock_timeout¶ – integer, number of seconds after acquiring a lock that Redis should expire it. This argument is only valid when
distributed_lock
isTrue
.socket_timeout¶ – float, seconds for socket timeout. Default is None (no timeout).
socket_connect_timeout¶ –
float, seconds for socket connection timeout. Default is None (no timeout).
Added in version 1.3.2.
socket_keepalive¶ –
boolean, when True, socket keepalive is enabled. Default is False.
Added in version 1.3.2.
socket_keepalive_options¶ –
dict, socket keepalive options. Default is None (no options).
Added in version 1.3.2.
lock_sleep¶ – integer, number of seconds to sleep when failed to acquire a lock. This argument is only valid when
distributed_lock
isTrue
.connection_pool¶ –
redis.ConnectionPool
object. If provided, this object supersedes other connection arguments passed to theredis.StrictRedis
instance, including url and/or host as well as socket_timeout, and will be passed toredis.StrictRedis
as the source of connectivity.thread_local_lock¶ – bool, whether a thread-local Redis lock object should be used. This is the default, but is not compatible with asynchronous runners, as they run in a different thread than the one used to create the lock.
connection_kwargs¶ –
dict, additional keyword arguments are passed along to the
StrictRedis.from_url()
method orStrictRedis()
constructor directly, including parameters likessl
,ssl_certfile
,charset
, etc.Added in version 1.1.6.
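A sketch of passing driver-level arguments through connection_kwargs; the hostname and values are illustrative.

```python
from dogpile.cache import make_region

region = make_region().configure(
    'dogpile.cache.redis',
    arguments={
        'host': 'redis.example.com',
        'port': 6379,
        # forwarded to redis-py's StrictRedis() / from_url()
        'connection_kwargs': {
            'ssl': True,
            'socket_timeout': 5,
        }
    }
)
```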
- class dogpile.cache.backends.redis.RedisClusterBackend(arguments)#
Bases:
RedisBackend
A Redis backend, using the redis-py driver. This backend is to be used when connecting to a Redis Cluster which will use the RedisCluster Client.
See also
Clustering in the redis-py documentation.
Requires redis-py version >=4.1.0.
Added in version 1.3.2.
Connecting to the cluster requires one of:
Passing a list of startup nodes
Passing only one node of the cluster, Redis will use automatic discovery to find the other nodes.
Example configuration, using startup nodes:
from dogpile.cache import make_region
from redis.cluster import ClusterNode

region = make_region().configure(
    'dogpile.cache.redis_cluster',
    arguments={
        "startup_nodes": [
            ClusterNode('localhost', 6379),
            ClusterNode('localhost', 6378)
        ]
    }
)
It is recommended to use startup nodes, so that connections will be successful as at least one node will always be present. Connection arguments such as password, username or CA certificate may be passed using
connection_kwargs
:

from dogpile.cache import make_region
from redis.cluster import ClusterNode

connection_kwargs = {
    "username": "admin",
    "password": "averystrongpassword",
    "ssl": True,
    "ssl_ca_certs": "redis.pem",
}

nodes = [
    ClusterNode("localhost", 6379),
    ClusterNode("localhost", 6380),
    ClusterNode("localhost", 6381),
]

region = make_region().configure(
    "dogpile.cache.redis_cluster",
    arguments={
        "startup_nodes": nodes,
        "connection_kwargs": connection_kwargs,
    },
)
Passing a URL to one node only will allow the driver to discover the whole cluster automatically:
from dogpile.cache import make_region

region = make_region().configure(
    'dogpile.cache.redis_cluster',
    arguments={
        "url": "localhost:6379/0"
    }
)
A caveat of the above approach is that if the single targeted node is unavailable, the connection will fail.
Parameters accepted include:
- Parameters:
startup_nodes¶ – List of ClusterNode. The list of nodes in the cluster that the client will try to connect to.
url¶ – string. If provided, will override separate host/password/port/db params. The format is that accepted by
RedisCluster.from_url()
.db¶ – integer, default is
0
.redis_expiration_time¶ – integer, number of seconds after setting a value that Redis should expire it. This should be larger than dogpile’s cache expiration. By default no expiration is set.
distributed_lock¶ – boolean, when True, will use a redis-lock as the dogpile lock. Use this when multiple processes will be talking to the same redis instance. When left at False, dogpile will coordinate on a regular threading mutex.
lock_timeout¶ – integer, number of seconds after acquiring a lock that Redis should expire it. This argument is only valid when
distributed_lock
isTrue
.socket_timeout¶ – float, seconds for socket timeout. Default is None (no timeout).
socket_connect_timeout¶ – float, seconds for socket connection timeout. Default is None (no timeout).
socket_keepalive¶ – boolean, when True, socket keepalive is enabled. Default is False.
lock_sleep¶ – integer, number of seconds to sleep when failed to acquire a lock. This argument is only valid when
distributed_lock
isTrue
.thread_local_lock¶ – bool, whether a thread-local Redis lock object should be used. This is the default, but is not compatible with asynchronous runners, as they run in a different thread than the one used to create the lock.
connection_kwargs¶ – dict, additional keyword arguments are passed along to the
RedisCluster.from_url()
method orRedisCluster()
constructor directly, including parameters likessl
,ssl_certfile
,charset
, etc.
- class dogpile.cache.backends.redis.RedisSentinelBackend(arguments)#
Bases:
RedisBackend
A Redis backend, using the redis-py driver. This backend is to be used when using Redis Sentinel.
Added in version 1.0.0.
Example configuration:
from dogpile.cache import make_region

region = make_region().configure(
    'dogpile.cache.redis_sentinel',
    arguments={
        'sentinels': [
            ['redis_sentinel_1', 26379],
            ['redis_sentinel_2', 26379]
        ],
        'db': 0,
        'redis_expiration_time': 60*60*2,   # 2 hours
        'distributed_lock': True,
        'thread_local_lock': False
    }
)
Arguments accepted in the arguments dictionary:
- Parameters:
username¶ –
string, default is no username.
Added in version 1.3.1.
password¶ – string, default is no password.
db¶ – integer, default is
0
.redis_expiration_time¶ – integer, number of seconds after setting a value that Redis should expire it. This should be larger than dogpile’s cache expiration. By default no expiration is set.
distributed_lock¶ – boolean, when True, will use a redis-lock as the dogpile lock. Use this when multiple processes will be talking to the same redis instance. When False, dogpile will coordinate on a regular threading mutex, Default is True.
lock_timeout¶ – integer, number of seconds after acquiring a lock that Redis should expire it. This argument is only valid when
distributed_lock
isTrue
.socket_timeout¶ –
float, seconds for socket timeout. Default is None (no timeout).
Added in version 1.3.2.
socket_connect_timeout¶ –
float, seconds for socket connection timeout. Default is None (no timeout).
Added in version 1.3.2.
socket_keepalive¶ –
boolean, when True, socket keepalive is enabled. Default is False.
Added in version 1.3.2.
socket_keepalive_options¶ – dict, socket keepalive options. Default is {} (no options).
sentinels¶ – is a list of sentinel nodes. Each node is represented by a pair (hostname, port). Default is None (not in sentinel mode).
service_name¶ – str, the service name. Default is ‘mymaster’.
sentinel_kwargs¶ – is a dictionary of connection arguments used when connecting to sentinel instances. Any argument that can be passed to a normal Redis connection can be specified here. Default is {}.
connection_kwargs¶ – dict, additional keyword arguments are passed along to the StrictRedis.from_url() method or StrictRedis() constructor directly, including parameters like ssl, ssl_certfile, charset, etc.
lock_sleep¶ – integer, number of seconds to sleep when failed to acquire a lock. This argument is only valid when distributed_lock is True.
thread_local_lock¶ – bool, whether a thread-local Redis lock object should be used. This is the default, but is not compatible with asynchronous runners, as they run in a different thread than the one used to create the lock.
Valkey Backends#
Provides backends for talking to Valkey.
- class dogpile.cache.backends.valkey.ValkeyBackend(arguments)#
Bases:
BytesBackend
A Valkey backend, using the valkey-py driver.
Added in version 1.3.4.
Example configuration:
from dogpile.cache import make_region

region = make_region().configure(
    'dogpile.cache.valkey',
    arguments = {
        'host': 'localhost',
        'port': 6379,
        'db': 0,
        'valkey_expiration_time': 60*60*2,  # 2 hours
        'distributed_lock': True,
        'thread_local_lock': False
    }
)
Arguments accepted in the arguments dictionary:
- Parameters:
url¶ – string. If provided, will override separate host/username/password/port/db params. The format is that accepted by StrictValkey.from_url().
host¶ – string, default is localhost.
username¶ – string, default is no username.
password¶ – string, default is no password.
port¶ – integer, default is 6379.
db¶ – integer, default is 0.
valkey_expiration_time¶ – integer, number of seconds after setting a value that Valkey should expire it. This should be larger than dogpile’s cache expiration. By default no expiration is set.
distributed_lock¶ – boolean, when True, will use a valkey-lock as the dogpile lock. Use this when multiple processes will be talking to the same valkey instance. When left at False, dogpile will coordinate on a regular threading mutex.
lock_timeout¶ – integer, number of seconds after acquiring a lock that Valkey should expire it. This argument is only valid when distributed_lock is True.
socket_timeout¶ – float, seconds for socket timeout. Default is None (no timeout).
socket_connect_timeout¶ – float, seconds for socket connection timeout. Default is None (no timeout).
socket_keepalive¶ – boolean, when True, socket keepalive is enabled. Default is False.
socket_keepalive_options¶ – dict, socket keepalive options. Default is None (no options).
lock_sleep¶ – integer, number of seconds to sleep when failed to acquire a lock. This argument is only valid when distributed_lock is True.
connection_pool¶ – valkey.ConnectionPool object. If provided, this object supersedes other connection arguments passed to the valkey.StrictValkey instance, including url and/or host as well as socket_timeout, and will be passed to valkey.StrictValkey as the source of connectivity.
thread_local_lock¶ – bool, whether a thread-local Valkey lock object should be used. This is the default, but is not compatible with asynchronous runners, as they run in a different thread than the one used to create the lock.
connection_kwargs¶ – dict, additional keyword arguments are passed along to the StrictValkey.from_url() method or StrictValkey() constructor directly, including parameters like ssl, ssl_certfile, charset, etc.
- class dogpile.cache.backends.valkey.ValkeyClusterBackend(arguments)#
Bases:
ValkeyBackend
A Valkey backend, using the valkey-py driver. This backend is to be used when connecting to a Valkey Cluster which will use the ValkeyCluster Client.
See also
Clustering in the valkey-py documentation.
Requires valkey-py version >=4.1.0.
Added in version 1.3.2.
Connecting to the cluster requires one of:
Passing a list of startup nodes
Passing only one node of the cluster; Valkey will use automatic discovery to find the other nodes.
Example configuration, using startup nodes:
from dogpile.cache import make_region
from valkey.cluster import ClusterNode

region = make_region().configure(
    'dogpile.cache.valkey_cluster',
    arguments = {
        "startup_nodes": [
            ClusterNode('localhost', 6379),
            ClusterNode('localhost', 6378)
        ]
    }
)
It is recommended to use startup nodes, so that connections will be successful as at least one node will always be present. Connection arguments such as password, username or CA certificate may be passed using connection_kwargs:
from dogpile.cache import make_region
from valkey.cluster import ClusterNode

connection_kwargs = {
    "username": "admin",
    "password": "averystrongpassword",
    "ssl": True,
    "ssl_ca_certs": "valkey.pem",
}

nodes = [
    ClusterNode("localhost", 6379),
    ClusterNode("localhost", 6380),
    ClusterNode("localhost", 6381),
]

region = make_region().configure(
    "dogpile.cache.valkey_cluster",
    arguments={
        "startup_nodes": nodes,
        "connection_kwargs": connection_kwargs,
    },
)
Passing a URL to one node only will allow the driver to discover the whole cluster automatically:
from dogpile.cache import make_region

region = make_region().configure(
    'dogpile.cache.valkey_cluster',
    arguments = {
        "url": "localhost:6379/0"
    }
)
A caveat of the above approach is that if the single targeted node is unavailable, the connection will fail.
Parameters accepted include:
- Parameters:
startup_nodes¶ – List of ClusterNode. The list of nodes in the cluster that the client will try to connect to.
url¶ – string. If provided, will override separate host/password/port/db params. The format is that accepted by ValkeyCluster.from_url().
db¶ – integer, default is 0.
valkey_expiration_time¶ – integer, number of seconds after setting a value that Valkey should expire it. This should be larger than dogpile’s cache expiration. By default no expiration is set.
distributed_lock¶ – boolean, when True, will use a valkey-lock as the dogpile lock. Use this when multiple processes will be talking to the same valkey instance. When left at False, dogpile will coordinate on a regular threading mutex.
lock_timeout¶ – integer, number of seconds after acquiring a lock that Valkey should expire it. This argument is only valid when distributed_lock is True.
socket_timeout¶ – float, seconds for socket timeout. Default is None (no timeout).
socket_connect_timeout¶ – float, seconds for socket connection timeout. Default is None (no timeout).
socket_keepalive¶ – boolean, when True, socket keepalive is enabled. Default is False.
lock_sleep¶ – integer, number of seconds to sleep when failed to acquire a lock. This argument is only valid when distributed_lock is True.
thread_local_lock¶ – bool, whether a thread-local Valkey lock object should be used. This is the default, but is not compatible with asynchronous runners, as they run in a different thread than the one used to create the lock.
connection_kwargs¶ – dict, additional keyword arguments are passed along to the ValkeyCluster.from_url() method or ValkeyCluster() constructor directly, including parameters like ssl, ssl_certfile, charset, etc.
- class dogpile.cache.backends.valkey.ValkeySentinelBackend(arguments)#
Bases:
ValkeyBackend
A Valkey backend, using the valkey-py driver. This backend is to be used when using Valkey Sentinel.
Added in version 1.0.0.
Example configuration:
from dogpile.cache import make_region

region = make_region().configure(
    'dogpile.cache.valkey_sentinel',
    arguments = {
        'sentinels': [
            ['valkey_sentinel_1', 26379],
            ['valkey_sentinel_2', 26379]
        ],
        'db': 0,
        'valkey_expiration_time': 60*60*2,  # 2 hours
        'distributed_lock': True,
        'thread_local_lock': False
    }
)
Arguments accepted in the arguments dictionary:
- Parameters:
username¶ –
string, default is no username.
Added in version 1.3.1.
password¶ – string, default is no password.
db¶ – integer, default is 0.
valkey_expiration_time¶ – integer, number of seconds after setting a value that Valkey should expire it. This should be larger than dogpile’s cache expiration. By default no expiration is set.
distributed_lock¶ – boolean, when True, will use a valkey-lock as the dogpile lock. Use this when multiple processes will be talking to the same valkey instance. When False, dogpile will coordinate on a regular threading mutex. Default is True.
lock_timeout¶ – integer, number of seconds after acquiring a lock that Valkey should expire it. This argument is only valid when distributed_lock is True.
socket_timeout¶ –
float, seconds for socket timeout. Default is None (no timeout).
Added in version 1.3.2.
socket_connect_timeout¶ –
float, seconds for socket connection timeout. Default is None (no timeout).
Added in version 1.3.2.
socket_keepalive¶ –
boolean, when True, socket keepalive is enabled. Default is False.
Added in version 1.3.2.
socket_keepalive_options¶ – dict, socket keepalive options. Default is {} (no options).
sentinels¶ – is a list of sentinel nodes. Each node is represented by a pair (hostname, port). Default is None (not in sentinel mode).
service_name¶ – str, the service name. Default is ‘mymaster’.
sentinel_kwargs¶ – is a dictionary of connection arguments used when connecting to sentinel instances. Any argument that can be passed to a normal Valkey connection can be specified here. Default is {}.
connection_kwargs¶ – dict, additional keyword arguments are passed along to the StrictValkey.from_url() method or StrictValkey() constructor directly, including parameters like ssl, ssl_certfile, charset, etc.
lock_sleep¶ – integer, number of seconds to sleep when failed to acquire a lock. This argument is only valid when distributed_lock is True.
thread_local_lock¶ – bool, whether a thread-local Valkey lock object should be used. This is the default, but is not compatible with asynchronous runners, as they run in a different thread than the one used to create the lock.
File Backends#
Provides backends that deal with local filesystem access.
- class dogpile.cache.backends.file.AbstractFileLock(filename)#
Bases:
object
Coordinate read/write access to a file.
Typically this is a file-based lock, but it doesn’t necessarily have to be.
The default implementation here is FileLock.
Implementations should provide the following methods:
* __init__()
* acquire_read_lock()
* acquire_write_lock()
* release_read_lock()
* release_write_lock()
The __init__() method accepts a single argument “filename”, which may be used as the “lock file”, for those implementations that use a lock file.
Note that multithreaded environments must provide a thread-safe version of this lock. The recommended approach for file-descriptor-based locks is to use a Python threading.local() so that a unique file descriptor is held per thread. See the source code of FileLock for an implementation example.
- acquire(wait=True)#
Acquire the “write” lock.
This is a direct call to
AbstractFileLock.acquire_write_lock()
.
- acquire_read_lock(wait)#
Acquire a ‘reader’ lock.
Raises
NotImplementedError
by default, must be implemented by subclasses.
- acquire_write_lock(wait)#
Acquire a ‘write’ lock.
Raises
NotImplementedError
by default, must be implemented by subclasses.
- property is_open#
Optional method.
- read()#
Provide a context manager for the “read” lock.
This method makes use of
AbstractFileLock.acquire_read_lock()
and AbstractFileLock.release_read_lock().
- release()#
Release the “write” lock.
This is a direct call to
AbstractFileLock.release_write_lock()
.
- release_read_lock()#
Release a ‘reader’ lock.
Raises
NotImplementedError
by default, must be implemented by subclasses.
- release_write_lock()#
Release a ‘writer’ lock.
Raises
NotImplementedError
by default, must be implemented by subclasses.
- write()#
Provide a context manager for the “write” lock.
This method makes use of
AbstractFileLock.acquire_write_lock()
and AbstractFileLock.release_write_lock().
- class dogpile.cache.backends.file.DBMBackend(arguments)#
Bases:
BytesBackend
A file-backend using a dbm file to store keys.
Basic usage:
from dogpile.cache import make_region

region = make_region().configure(
    'dogpile.cache.dbm',
    expiration_time = 3600,
    arguments = {
        "filename": "/path/to/cachefile.dbm"
    }
)
DBM access is provided using the Python anydbm module, which selects a platform-specific dbm module to use. This may be made to be more configurable in a future release.
Note that different dbm modules have different behaviors. Some dbm implementations handle their own locking, while others don’t. The DBMBackend uses a read/write lockfile by default, which is compatible even with those DBM implementations for which this is unnecessary, though the behavior can be disabled.
The DBM backend by default makes use of two lockfiles. One is in order to protect the DBM file itself from concurrent writes, the other is to coordinate value creation (i.e. the dogpile lock). By default, these lockfiles use the flock() system call for locking; this is only available on Unix platforms. An alternative lock implementation, such as one which is based on threads or uses a third-party system such as portalocker, can be dropped in using the lock_factory argument in conjunction with the AbstractFileLock base class.
Currently, the dogpile lock is against the entire DBM file, not per key. This means there can only be one “creator” job running at a time per dbm file.
A future improvement might be to have the dogpile lock using a filename that’s based on a modulus of the key. Locking on a filename that uniquely corresponds to the key is problematic, since it’s not generally safe to delete lockfiles as the application runs, implying an unlimited number of key-based files would need to be created and never deleted.
Parameters to the arguments dictionary are below.
- Parameters:
filename¶ – path of the filename in which to create the DBM file. Note that some dbm backends will change this name to have additional suffixes.
rw_lockfile¶ – the name of the file to use for read/write locking. If omitted, a default name is used by appending the suffix “.rw.lock” to the DBM filename. If False, then no lock is used.
dogpile_lockfile¶ – the name of the file to use for value creation, i.e. the dogpile lock. If omitted, a default name is used by appending the suffix “.dogpile.lock” to the DBM filename. If False, then dogpile.cache uses the default dogpile lock, a plain thread-based mutex.
lock_factory¶ –
a function or class which provides for a read/write lock. Defaults to FileLock. Custom implementations need to implement context-manager based read() and write() functions - the AbstractFileLock class is provided as a base class which provides these methods based on individual read/write lock functions. E.g. to replace the lock with the dogpile.core ReadWriteMutex:
from dogpile.core.readwrite_lock import ReadWriteMutex
from dogpile.cache.backends.file import AbstractFileLock

class MutexLock(AbstractFileLock):
    def __init__(self, filename):
        self.mutex = ReadWriteMutex()

    def acquire_read_lock(self, wait):
        ret = self.mutex.acquire_read_lock(wait)
        return wait or ret

    def acquire_write_lock(self, wait):
        ret = self.mutex.acquire_write_lock(wait)
        return wait or ret

    def release_read_lock(self):
        return self.mutex.release_read_lock()

    def release_write_lock(self):
        return self.mutex.release_write_lock()

from dogpile.cache import make_region

region = make_region().configure(
    "dogpile.cache.dbm",
    expiration_time=300,
    arguments={
        "filename": "file.dbm",
        "lock_factory": MutexLock
    }
)
While the included FileLock uses fcntl.flock(), a Windows-compatible implementation can be built using a library such as portalocker.
Added in version 0.5.2.
- class dogpile.cache.backends.file.FileLock(filename)#
Bases:
AbstractFileLock
Use lockfiles to coordinate read/write access to a file.
Only works on Unix systems, using fcntl.flock().
Proxy Backends#
Provides a utility and a decorator class that allow for modifying the behavior of different backends without altering the class itself or having to extend the base backend.
Added in version 0.5.0: Added support for the ProxyBackend class.
- class dogpile.cache.proxy.ProxyBackend(*arg, **kw)#
Bases:
CacheBackend
A decorator class for altering the functionality of backends.
Basic usage:
from dogpile.cache import make_region
from dogpile.cache.proxy import ProxyBackend

class MyFirstProxy(ProxyBackend):
    def get_serialized(self, key):
        # ... custom code goes here ...
        return self.proxied.get_serialized(key)

    def get(self, key):
        # ... custom code goes here ...
        return self.proxied.get(key)

    def set(self, key, value):
        # ... custom code goes here ...
        self.proxied.set(key, value)

class MySecondProxy(ProxyBackend):
    def get_serialized(self, key):
        # ... custom code goes here ...
        return self.proxied.get_serialized(key)

    def get(self, key):
        # ... custom code goes here ...
        return self.proxied.get(key)

region = make_region().configure(
    'dogpile.cache.dbm',
    expiration_time = 3600,
    arguments = {
        "filename": "/path/to/cachefile.dbm"
    },
    wrap = [MyFirstProxy, MySecondProxy]
)
Classes that extend ProxyBackend can be stacked together. The .proxied property will always point to either the concrete backend instance or the next proxy in the chain that a method can be delegated towards.
Added in version 0.5.0.
- wrap(backend: CacheBackend) Self #
Take a backend as an argument and set up the self.proxied property. Return an object that can be used as a backend by a
CacheRegion
object.
Null Backend#
The Null backend does not do any caching at all. It can be used to test behavior without caching, or as a means of disabling caching for a region that is otherwise used normally.
Added in version 0.5.4.
- class dogpile.cache.backends.null.NullBackend(arguments)#
Bases:
CacheBackend
A “null” backend that effectively disables all cache operations.
Basic usage:
from dogpile.cache import make_region

region = make_region().configure('dogpile.cache.null')
Exceptions#
Exception classes for dogpile.cache.
- exception dogpile.cache.exception.DogpileCacheException#
Bases:
Exception
Base Exception for dogpile.cache exceptions to inherit from.
- exception dogpile.cache.exception.PluginNotFound#
Bases:
DogpileCacheException
The specified plugin could not be found.
Added in version 0.6.4.
- exception dogpile.cache.exception.RegionAlreadyConfigured#
Bases:
DogpileCacheException
CacheRegion instance is already configured.
- exception dogpile.cache.exception.RegionNotConfigured#
Bases:
DogpileCacheException
CacheRegion instance has not been configured.
- exception dogpile.cache.exception.ValidationError#
Bases:
DogpileCacheException
Error validating a value or option.
Plugins#
Mako Integration#
dogpile.cache includes a Mako plugin that replaces Beaker as the cache backend. Set up a Mako template lookup using the “dogpile.cache” cache implementation and a region dictionary:
from dogpile.cache import make_region
from mako.lookup import TemplateLookup
my_regions = {
"local":make_region().configure(
"dogpile.cache.dbm",
expiration_time=360,
arguments={"filename":"file.dbm"}
),
"memcached":make_region().configure(
"dogpile.cache.pylibmc",
expiration_time=3600,
arguments={"url":["127.0.0.1"]}
)
}
mako_lookup = TemplateLookup(
directories=["/myapp/templates"],
cache_impl="dogpile.cache",
cache_args={
'regions':my_regions
}
)
To use the above configuration in a template, use the cached=True
argument on any Mako tag which accepts it, in conjunction with the
name of the desired region as the cache_region
argument:
<%def name="mysection()" cached="True" cache_region="memcached">
some content that's cached
</%def>
- class dogpile.cache.plugins.mako_cache.MakoPlugin(cache)#
Bases:
CacheImpl
A Mako
CacheImpl
which talks to dogpile.cache.
Utilities#
- dogpile.cache.util.function_key_generator(namespace, fn, to_str=<class 'str'>)#
Return a function that generates a string key, based on a given function as well as arguments to the returned function itself.
This is used by CacheRegion.cache_on_arguments() to generate a cache key from a decorated function.
An alternate function may be used by specifying the CacheRegion.function_key_generator argument for CacheRegion.
See also
kwarg_function_key_generator()
- similar function that also takes keyword arguments into account
- dogpile.cache.util.kwarg_function_key_generator(namespace, fn, to_str=<class 'str'>)#
Return a function that generates a string key, based on a given function as well as arguments to the returned function itself.
For kwargs passed in, we will build a dict of all argname (key) argvalue (values) including default args from the argspec and then alphabetize the list before generating the key.
Added in version 0.6.2.
See also
function_key_generator()
- default key generation function
- dogpile.cache.util.sha1_mangle_key(key)#
A SHA1 key mangler.
- dogpile.cache.util.length_conditional_mangler(length, mangler)#
A key mangler that mangles the key if its length is past a certain threshold.
dogpile Core#
- class dogpile.Lock(mutex, creator, value_and_created_fn, expiretime, async_creator=None)#
Dogpile lock class.
Provides an interface around an arbitrary mutex that allows one thread/process to be elected as the creator of a new value, while other threads/processes continue to return the previous version of that value.
- Parameters:
mutex¶ – A mutex object that provides acquire() and release() methods.
creator¶ – Callable which returns a tuple of the form (new_value, creation_time). “new_value” should be a newly generated value representing completed state. “creation_time” should be a floating point time value which is relative to Python’s time.time() call, representing the time at which the value was created. This time value should be associated with the created value.
value_and_created_fn¶ – Callable which returns a tuple of the form (existing_value, creation_time). This basically should return what the last local call to the creator() callable has returned, i.e. the value and the creation time, which would be assumed here to be from a cache. If the value is not available, the NeedRegenerationException exception should be thrown.
expiretime¶ – Expiration time in seconds. Set to None for never expires. This timestamp is compared to the creation_time result and time.time() to determine if the value returned by value_and_created_fn is “expired”.
async_creator¶ – A callable. If specified, this callable will be passed the mutex as an argument and is responsible for releasing the mutex after it finishes some asynchronous value creation. The intent is for this to be used to defer invocation of the creator callable until some later time.
- class dogpile.NeedRegenerationException#
An exception that, when raised in the ‘with’ block, forces the ‘has_value’ flag to False and incurs a regeneration of the value.
- class dogpile.util.ReadWriteMutex#
A mutex which allows multiple readers, single writer.
ReadWriteMutex uses a Python threading.Condition to provide this functionality across threads within a process.
The Beaker package also contained a file-lock based version of this concept, so that readers/writers could be synchronized across processes with a common filesystem. A future Dogpile release may include this additional class at some point.
- acquire_read_lock(wait=True)#
Acquire the ‘read’ lock.
- acquire_write_lock(wait=True)#
Acquire the ‘write’ lock.
- release_read_lock()#
Release the ‘read’ lock.
- release_write_lock()#
Release the ‘write’ lock.
- class dogpile.util.NameRegistry(creator: Callable[[...], Any])#
Generates and returns an object, keeping it as a singleton for a certain identifier for as long as it’s strongly referenced.
e.g.:
class MyFoo(object):
    "some important object."

    def __init__(self, identifier):
        self.identifier = identifier

registry = NameRegistry(MyFoo)

# thread 1:
my_foo = registry.get("foo1")

# thread 2:
my_foo = registry.get("foo1")
Above, my_foo in both thread #1 and #2 will be the same object. The constructor for MyFoo will be called once, passing the identifier foo1 as the argument.
When thread 1 and thread 2 both complete or otherwise delete references to my_foo, the object is removed from the NameRegistry as a result of Python garbage collection.
- Parameters:
creator¶ – A function that will create a new value, given the identifier passed to the
NameRegistry.get()
method.
- get(identifier: str, *args: Any, **kw: Any) Any #
Get and possibly create the value.