As identified in DependencyTrack/dependency-track#218 and DependencyTrack/dependency-track#903, the DataNucleus L2 cache is one of the areas that can prevent Alpine-based applications from being horizontally scalable or highly available.

The L2 cache is currently enabled by default, which means that all objects written to the datastore are cached in memory:
> Objects are placed in the L2 cache when you commit() the transaction of a PersistenceManager. This means that you only have datastore-persisted objects in that cache. Also, if an object is deleted during a transaction then at commit it will be removed from the L2 cache if it is present.
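To illustrate the behavior described in the quote, here is a minimal JDO sketch. The `Project` entity, the H2 connection settings, and the class name are hypothetical stand-ins; the `commit()` and `DataStoreCache` calls are the standard JDO API that DataNucleus implements:

```java
import java.util.Properties;
import java.util.UUID;
import javax.jdo.JDOHelper;
import javax.jdo.PersistenceManager;
import javax.jdo.PersistenceManagerFactory;
import javax.jdo.annotations.PersistenceCapable;
import javax.jdo.annotations.PrimaryKey;

public class L2CacheDemo {

    // Hypothetical entity, standing in for any Alpine model class.
    @PersistenceCapable
    public static class Project {
        @PrimaryKey
        long id;
        UUID uuid;
        String name;
    }

    public static void main(String[] args) {
        // Hypothetical connection settings; no L2 cache setting is given, so
        // DataNucleus falls back to its default in-memory (soft reference) L2 cache.
        Properties props = new Properties();
        props.setProperty("javax.jdo.PersistenceManagerFactoryClass",
                "org.datanucleus.api.jdo.JDOPersistenceManagerFactory");
        props.setProperty("javax.jdo.option.ConnectionURL", "jdbc:h2:mem:demo");
        props.setProperty("datanucleus.schema.autoCreateAll", "true");

        PersistenceManagerFactory pmf = JDOHelper.getPersistenceManagerFactory(props);
        PersistenceManager pm = pmf.getPersistenceManager();
        try {
            pm.currentTransaction().begin();
            Project project = new Project();
            project.id = 1L;
            project.uuid = UUID.randomUUID();
            project.name = "Acme Library";
            pm.makePersistent(project);
            // On commit, the now datastore-persisted object is placed in the L2 cache.
            pm.currentTransaction().commit();
        } finally {
            pm.close();
        }

        // javax.jdo.datastore.DataStoreCache is the application-facing handle on the
        // L2 cache; evicting is the only way to reclaim that memory manually.
        pmf.getDataStoreCache().evictAll();
        pmf.close();
    }
}
```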
While this behavior works fine for typical CRUD applications, it does not for applications like Dependency-Track, which regularly iterate over hundreds of thousands (or millions) of records and modify them. With record sets of that size, the cost-to-benefit ratio is simply not good.
For applications like Dependency-Track, global usage of the L2 cache drives RAM usage sky-high and puts unnecessary pressure on the garbage collector.
Additionally, the L2 cache is only effective when records are fetched by their primary key. In modern applications, lookups by primary key are rare, as most lookups use secondary identifiers intended for external consumption (e.g. UUIDs).
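To make that concrete, the fragment below continues the sketch from above (still using the hypothetical `pmf` and `Project` entity): a lookup by JDO identity is the case the L2 cache can serve, whereas a query on a secondary identifier such as a UUID is resolved against the datastore:

```java
// Continuing the sketch above (hypothetical `pmf` and `Project` entity).
PersistenceManager pm = pmf.getPersistenceManager();
try {
    // Lookup by primary key / JDO identity: the one access path the L2 cache
    // can answer without a round trip to the datastore.
    Project byId = pm.getObjectById(Project.class, 1L);

    // Lookup by a secondary identifier (e.g. the UUID exposed through a REST API):
    // this is a query, so it is resolved against the datastore regardless of
    // what the L2 cache already holds.
    Query<Project> query = pm.newQuery(Project.class);
    query.setFilter("uuid == :uuid");
    query.setUnique(true);
    Project byUuid = (Project) query.execute(UUID.randomUUID());
} finally {
    pm.close();
}
```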
In order to support horizontal scalability, high availability, and a generally reduced resource footprint, it should be possible to globally disable the DataNucleus L2 cache.
As a nice-to-have, additional configuration options may be introduced to allow the use of distributed caches like Redis or Hazelcast (which DataNucleus supports). Users wishing to utilize these integrations could then do so.
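Purely as an illustration of what such an option could enable, the sketch below wires the DataNucleus L2 cache to a JCache (javax.cache) provider, with Hazelcast being one such provider. The `datanucleus.cache.level2.*` property names are DataNucleus' documented settings, but the cache name and the idea that Alpine would expose them this way are assumptions:

```java
import java.util.Properties;
import javax.jdo.JDOHelper;
import javax.jdo.PersistenceManagerFactory;

// Hedged sketch: delegating the DataNucleus L2 cache to a JCache provider on the
// classpath (e.g. Hazelcast); connection settings are omitted, cache name is made up.
Properties props = new Properties();
props.setProperty("javax.jdo.PersistenceManagerFactoryClass",
        "org.datanucleus.api.jdo.JDOPersistenceManagerFactory");
props.setProperty("datanucleus.cache.level2.type", "javax.cache");
props.setProperty("datanucleus.cache.level2.cacheName", "alpine-l2");

PersistenceManagerFactory pmf = JDOHelper.getPersistenceManagerFactory(props);
```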
This change is mostly meant to address stevespringett#493, but may be useful for other use cases as well.
Due to the sheer number of configuration options in DataNucleus (and potentially other frameworks), it is not practical to add `AlpineKey`s for all of them.
With this change, the DataNucleus L2 cache can be disabled in either of the following ways (see the example below):
* Setting the property `alpine.datanucleus.cache.level2.type=none` in `application.properties`, or
* Setting the environment variable `ALPINE_DATANUCLEUS_CACHE_LEVEL2_TYPE=none`
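For example, in `application.properties` (the property name and value are taken verbatim from the list above; the comment is merely explanatory):

```properties
# application.properties
# "none" switches the DataNucleus L2 cache off entirely; omitting the property
# keeps the current default of an in-memory cache.
alpine.datanucleus.cache.level2.type=none
```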
Signed-off-by: nscuro <[email protected]>