APPSERV-12 Common and dynamic trace store size via replicated maps #4471
Background
Both the historic and the non-historic request tracing store can be local or shared across the cluster, depending on the instance and cluster configuration. When instances share a cluster store, it becomes unclear which size limit is effective for that common store. Since each instance applies its own limit when adding traces, the instance with the lowest setting trims the store down to that size every time it adds an entry. This semantic is unexpected and confusing, as a single instance with a very low setting can "ruin" the contents of the store for all other instances.
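The old behaviour can be illustrated with a minimal sketch (not actual Payara code): two instances with different limits write to the same shared store, and the one with the lower limit wipes the store down for everyone.

```java
import java.util.ArrayDeque;
import java.util.Deque;

/**
 * Sketch of the OLD semantics described above (illustrative, not Payara code):
 * each instance trims the shared store to its OWN limit whenever it adds a
 * trace, so the instance with the lowest limit dictates the effective size.
 */
public class LowestLimitWins {

    /** Shared cluster store, newest trace last. */
    static final Deque<String> sharedStore = new ArrayDeque<>();

    /** Old per-instance behaviour: add, then trim to that instance's local limit. */
    static void addTrace(String trace, int localLimit) {
        sharedStore.addLast(trace);
        while (sharedStore.size() > localLimit) {
            sharedStore.removeFirst(); // evict oldest entries until the local limit holds
        }
    }

    public static void main(String[] args) {
        for (int i = 1; i <= 40; i++) {
            addTrace("trace-" + i, 50); // instance A, limit 50: store grows to 40
        }
        addTrace("trace-41", 5);        // instance B, limit 5: store trimmed to 5
        System.out.println(sharedStore.size()); // prints 5
    }
}
```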
Summary
This task improves the situation described in the background section. Instead of the smallest local setting dictating the effective size (and only while that instance actually adds traces), the maximum size across all instances with request tracing enabled is applied in all circumstances, independent of whether those instances currently add traces.
As each instance (except the DAS) only knows its own configuration, instances cannot rely on `domain.xml` based information to compute such a common maximum value. In clusters of Payara Micro instances, each instance also considers itself in the role of the DAS, which makes a single central value problematic. A truly cluster-wide identical configuration on the basis of `domain.xml` is therefore not possible.

To introduce configuration values that are truly shared among all instances in the cluster, a new service `ClusteredConfig` was added. It uses Hazelcast's `ReplicatedMap` to hold the local value of each instance that shares its value. Instances are responsible for actively sharing and un-sharing (clearing) their local value for a shared property, depending on their runtime state and the semantics attached to that property. Local values of stopped instances are cleared automatically.

For the request trace store size, the logic is to share the size when a clustered store is used and request tracing is active, and to un-share it when tracing is disabled.
The second semantic change makes the store size dynamic. Instead of setting a fixed `int` value, an `IntSupplier` is set that provides the correct size at the moment a trace is added, since the size can change between additions without the instance's general configuration changing. For a local store, the size always reflects the value set in the `RequestTracingExecutionOptions`, while for a clustered store it reflects the current maximum of the local sizes of all instances with request tracing enabled.

Testing
Unit tests were adapted to the dynamic size.
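The dynamic-size contract the tests exercise can be sketched as follows (class and method names here are illustrative, not the actual Payara API): the store re-reads its limit from the `IntSupplier` on every add, so the effective size can change without the store being reconfigured.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.function.IntSupplier;

/** Sketch of a trace store with a dynamic size (illustrative, not Payara code). */
public class DynamicSizeStore {

    private final Deque<String> traces = new ArrayDeque<>();
    private final IntSupplier maxSize; // re-evaluated on every add

    public DynamicSizeStore(IntSupplier maxSize) {
        this.maxSize = maxSize;
    }

    public void addTrace(String trace) {
        traces.addLast(trace);
        int limit = maxSize.getAsInt(); // current effective size; may differ per call
        while (traces.size() > limit) {
            traces.removeFirst(); // evict oldest traces beyond the current limit
        }
    }

    public int size() {
        return traces.size();
    }
}
```

A local store would pass a supplier reading the size from `RequestTracingExecutionOptions`, while a clustered store would pass a supplier returning the current cluster-wide maximum.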
The clustered size was tested manually following the steps below.
Testing larger size of another instance takes precedence

- Set a breakpoint in `LongestTraceStorageStrategy#getTraceForRemoval` (the method called to enforce the effective size, passed as `maxSize`), but disable all breakpoints for now.
- `maxSize` is still the value of the DAS.
- `maxSize` should be the higher value of the other instance.

Testing not active configurations are no longer relevant

- `maxSize` should be back to the DAS value.
- `maxSize` should be back to the DAS value.

Testing "No Cluster" config does not cause problems

```
java -jar ./appserver/extras/payara-micro/payara-micro-distribution/target/payara-micro.jar --nocluster hello-world.war
```