We have found that the only line in likelihood.py where less_than_zero is used can be replaced with random_near_zero instead. This addresses a problem we experienced, where the sampler got stuck in an infinite loop. It is a single-line change (a sketch appears after the questions below), but we'd like to make a serious attempt to verify first:
Why did this problem appear only in recent runs?
Why was this implemented in this way to begin with?
Does removing this affect the efficiency of the sampling algorithm or otherwise affect intended behaviour?
Are posteriors unaffected? Should this change be noted for old or new published results?
We aim to address these questions by consulting the literature on the MultiNest and X-PSI algorithms, by examining their implementations, and by doing some comparison runs.
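For concreteness, here is a minimal, hypothetical sketch of the change under discussion. The names follow this thread's usage (less_than_zero, random_near_zero, and MultiNest's log-likelihood "zero", llzero); the constants and the wrapper structure are assumptions for illustration, not X-PSI's actual likelihood.py:

```python
import numpy as np

LLZERO = -1.0e90                  # MultiNest rejects loglikes below this
LESS_THAN_ZERO = 10.0 * LLZERO    # old return value: strictly below llzero

def random_near_zero(rng=np.random.default_rng()):
    # A random value near llzero but above it: the point enters the usual
    # live-point comparison and loses, rather than being hard-rejected.
    return LLZERO * (0.1 + 0.9 * rng.random())

def loglikelihood(point, in_support, compute_loglike):
    if not in_support(point):
        # return LESS_THAN_ZERO   # old: hard rejection; if a whole
        #                         # ellipsoid is out of bounds, the
        #                         # sampler redraws forever
        return random_near_zero() # new: weak rejection by comparison
    return compute_loglike(point)
```

The only behavioural difference is in the out-of-bounds branch; in-bounds points are evaluated exactly as before.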
I am unsure; it seems this problem was always lurking, waiting to happen.
Why did this happen to begin with? Are ellipsoids drawn fully outside the multidimensional prior boundary? Why?
MultiNest is not aware of multidimensional prior boundaries, so there is no reason it would not draw such ellipsoids. A more complete answer would go into the details of how ellipsoids are drawn; a sketch follows.
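The gist of the idea, in illustrative code (not MultiNest's implementation): ellipsoids are fitted to live points in the unit hypercube and sampled uniformly. MultiNest only knows the hypercube; a joint constraint imposed by the model's prior, such as the hypothetical p[0] + p[1] < 1 below, is invisible to it.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_ellipsoid(center, cholesky, rng):
    # Uniform draw from an ellipsoid: uniform point in the unit ball,
    # mapped through the ellipsoid's Cholesky factor.
    d = len(center)
    x = rng.standard_normal(d)
    x *= rng.random() ** (1.0 / d) / np.linalg.norm(x)
    return center + cholesky @ x

def in_joint_support(p):
    # Hypothetical multidimensional boundary that MultiNest cannot see.
    return p[0] + p[1] < 1.0

# An ellipsoid fitted near the corner of the cube can lie entirely
# outside the joint support, so every draw fails the support check.
center = np.array([0.9, 0.9])
chol = 0.05 * np.eye(2)
draws = [sample_ellipsoid(center, chol, rng) for _ in range(1000)]
print(sum(in_joint_support(p) for p in draws))  # expected: 0
```

With the old less_than_zero return value, every one of those draws would be hard-rejected and redrawn, which is consistent with the infinite loop we observed.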
How is the MultiNest algorithm affected by this workaround? Any unintended behaviour? Any algorithmic efficiency loss?
Instead of being rejected outright, points will now be weakly rejected, since their loglikelihoods are still lower than the loglikelihoods of the current live points.
The bulk of the work is in the evaluation of the loglikelihoods, so rejection after loglikelihood comparison amounts to negligible extra work.
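A hedged sketch of one replacement step makes this concrete (names and structure are illustrative, not MultiNest's code): the out-of-bounds branch skips the expensive likelihood evaluation entirely and only pays for the cheap comparison against the worst live point, which it essentially always loses.

```python
import numpy as np

LLZERO = -1.0e90  # MultiNest's log-likelihood "zero" (illustrative value)

def replacement_step(candidate, live_loglikes, loglike_fn, in_support,
                     rng=np.random.default_rng()):
    # Sketch of one nested-sampling replacement step (illustrative names).
    L_star = min(live_loglikes)          # worst current live point
    if not in_support(candidate):
        # Cheap branch: no likelihood evaluation, just a random value
        # near llzero (but above it), as per the fix.
        L_new = LLZERO * (0.1 + 0.9 * rng.random())
    else:
        L_new = loglike_fn(candidate)    # expensive: the bulk of the work
    # The candidate replaces the worst live point only if it beats L_star;
    # values near llzero essentially always lose this cheap comparison.
    return (L_new > L_star), L_new
```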
Are previous and new scientific results safeguarded? Are the old point “rejection” method and the current fix allowed under Bayesian statistics? E.g., between new and old, is sampling biased or are posteriors affected?
The exploration of parameter space differs only at the stage where some live points receive loglikelihoods from random_near_llzero, so the posterior probability distributions should not be significantly affected. Further along in the run, these points are rejected.
Are posteriors unaffected? Should this change be noted for old or new published results?
We find that posteriors are unaffected in multiple cross-tests (my own accretion disk model and Tuomo's polarization runs).
Further discussion: is it feasible to absorb all multidimensional prior boundaries into inverse_sample, and to completely clean up prior.__call__?
While this is cleaner and makes MultiNest explicitly aware of the boundaries, it is not straightforward to implement, especially for complex multidimensional boundaries. The efficiency gain is also limited, since very little time is spent on the out-of-bounds points anyway.
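As an illustration of what absorbing a boundary into the inverse sampling transform could look like (the interface and the triangular support are hypothetical examples, not X-PSI code): instead of rejecting points with p0 + p1 >= 1 in the support check, the unit-hypercube coordinates are mapped directly into the triangle.

```python
import numpy as np

class TrianglePrior:
    # Hypothetical sketch: the joint boundary p0 + p1 < 1 is folded into
    # the inverse sampling transform, so every hypercube point maps into
    # the support and no joint check is needed afterwards.

    def inverse_sample(self, hypercube):
        # Map (u0, u1) in [0,1]^2 uniformly onto {p >= 0, p0 + p1 < 1}.
        u0, u1 = hypercube
        p0 = 1.0 - np.sqrt(1.0 - u0)  # inverse marginal CDF of p0
        p1 = (1.0 - p0) * u1          # p1 | p0 is uniform on [0, 1 - p0]
        return np.array([p0, p1])

    def __call__(self, p):
        # With the boundary absorbed above, no joint check is needed here.
        return 0.0  # flat log-prior density inside the support
```

For a simple linear constraint this transform is short, but for the complex boundaries mentioned above the conditional inverse CDFs may not have closed forms, which is the implementation difficulty referred to.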