List-mode objective function: minor refactor and extra functionality #1418
Conversation
Force-pushed from f57ce57 to 2270c16
Step 2 committed, but the test is currently failing. It'll be for tomorrow.
Force-pushed from b168163 to 854a8ee
@NikEfth Steps 1 & 2 completed. Variable naming is a bit awkward, but minimises text changes. Best to compare ignoring white-space diffs: https://github.com/UCL/STIR/pull/1418/files?diff=unified&w=1
Preparation is nearly done. Results for the gradient are still identical. I plan to template the loop function, and then call it
Force-pushed from 24f369d to df5058a
The Hessian calculation should now be functional, but it is untested ATM.
Force-pushed from 5c50e1d to dcf76a0
@gschramm conducted some timings via SIRF (but that shouldn't matter). Original post at SyneRBI/SIRF#1253 (comment)
@gschramm My interpretation:
Note that LM times could improve when using
Force-pushed from e0e4f55 to 1c86e4a
- split caching into separate functions for reading a batch from the list-mode file or from the cache
- always use LM_distributable_computation for the gradient

Resulting code avoids duplication and is faster when caching is not used.
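A minimal sketch of that structure, using hypothetical names and simplified signatures (not the actual STIR API): the two code paths differ only in the batch-reading function, and both then feed the same distributable computation.

```cpp
#include <cstddef>
#include <vector>

struct BinRecord { int bin_index = 0; float additive_term = 0.f; };

// Stand-in for reading the next batch of events from the list-mode file.
std::vector<BinRecord> read_batch_from_listmode(std::size_t batch_size) { return std::vector<BinRecord>(batch_size); }
// Stand-in for reading the next batch from the in-memory cache.
std::vector<BinRecord> read_batch_from_cache(std::size_t batch_size) { return std::vector<BinRecord>(batch_size); }

void process_all_batches(bool use_cache, std::size_t num_batches, std::size_t batch_size)
{
  for (std::size_t b = 0; b != num_batches; ++b)
    {
      // Only the source of the batch differs between the two code paths.
      const std::vector<BinRecord> batch = use_cache ? read_batch_from_cache(batch_size)
                                                     : read_batch_from_listmode(batch_size);
      if (batch.empty())
        break; // nothing left to process
      // Both paths then go through the same computation, e.g. for the gradient:
      // LM_distributable_computation(batch, /* output image, projectors, ... */);
    }
}
```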
The code pre-allocated bins for every thread, but local variables are allocated on the stack, which should take no time. Removing the pre-allocations simplifies the code, and prepares us for re-use for the Hessian calculation etc.
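To illustrate the simplification, a minimal hypothetical sketch (not the actual STIR code): declaring the working bin inside the parallel loop makes it a stack-local variable per iteration, so no per-thread pre-allocated storage is needed.

```cpp
#include <vector>

struct Bin { int segment = 0, view = 0, axial_pos = 0, tangential_pos = 0; };

void process_events(const std::vector<int>& events)
{
  // Before: a std::vector<Bin> with one entry per thread, indexed by the thread number.
  // After: a stack-local Bin per iteration, which costs essentially nothing to "allocate".
#ifdef _OPENMP
#  pragma omp parallel for
#endif
  for (long i = 0; i < static_cast<long>(events.size()); ++i)
    {
      Bin measured_bin;              // stack-local, no shared pre-allocated storage needed
      measured_bin.view = events[i]; // stand-in for converting the event to a bin
      // ... accumulate the gradient/Hessian contribution from measured_bin ...
    }
}
```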
- make list-mode loop into a templated function
- introduce separate functions for gradient and Hessian
- move the LM_distributable_function implementation to .txx as it now uses a template for the callback
- implement actual_accumulate_Hessian... for LM
- try to implement accumulation of the log-likelihood

WARNING: LM_distributable_computation is now generic and incompatible with the previous one (which was only for the gradient). The Hessian function and log-likelihood value are currently untested.
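A minimal sketch of the templated-loop idea, with hypothetical names and simplified types (the real code passes projectors, images etc.): the event loop is written once, and the gradient- or Hessian-specific work is supplied as a callback.

```cpp
#include <cstddef>
#include <vector>

struct Event { int bin = 0; float value = 1.f; };

// Generic loop over list-mode events; the callback decides what to accumulate.
template <class CallBackT>
void LM_loop_over_events(const std::vector<Event>& events, CallBackT&& call_back)
{
  for (std::size_t i = 0; i != events.size(); ++i)
    call_back(events[i]);
}

// Gradient-specific accumulation (stand-in for the actual gradient callback).
struct GradientCallBack
{
  std::vector<float>* gradient;
  void operator()(const Event& e) const { (*gradient)[e.bin] += e.value; }
};

// Hessian-specific accumulation (stand-in for actual_accumulate_Hessian...).
struct HessianCallBack
{
  std::vector<float>* output;
  void operator()(const Event& e) const { (*output)[e.bin] -= e.value; }
};
```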
Also expand the test to use all subsets, making it slower of course.
test_PoissonLogLikelihoodWithLinearModelForMeanAndProjData and test_priors contained nearly identical code, so it was moved to a new class. This needed minor modifications to the tests, but also the addition of GeneralisedObjectiveFunction::compute_value().
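As an illustration of why a shared compute_value() is needed for common tests, here is a minimal, self-contained sketch (the toy objective function and all names are hypothetical, not STIR code): a central-difference check of the gradient against the objective-function value.

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Toy stand-ins for an objective function exposing a value and a gradient.
double compute_value(const std::vector<double>& x)
{ double s = 0; for (double xi : x) s -= xi * xi; return s; }

std::vector<double> compute_gradient(const std::vector<double>& x)
{ std::vector<double> g(x.size()); for (std::size_t i = 0; i < x.size(); ++i) g[i] = -2 * x[i]; return g; }

// Check the analytic gradient against a central finite difference of compute_value().
bool check_gradient_vs_numerical(const std::vector<double>& x, double eps = 1e-4, double tol = 1e-3)
{
  const auto g = compute_gradient(x);
  for (std::size_t i = 0; i < x.size(); ++i)
    {
      auto xp = x; xp[i] += eps;
      auto xm = x; xm[i] -= eps;
      const double numerical = (compute_value(xp) - compute_value(xm)) / (2 * eps);
      if (std::abs(numerical - g[i]) > tol * (1 + std::abs(g[i])))
        return false;
    }
  return true;
}

int main() { assert(check_gradient_vs_numerical({ 0.5, -1.0, 2.0 })); }
```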
- compute_objective_function() is now implemented as well
- Tests are (almost) a duplicate of the ProjData version, but the gradient test is disabled as it is too slow for this sample data
Also update to upload-artifact@v4
Force-pushed from 1c86e4a to fe6bc39
also minor clean-up
The macOS job fails on the Hessian-concavity test. The output of H*dx is 0 on macOS, but not on other systems. I have no idea why. Most likely it doesn't read the events properly, but it does read the correct number of events, so that's hard to understand. Here are the failure diagnostics:
All done, or as far as I can go. The objective function value is also computed and tested. Note that some of the numerical tests aren't very stringent, and they probably can't be, as there are only 8019 prompts in the file. I will have to disable the macOS job for later.
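For context, a minimal standalone sketch of what a Hessian-concavity check of this kind typically verifies (hypothetical data, not the actual test or its diagnostics): for a concave objective function, dx^T (H dx) should be non-positive, and an all-zero H*dx is the suspicious case described above.

```cpp
#include <iostream>
#include <numeric>
#include <vector>

int main()
{
  // Stand-ins: dx is a perturbation image, H_dx the result of applying the Hessian to it.
  std::vector<double> dx   = { 0.1, -0.2, 0.3 };
  std::vector<double> H_dx = { -0.05, 0.08, -0.12 }; // would be all zeros in the failing macOS run

  const double dx_H_dx = std::inner_product(dx.begin(), dx.end(), H_dx.begin(), 0.0);
  std::cout << "dx . (H dx) = " << dx_H_dx << '\n';
  if (dx_H_dx > 0)
    std::cout << "concavity test failed\n";
  else if (dx_H_dx == 0)
    std::cout << "suspicious: H*dx is (numerically) zero\n";
  return 0;
}
```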
Cannot figure out what the problem is, but the test works on Ubuntu and Windows
Force-pushed from 4635891 to 490771b
This PR is intended to add the objective function value as well as the Hessian to the list-mode objective function. To avoid duplication between the cached and non-cached versions, the plan is to always use LM_distributable_computation, by reading the batch either from cache or from list-mode.

@gschramm @NikEfth Feedback welcome!
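To tie the pieces above together, a hedged, self-contained sketch of the intended call pattern (all names are hypothetical): value, gradient and Hessian accumulation all go through one generic driver, which obtains its batch either from the cache or from the list-mode file.

```cpp
#include <iostream>
#include <vector>

enum class BatchSource { listmode_file, cache };

// One generic per-batch driver; the quantity to accumulate is chosen by the caller.
template <class AccumulateT>
void generic_LM_computation(BatchSource source, AccumulateT&& accumulate)
{
  // Stand-in data; in reality the batch would be read from the cache or the list-mode file.
  const std::vector<float> batch = (source == BatchSource::cache)
                                       ? std::vector<float>{ 1.f, 2.f }
                                       : std::vector<float>{ 1.f, 2.f };
  for (float event : batch)
    accumulate(event);
}

int main()
{
  double value = 0., gradient = 0., Hessian_output = 0.;
  // The same driver serves three different accumulations:
  generic_LM_computation(BatchSource::cache, [&](float e) { value += e; });
  generic_LM_computation(BatchSource::cache, [&](float e) { gradient += e; });
  generic_LM_computation(BatchSource::listmode_file, [&](float e) { Hessian_output -= e; });
  std::cout << value << ' ' << gradient << ' ' << Hessian_output << '\n';
}
```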