Possibility to cache projection data? #783
In usual CT applications, the FBP/FDK reconstruction is processed as soon as data starts streaming in, so one essentially has a 3D reconstruction by the time the data acquisition is finished. For adaptive CT, where the geometry is to be adapted, I guess such treatment is a necessity.
That would be a cool feature, but we don't have support for streaming protocols in ODL. So we always have a dataset when we start reconstructing.
Yes, so at the current stage it's mainly a tool for experimenting, basically to test how fast you can get with single-slice reconstructions. I'm testing it on FBP, but nobody says you can't use the same approach for iterative or variational methods.
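For context, a single-slice FBP experiment along these lines might look roughly like the sketch below in ODL; the one-voxel-thick reconstruction space and all geometry parameters are illustrative assumptions, not the setup actually used here:

```python
import odl

# A "volume" that is only one voxel thick in z, so only a single slice
# is reconstructed (all sizes and distances are made up for illustration).
reco_space = odl.uniform_discr(
    min_pt=[-20, -20, -0.5], max_pt=[20, 20, 0.5],
    shape=[256, 256, 1], dtype='float32')

geometry = odl.tomo.cone_beam_geometry(reco_space, src_radius=40, det_radius=40)

ray_trafo = odl.tomo.RayTransform(reco_space, geometry, impl='astra_cuda')
fbp = odl.tomo.fbp_op(ray_trafo)

# `data` would be an element of ray_trafo.range holding the projections:
# slice_reco = fbp(data)
```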
I feel that this should not go into …

Edit: Posted too soon. The reasoning behind this is that it is a very specific feature that will be hard to keep updated and working for all cases. It would be much easier to keep this in a low-level interface. Also, the doc would become worse than it is currently.
With regards to …
Will this ever be done?
How badly do we need it? I have the feeling that "cache everything in GPU memory" doesn't cover that many use cases where speed is an issue. Everything has to fit, which is limiting. In the long run, wouldn't a memory pointer and ASTRA's …
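The end of the previous comment is cut off, but since it mentions passing a memory pointer to ASTRA, here is a minimal sketch of how existing memory can be linked into ASTRA via the Python bindings' `astra.data3d.link`, which wraps an array instead of copying it at creation time. The geometry values and array sizes below are placeholders, not anything from this issue:

```python
import numpy as np
import astra

# Placeholder cone-beam geometry, only to make the example self-contained.
angles = np.linspace(0, 2 * np.pi, 360, endpoint=False)
proj_geom = astra.create_proj_geom(
    'cone', 1.0, 1.0, 192, 256, angles, 500.0, 500.0)

# ASTRA expects float32, C-contiguous data with axis order
# (detector rows, angles, detector columns) for 3D projection data.
proj_data = np.zeros((192, len(angles), 256), dtype=np.float32)

# 'link' wraps the existing buffer instead of copying it at creation time,
# so repeated backprojections can reuse the same data object as long as
# the id is kept alive.
proj_id = astra.data3d.link('-proj3d', proj_geom, proj_data)

# ... create and run BP/FDK algorithms against proj_id ...

astra.data3d.delete(proj_id)  # release when the cached data is evicted
```

Whether this also avoids GPU transfers depends on the ASTRA version; plain `link` shares host memory, so it mainly saves the copy at the Python/C++ boundary.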
I'm currently playing around with FBP stuff, reconstructing only single arbitrary slices. To speed up computations, it would be nice if a backprojector could cache projection data in ASTRA memory via an extra option. The suggested implementation would be dead simple, namely:

- add an option to `RayBackProjection` (off by default) to keep the projection data in ASTRA memory,
- if `x` is `stored_x`, use the existing memory, else delete the object and store the new one. Kind of an LRU cache.

Thoughts on that?
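As a rough illustration of the proposed behaviour (not ODL's actual `RayBackProjection`; the class and attribute names below are made up), the single-entry cache could look like this:

```python
import astra

class CachedBackProjector:
    """Sketch of the suggested single-entry cache; illustration only."""

    def __init__(self, proj_geom, cache=False):
        self.proj_geom = proj_geom
        self.cache = cache        # off by default, as suggested above
        self._stored_x = None     # last projection array seen
        self._stored_id = None    # corresponding ASTRA data3d id

    def _projection_data_id(self, x):
        # If caching is enabled and the same object comes in again,
        # reuse the memory already held by ASTRA.
        if self.cache and x is self._stored_x:
            return self._stored_id
        # Otherwise delete the old object and store the new one.
        if self._stored_id is not None:
            astra.data3d.delete(self._stored_id)
        self._stored_id = astra.data3d.create('-proj3d', self.proj_geom, x)
        self._stored_x = x
        return self._stored_id
```

Using an identity check (`x is stored_x`) keeps the lookup cheap and avoids comparing large projection arrays element by element, at the price of missing equal-but-distinct arrays.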