
Handle multiple volume resolutions for hybrid annotations #286

Closed
jfrohnhofen opened this issue Mar 25, 2014 · 7 comments · Fixed by #4755

Comments

@jfrohnhofen
Contributor

@jfrohnhofen jfrohnhofen self-assigned this Mar 25, 2014
@jfrohnhofen jfrohnhofen added this to the 1.2 milestone Mar 29, 2014
@tmbo tmbo modified the milestones: December, February Feb 4, 2015
@tmbo tmbo modified the milestone: February Dec 11, 2015
@jfrohnhofen jfrohnhofen removed their assignment Mar 13, 2016
@boergens boergens mentioned this issue Aug 10, 2016
@tmbo
Member

tmbo commented Oct 20, 2016

Comment from Michael Morehead:

"To facilitate the discussion I'd like to define some terms:
In terms of microscopy, the highest zoom level is the highest resolution (zoomed in, most detail, etc.). In WebKnossos, the zoom value is inverted: closer to 0 means more zoomed in (which makes sense for WK, since zooming out moves the value away from 0). Let's refer to this layer (highest zoom level in microscopy, level 0 in WK) as the bottom layer of the image pyramid.

I've discussed this issue with our lead annotators and scientists, and I can see there's no silver bullet here. From a developer's perspective, the easiest thing to do would be to project and store all annotations at the bottom layer (highest resolution). If you are segmenting at a higher layer of the pyramid, your segmentation resolution is limited to one pixel square at your current layer. When that segmentation pixel is projected, it will appear as a block of pixels at the bottom layer. Although this loss of resolution isn't ideal, I believe it is acceptable.

In our case, volume segmentation is only allowed at a Zoom value of < ~1.25. We do not really need to segment at extremely high values like 50 (totally zoomed out, can see whole volume). If we could segment at levels around 5, we could clearly see the entirety of the structure. Then, if further details are needed, we can zoom back in to a low zoom value (1) and finesse the segmentation.

As for the order of annotation, if we project all segmentation to the bottom layer, then all annotation lives in that space. In this case it seems that order no longer matters much; any modification will overwrite the previous annotation.

Let me know what you think; there's probably an edge case I haven't thought about."
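
As a rough illustration of the projection described above (a minimal sketch with hypothetical names and a dense numpy representation, not actual webKnossos code): a single voxel labeled at pyramid level k covers a 2^k × 2^k × 2^k block at the bottom layer.

```python
import numpy as np

def project_to_bottom_layer(voxel_xyz, level, label, bottom_ids):
    """Write one voxel annotated at pyramid `level` into the level-0 volume.

    A voxel at level k covers a (2**k)**3 block of bottom-layer voxels.
    `bottom_ids` is assumed to be a dense numpy array of 32-bit segment IDs.
    """
    scale = 2 ** level
    x, y, z = (c * scale for c in voxel_xyz)
    bottom_ids[x:x + scale, y:y + scale, z:z + scale] = label

# One brush voxel drawn at level 5 touches 32 ** 3 = 32768 bottom-layer voxels.
bottom = np.zeros((64, 64, 64), dtype=np.uint32)
project_to_bottom_layer((0, 0, 0), level=5, label=42, bottom_ids=bottom)
assert int((bottom == 42).sum()) == 32 ** 3
```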

@jfrohnhofen
Contributor Author

Not sure this reply will ever reach Michael (so please feel free to forward), but even if not, this might still serve as a reference for later.

The problem with the above approach is not so much about edge cases as it is about scaling. Since we are working with 3D data, the size of the projected block of pixels grows exponentially with base 8 in the number of levels (each level doubles the extent along all three axes). For a level-5 annotation this yields 128 KB of data for every single voxel (8^5 = 32,768 voxels, assuming 32-bit IDs). While data compression will certainly reduce the number of actually stored bytes, the server still has to process an unnecessarily large amount of data. I would rather suggest an approach originally thought up by @tmbo some time ago.
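
The quoted 128 KB figure follows directly from the pyramid geometry; a quick back-of-the-envelope check:

```python
# Data written per annotated voxel when projecting to the bottom layer,
# assuming 32-bit (4-byte) segment IDs: a level-k voxel covers 8**k voxels.
for level in range(1, 6):
    n_voxels = 8 ** level
    print(f"level {level}: {n_voxels:>6} voxels, {n_voxels * 4:>7} bytes")
# level 5:  32768 voxels,  131072 bytes (= 128 KiB), matching the estimate above.
```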

When a voxel is annotated at some level, the annotation is saved to that layer and propagated to all layers above (above = lower-resolution layers). During the propagation, 8 voxels have to be combined into a single voxel at each step (e.g. using majority voting). Additionally, the voxel in the originally annotated layer is marked with a flag, while the flag of all voxels changed during propagation is cleared. While the propagation also requires additional data to be written, the amount of additional data is quite modest compared to the simpler approach: the per-level overhead forms the geometric series 1/8 + 1/64 + …, which adds up to roughly 15%.
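
A minimal sketch of one upward propagation step, under the assumption that each layer stores an ID array plus a boolean flag array (all names here are hypothetical, for illustration only):

```python
import numpy as np

def majority_vote(block):
    """Most frequent label among the 8 child voxels (ties broken arbitrarily)."""
    values, counts = np.unique(block, return_counts=True)
    return values[np.argmax(counts)]

def propagate_up(ids_fine, ids_coarse, flags_coarse):
    """Propagate one level upward (fine -> next coarser layer).

    Each coarse voxel becomes the majority vote of its 8 children; any
    coarse voxel changed by the propagation has its flag cleared, so only
    directly annotated voxels keep flag=True.
    """
    sx, sy, sz = ids_coarse.shape
    for x in range(sx):
        for y in range(sy):
            for z in range(sz):
                block = ids_fine[2*x:2*x+2, 2*y:2*y+2, 2*z:2*z+2]
                new_id = majority_vote(block.ravel())
                if new_id != ids_coarse[x, y, z]:
                    ids_coarse[x, y, z] = new_id
                    flags_coarse[x, y, z] = False  # overwritten by propagation
```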

When the data is then viewed on a lower (= higher-resolution) layer, the annotation has to be reconstructed from the different layers. The reconstruction works bottom-up, with voxels marked with the flag overwriting voxels in lower layers.
This approach should be identical from a user's perspective in terms of loss of resolution. While it is more difficult to implement correctly, it is the only approach (that I am aware of) that seems to scale well enough.
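
And a matching sketch of the bottom-up reconstruction, again with a hypothetical `layers` list of `(ids, flags)` pairs, index 0 being the bottom (highest-resolution) layer:

```python
import numpy as np

def reconstruct_view(layers, target_level):
    """Reconstruct the annotation as seen at `target_level`.

    Starting from the target layer's own data, the flagged (= directly
    annotated) voxels of every coarser layer are projected down and
    overwrite the reconstruction.
    """
    view = layers[target_level][0].copy()
    for level in range(target_level + 1, len(layers)):
        coarse_ids, coarse_flags = layers[level]
        s = 2 ** (level - target_level)  # edge length of the projected block
        for x, y, z in zip(*np.nonzero(coarse_flags)):
            view[x*s:(x+1)*s, y*s:(y+1)*s, z*s:(z+1)*s] = coarse_ids[x, y, z]
    return view
```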

@jfrohnhofen jfrohnhofen self-assigned this Apr 24, 2017
@jfrohnhofen jfrohnhofen removed their assignment Aug 7, 2017
@philippotto philippotto reopened this Oct 18, 2018
@philippotto
Member

Let's do "volume annotations for an arbitrary, pre-selected magnification" as a first step. The following things need to be done:

  • integrate into tasks
  • NMLs
  • UI
  • ...

@philippotto
Member

For hybrid tracings, this topic becomes very relevant again. With the current constraints, an existing segmentation would just disappear in mag 2 and higher when opening a new hybrid tracing.

A simplistic solution could be to show the fallback segmentation in higher mags by default. If a volume tracing exists in mag 1, there could be a warning that mag 2 and higher will show an "out of sync" segmentation? Also, the user could choose to hide the segmentation in higher mags to suppress the warning? Something along these lines?

@philippotto philippotto changed the title Implement multiple resolution for volume annotations Handle multiple volume resolutions for hybrid annotations Jul 29, 2020
@philippotto
Member

We should also consider the case where one creates a hybrid tracing for a dataset whose segmentation only exists in mag 2 (also see #4339). Maybe my suggestion above covers this case fine; that's probably up for discussion.

Also see #4471 for a non-annoying way to communicate critical information to the user (such as "the different mags are out of sync").

@MichaelBuessemeyer
Contributor

MichaelBuessemeyer commented Aug 13, 2020

I think this issue isn't really up-to-date anymore, as we discussed the following:

  • The user always annotates volumes in all magnifications, independent of the magnification they are currently in. Thus there is no way for the different volume magnifications to get out of sync.

Or am I missing something?

When there is no fallback layer for mag 1, the backend should upsample the mag 2 fallback layer to mag 1; this upsampled fallback layer is then used in mag 1 and merged with the annotation in mag 1.

See: #4755
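
A sketch of what that upsampling and merging could look like, using nearest-neighbor upsampling (an assumption for illustration; the actual resampling in #4755 may differ):

```python
import numpy as np

def upsample_mag2_to_mag1(mag2_ids):
    """Nearest-neighbor upsampling: each mag-2 voxel becomes a 2x2x2 block."""
    return mag2_ids.repeat(2, axis=0).repeat(2, axis=1).repeat(2, axis=2)

def merge_fallback_with_annotation(fallback_mag1, annotation_mag1):
    """Voxels annotated in mag 1 (non-zero) take precedence over the fallback."""
    return np.where(annotation_mag1 != 0, annotation_mag1, fallback_mag1)
```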

@philippotto
Member

You are completely right 👍 The issue wasn't updated after our discussion.

@fm3 fm3 closed this as completed in #4755 Nov 9, 2020