
More clarifications of the "image-label" spec #105

Open
constantinpape opened this issue Feb 28, 2022 · 4 comments

@constantinpape
Contributor

I am working on converting data with segmentations to ngff using the "image-label" metadata.
I have a couple of questions about the description in https://ngff.openmicroscopy.org/latest/#label-md:

  1. Are "colors" and/or "properties" mandatory or optional? This is not clear from the spec description.
  2. Are "colors" / "properties" sparse? E.g.g if I have label values 1, 2, 3, 4 is it valid to just specify: colors: [{label-value: 1, rgba: [...]}]?
  3. Is it valid to specify multiple "image-label" sources per image?

For my use case, the preferred answers would be: 1. optional (for a segmentation with many label values, specifying a color per label is not necessary and would result in huge JSONs); 2. sparse (so properties can be given for selected labels only); 3. valid, because there are a nucleus and a cell segmentation per image.
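To make 2. concrete, this is roughly the sparse metadata I would like to be able to write for a labels group, with entries only for a subset of the label values present in the data (the color and the "class" property are made up for illustration):

```json
{
  "image-label": {
    "version": "0.4",
    "colors": [
      {"label-value": 1, "rgba": [255, 0, 0, 255]}
    ],
    "properties": [
      {"label-value": 1, "class": "nucleus"}
    ]
  }
}
```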

@sbesson
Member

sbesson commented Mar 1, 2022

Cross-linking to ome/omero-ms-zarr#71 and #3 where the original image-label specification and the properties extension were proposed.

  1. @DragaDoncila and/or @joshmoore might want to comment on this, but my understanding is that both keys are optional (maybe RECOMMENDED)
  2. there has been discussion around value duplication as well as the format of the colors (see Revamp color metadata omero-ms-zarr#62), but I cannot find any consensus on whether all label values MUST be defined in the dictionaries. Similarly to the above, in the absence of a clear MUST in the specification, my assumption is that it is not a requirement in the current version, although I would tend to mark this as RECOMMENDED
  3. at least in https://www.openmicroscopy.org/2021/12/16/ome-ngff.html, there is an example of multiple segmentations (Cell & Chromosome) for the OME-NGFF dataset generated from idr0052. There, each segmentation is stored as a separate Zarr group with its own image-label and multiscales metadata, so there was no use case for multiple image-label entries; see the sketch after this list. What does the data storage look like in your scenario?
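For reference, my reading of the current spec is that several segmentations of the same image are handled by listing them in the labels group, i.e. labels/.zattrs would contain something along these lines (the group names are illustrative):

```json
{
  "labels": ["cell", "chromosome"]
}
```

Each listed group (labels/cell, labels/chromosome) then carries its own multiscales and image-label metadata.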

@constantinpape
Contributor Author

  1. @DragaDoncila and/or @joshmoore might want to comment on this, but my understanding is that both keys are optional (maybe RECOMMENDED)

Ok, this would be good, but it should be clarified in the spec. (I would personally not recommend colors; it is not a good fit for representing instance segmentations, which can easily have tens to hundreds of thousands of label values.)

2. [...] Similarly to above, in the absence of a clear MUST in the specification, my assumption is that it's not a requirement in the current version although I would tend to mark this as RECOMMENDED

This would be fine by me; but again, I think the spec should clearly state this to avoid ambiguity.

3. There, each segmentation is stored as a separate Zarr group with its own image-label and multiscales metadata, so there was no use case for multiple image-label entries. What does the data storage look like in your scenario?

I had a look at idr0052 and this matches the data layout I need very well, so I will base my script on this example.
There are two minor differences:

  • the segmentations are 4x smaller than the original image (which is fine with the scale transformation in v0.4, see the sketch after this list)
  • we have multiple positions and an ome.tif for each position; these are however from a single coordinate space, so it could make sense to merge them into a single image in ome.zarr; but this would require some special treatment for the instance segmentation; I will discuss this with Shila.
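For the first point, my plan is to express the downsampling via the scale transformation in the labels' own multiscales metadata, roughly like this (axes and values are made up; here the raw image would have scale [0.5, 0.5] and the segmentation, being 4x smaller, [2.0, 2.0]):

```json
{
  "multiscales": [
    {
      "version": "0.4",
      "axes": [
        {"name": "y", "type": "space", "unit": "micrometer"},
        {"name": "x", "type": "space", "unit": "micrometer"}
      ],
      "datasets": [
        {
          "path": "0",
          "coordinateTransformations": [
            {"type": "scale", "scale": [2.0, 2.0]}
          ]
        }
      ]
    }
  ]
}
```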

@sbesson
Member

sbesson commented Mar 1, 2022

Yes to all the above re clarifying the specification wherever needed

we have multiple positions and an ome.tif for each position; these are however from a single coordinate space, so it could make sense to merge them into a single image in ome.zarr; but this would require some special treatment for the instance segmentation; I will discuss this with Shila.

You are right: for each embryo of this dataset, the different positions (fields of view) are part of the same coordinate space overall. Ideally a user would like to access the full image according to the relative positions of the acquisitions.
My initial thought was to focus on generating an OME-NGFF representation (image + label) for each position, partly because I think we have a specification that covers these requirements and partly because I am not 100% sure of the best strategy for the merging.
One possibility is that the ongoing transformation/spaces work in #101 #84 would allow specifying metadata to register different multiscale images relative to one another in the same coordinate space and allow clients to build a merged representation. An alternative approach would be to stitch the arrays at the multiscale level and create a single multi-resolution OME-NGFF image.
Definitely happy to hear what Shila thinks of the above and/or have a follow-up discussion if needed.

@constantinpape
Contributor Author

One possibility is that the ongoing transformation/spaces work in #101 #84 would allow specifying metadata to register different multiscale images relative to one another in the same coordinate space and allow clients to build a merged representation

Yes, I think that this would be the best solution in principle. Note that we do have all the transformations required for this already (scale and translation), but we would need a way to specify them in a collection, which is not possible yet as far as I can see.
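Just to illustrate what I mean: per position we can already write scale and translation in the multiscales metadata, roughly like below (the values are made up), but there is no agreed-upon way yet to group several such images into one collection sharing that coordinate space:

```json
{
  "multiscales": [
    {
      "version": "0.4",
      "axes": [
        {"name": "y", "type": "space", "unit": "micrometer"},
        {"name": "x", "type": "space", "unit": "micrometer"}
      ],
      "datasets": [
        {
          "path": "0",
          "coordinateTransformations": [
            {"type": "scale", "scale": [0.5, 0.5]},
            {"type": "translation", "translation": [0.0, 1330.0]}
          ]
        }
      ]
    }
  ]
}
```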

An alternative approach would be to stitch the arrays at the multiscale level and create a single multi-resolution OME-NGFF image.

Yes, that would be the solution that's available now.

Definitely happy to hear what Shila thinks of the above and/or have a follow-up discussion if needed.

I will ask her; I think we have two options: if we need to have everything in the same coordinate space now, we need to merge the positions into a single image; otherwise it would be better to wait, add this to the user stories, and then develop the collection spec to support this use case.
