diff --git a/docs/_toc.yml b/docs/_toc.yml index 75e3da677..e0f866866 100644 --- a/docs/_toc.yml +++ b/docs/_toc.yml @@ -65,7 +65,6 @@ subtrees: - file: guides/event_loop - file: guides/threading - file: guides/3D_interactivity - - file: guides/rendering-explanation - file: guides/rendering - file: guides/performance - file: guides/preferences diff --git a/docs/guides/rendering-explanation.md b/docs/guides/rendering-explanation.md deleted file mode 100644 index 5c791c2c5..000000000 --- a/docs/guides/rendering-explanation.md +++ /dev/null @@ -1,320 +0,0 @@ -(rendering-explanation)= -# Rendering in napari - -## Status - -As of napari version 0.4.3 there are two opt-in experimental features -related to rendering. They can be accessed by setting the environment -variables `NAPARI_ASYNC=1` or `NAPARI_OCTREE=1`. See the Guide on Rendering -for specific information about those two features. This document is more of -a general backgrounder on our approach to rendering. - -## Framerate - -The most common screen refresh rate is 60Hz, so most graphics applications -try to draw at least 60Hz as well. If napari renders at 60Hz then any -motion, for example from panning and zooming the camera, will appear -smooth. If 60Hz cannot be achieved, however, it's important that napari -render as fast as possible. The user experience degrades rapidly as the -framerate gets slower: - -| Framerate | Milliseconds | User Experience | -| --------: | -----------: | :-------------- | -| 60Hz | 16.7 | Great | -| 30Hz | 33.3 | Good | -| 20Hz | 50 | Acceptable | -| 10Hz | 100 | Bad | -| 5Hz | 200 | Unusable | - -The issue is not just aesthetic. Manipulating user interface elements like -sliders becomes almost impossible if the framerate is really slow. This -creates a deeply frustrating experience for the user. Furthermore, if -napari "blocks" for several seconds, the operating system might indicate to -the user that the application is hung or has crashed. 
For example, macOS -will show the "spinning wheel of death". This is clearly not acceptable. - -A fast average framerate is important, but it's also important that napari -has as few isolated slow frames as possible. A framerate that jumps around -leads to something called [jank](http://jankfree.org/). For the best user -experience we want a framerate that's fast, but also one that's -consistently fast. - -## Array-like interface - -Napari renders data out of an array-like interface. The data can be owned -by any object that supports `NumPy`'s slicing syntax. One common such -object is a [Dask](https://www.dask.org/) array. The fact that napari can -render out of any array-like data is flexible and powerful, but it means -that simple array accesses can result in the execution of arbitrary code. -For example, an array access might result in disk IO or network IO, or even a -complex machine learning computation. This means array accesses can take an -arbitrarily long time to complete. - -## Asynchronous rendering - -Since we don't know how long an array access will take, and we never want -the GUI thread to block, we cannot access array-like objects in the GUI -thread. Instead, napari's rendering has to be done _asynchronously_. This -means rendering proceeds at full speed drawing only the data which is in -memory ready to be drawn, while in the background worker threads load more -data into memory to be drawn in the future. - -This necessarily means that napari will sometimes have to draw data that's -only partially loaded. For example, napari might have to show a lower -resolution version of the data, such that the data appears blurry until the -rest of the data has loaded in. There might even be totally blank portions -of the screen. - -Although showing the user partial data is not ideal, it's vastly better -than letting the GUI thread block and napari hang. If napari stays -responsive the user stays in control.
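The asynchronous loading pattern described above can be sketched with Python's standard library. This is a simplified illustration of the idea only, not napari's implementation; `load_chunk`, `request_chunk`, and `draw_frame` are hypothetical names:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def load_chunk(data, index):
    # Hypothetical stand-in for a slow array access (disk, network, or compute).
    time.sleep(0.01)
    return data[index]

loaded = {}  # chunks that are in RAM, ready to draw
executor = ThreadPoolExecutor(max_workers=4)

def request_chunk(data, index):
    # Kick off the load in a worker thread so the GUI thread never blocks.
    future = executor.submit(load_chunk, data, index)
    future.add_done_callback(lambda f: loaded.update({index: f.result()}))

def draw_frame():
    # Draw only what is already in memory; missing chunks stay blank or blurry.
    return sorted(loaded)

data = [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9, 10, 11]]
request_chunk(data, 0)
executor.shutdown(wait=True)  # in napari the GUI loop keeps running instead
print(draw_frame())
```

Here the GUI thread only ever inspects `loaded`; the slow `load_chunk` calls happen entirely on worker threads.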
The user can sit still and watch the -data load in, or they can navigate somewhere else entirely; they are free -to choose. - -Issues that napari has without asynchronous rendering include -[#845](https://github.com/napari/napari/issues/845), -[#1300](https://github.com/napari/napari/issues/1300), and -[#1320](https://github.com/napari/napari/issues/1320). - -## RAM and VRAM - -There is a two-step process to prepare data for rendering. First the data -needs to be loaded into RAM, then it needs to be transferred from RAM to -VRAM. Some hardware has "unified memory" where there is no actual VRAM, but -there is still a change of status when data goes from raw bytes in RAM to a -graphics "resource" like a texture or geometry that can be drawn. - -The transfer of data from RAM to VRAM must be done in the GUI thread. -Worker threads are useful for loading data into RAM in the background, but -we cannot load data into VRAM in the background. Therefore, to prevent -hurting the framerate we need to budget how much time is spent copying data -into VRAM; we can only do it for a few milliseconds per frame. - -![A diagram that shows how chunks of data are loaded from storage into RAM then VRAM. Each chunk is a row in a table. Each column represents a memory store or processing context. Paging and compute threads are used to load data from storage to RAM. The GUI thread is used to load data from RAM to VRAM. A subset of the rows are highlighted to show the working set of memory.](images/paging-chunks.png) - -## Chunks - -For paging into both RAM and VRAM it's a requirement that the data napari -renders is broken down into "chunks". A chunk is a deliberately vague term -for a portion of the data that napari can load and render independently. - -The chunk size needs to be small enough that the renderer can at least load -one chunk per frame into VRAM without a framerate glitch, so that over time -all chunks can be loaded into VRAM smoothly.
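As a rough sketch of what chunking means for a 2D image, the following splits an image shape into a grid of chunk indices, assuming a 256-pixel tile edge (`chunk_grid` is a hypothetical helper, not napari API):

```python
TILE = 256  # assumed chunk edge length

def chunk_grid(shape, tile=TILE):
    # Split a 2D image shape into (row, col) chunk indices,
    # rounding up so partial edge tiles are included.
    rows = -(-shape[0] // tile)  # ceiling division
    cols = -(-shape[1] // tile)
    return [(r, c) for r in range(rows) for c in range(cols)]

print(len(chunk_grid((1024, 700))))  # 4 rows x 3 columns = 12 chunks
```

Each index pair identifies one tile that can be loaded into RAM, and later VRAM, independently of the others.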
However, using chunks that are -too small is wasteful, since there is some overhead for each chunk. - -Napari's chunks play a role similar to packets on a network or blocks on -a disk. In all cases the goal is to break down large data into digestible -pieces that can be processed smoothly one at a time. - -## Renderer requirements - -The above discussion leads to two rigid requirements for rendering: - -1. Never call `asarray` on user data from the GUI thread, since we don't know - what it will do or how long it will take. -2. Always break data into chunks. The exact maximum chunk size is TBD. - -## Render algorithm - -The renderer computes a **working set** of chunks based on the current -view. The working set is the set of chunks that we want to draw in order to -depict the current view of the data. The renderer will step through every -chunk in the working set and do one of these three things: - -| Case | Action | -| :--------------------------- | :------------------------------------------ | -| Chunk is in VRAM | Render the chunk | -| Chunk is in RAM but not VRAM | Transfer the chunk to VRAM if there is time | -| Chunk is not in RAM | Ask the `ChunkLoader` to load the chunk | - -The important thing about this algorithm is that it never blocks. It draws -what it can draw without blocking, and then it loads more data so that it -can draw more in the future. - -### Chunked file formats - -Napari's rendering chunks will often correspond to blocks of contiguous -memory inside a chunked file format like -[Zarr](https://zarr.readthedocs.io/en/stable/), and exposed by an API like -Dask. The purpose of a chunked file format is to spatially organize the -data so that one chunk can be read with a single read operation. - -![chunked-format](images/chunked-format.png) - -For 2D images "chunks" are 2D tiles. For 3D images the chunks are 3D -sub-volumes.
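The three-case render algorithm tabulated above can be sketched as a plain Python loop. The names and the per-frame transfer budget are illustrative assumptions, not napari's implementation:

```python
def render(working_set, vram, ram, loader, budget=3):
    # One frame of the non-blocking render loop. `budget` caps how many
    # RAM -> VRAM transfers we allow per frame to protect the framerate.
    drawn = []
    for chunk in working_set:
        if chunk in vram:
            drawn.append(chunk)      # case 1: in VRAM, render it
        elif chunk in ram:
            if budget > 0:
                vram.add(chunk)      # case 2: transfer if there is time
                budget -= 1
        else:
            loader.append(chunk)     # case 3: ask the loader to fetch it
    return drawn

vram, ram, loads = {"a"}, {"a", "b"}, []
print(render(["a", "b", "c"], vram, ram, loads))  # draws ["a"] this frame
```

Note that no branch ever waits: chunk `"b"` becomes drawable next frame, and `"c"` is merely queued for a background load.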
-[Neuroglancer](https://opensource.google/projects/neuroglancer) recommends -that data is stored in 64x64x64 chunks, which means that each chunk -contains 262,144 voxels. Those 256k voxels can be read with one read -operation. Using cubic chunks is nice because you get the same performance -whether you are viewing the data in XY, XZ or YZ orientations. It's also -nice because you can scroll through slices quickly since on average 32 -slices above and below your current location are already in RAM. - -### Render chunks - -If a chunked file format is available, and those chunks are reasonably -sized, then Napari can use those chunks for rendering. If chunks are not -available, for example with issue -[#1300](https://github.com/napari/napari/issues/1300), or the chunks are -too large, then Napari will have to break the data into potentially smaller -"render chunks". - -Note that with issue [#1320](https://github.com/napari/napari/issues/1320) -the images are small so they are not chunked, but in that issue there are 3 -image **layers** per slice. In that case the *image layers are our chunks*. -In general we can get creative with chunks, they can be spatial or -non-spatial subdivisions. As long as something can be loaded and drawn -independently it can be a chunk. - -## Example: Computed layers - -In [#1320](https://github.com/napari/napari/issues/1320) the images are not -chunked since they are very small, but there are 3 layers per slice. These -per-slice layers are our chunks. Two layers are coming off disk quickly, -while one layer is computed, and that can take some time. - -Without asynchronous rendering we did not draw any of the layers until the -slowest one was computed. With asynchronous rendering the user can scroll -through the paged layers quickly, and then pause a bit to allow the -computed layer to load in. Asynchronous rendering greatly improves the -user's experience in this case. 
- -![example-1320](images/example-1320.png) - -## Octree - -The `NAPARI_ASYNC` flag enables the experimental `ChunkLoader` which -implements asynchronous loading. One step beyond this is `NAPARI_OCTREE` -which replaces the regular `Image` class with a new class called -`OctreeImage`, and replaces the `ImageVisual` with a new `TiledImageVisual`. - -The advantage of `OctreeImage` over `Image` is that it renders multi-scale -images using tiles. This is much more efficient than what the single `Image` class did, -particularly for remote data. - -An Octree is a hierarchical spatial subdivision data structure. See Apple's -nice [illustration of an -octree](https://developer.apple.com/documentation/gameplaykit/gkoctree): - -![octree](images/octree.png) - -Each level of the Octree contains a depiction of the entire dataset, but at -a different level of detail. In napari we call the data at full resolution -level 0. Level 1 is the entire data again, but downsampled by half, and so -on for each level. The highest level is typically the first level where the -downsampled data fits into a single tile. - -For 2D images the Octree is really just a Quadtree, but the intent is that -we'll have one set of Octree code that can be used for 2D images or 3D -volumes. So we use the name Octree in the code for both cases. - -A key property of the Octree is that if the user is looking at the data at -one level of detail, it's trivial to find the same data at a higher or -lower level of detail. The data is spatially organized so it's fast and -easy to jump from one level of detail to another. - -## Sparse Octree - -Napari does not construct or maintain an Octree for the whole dataset. The -Octree is created on the fly only for the portion of the data napari is -rendering. For some datasets level 0 of the Octree contains tens of -millions of chunks. No matter how little data we stored per chunk, it would -be slow and wasteful to create an octree that contains all of the data.
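The level structure described above implies how many levels an Octree needs: halve the largest dimension until the downsampled data fits in a single tile. This is a sketch assuming a 256-pixel tile; `octree_levels` is a hypothetical helper:

```python
import math

def octree_levels(max_dim, tile=256):
    # Count levels from full resolution (level 0) until the downsampled
    # data fits into a single tile.
    levels = 1
    while max_dim > tile:
        max_dim = math.ceil(max_dim / 2)  # each level halves the resolution
        levels += 1
    return levels

print(octree_levels(100_000))  # a 100k-pixel-wide image needs 10 levels
```

This logarithmic growth is why even enormous images need only a handful of levels.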
So -we only create the Octree where the camera is actively looking. - -## Beyond images - -Images are the marquee data type for napari, but napari can also display -geometry such as points, shapes and meshes. The `ChunkLoader` and Octree -will be used for all layer types, but there will be additional challenges -to make things work with non-image layers: - -1. Downsampling images is fast and well understood, but "downsampling" - geometry is called decimation and it can be slow and complicated. Also, - there is not one definitive decimation; there will be trade-offs between - speed and quality. -2. Sometimes we will want to downsample geometry into a format that - represents the data but does not look like the data. For example we - might want to display a heatmap instead of millions of tiny points. This - will require new code we did not need for the image layers. -3. With images the data density is spatially uniform but with geometry - there might be pockets of super high density data. For example the data - might have millions of points or triangles in a tiny geographic area. - This might tax the rendering in new ways that images did not. - -## Appendix - -### A. Threads and processes - -By default the `ChunkLoader` uses a `concurrent.futures` thread pool. -Threads are fast, simple, and well understood. All threads in a process -can access the same process memory, so nothing needs to be serialized or -copied. - -However, a drawback of using threads in Python is that only one thread can -hold the [Global Interpreter Lock -(GIL)](https://medium.com/python-features/pythons-gil-a-hurdle-to-multithreaded-program-d04ad9c1a63) -at a time. This means two threads cannot execute Python code at the same -time. - -This is not as bad as it sounds, because quite often Python threads will -release the GIL when doing IO or compute-intensive operations, if those -operations are implemented in C/C++. Many SciPy packages do their heaviest -computations in C/C++.
If the GIL is released those threads *can* run -simultaneously, since Python threads are first-class operating system -threads. - -However, if you do need to run Python bytecode fully in parallel, it might -be necessary to use a `concurrent.futures` process pool instead of a thread -pool. One downside of using processes is that memory is not shared between -processes by default, so the arguments to and from the worker process need -to be serialized, and not all objects can be easily serialized. - -The Dask developers have extensive experience with serialization, and their -library contains its own serialization routines. Long term we might decide -that napari should only support thread pools, and if you need processes you -should use napari with Dask. Basically, we might outsource multi-processing -to Dask. How exactly napari will interoperate with Dask is to be -determined. - -### B. Number of workers - -How many worker threads or processes should we use? The optimal number will -obviously depend on the hardware, but it also might depend on the workload. -One thread per core is a reasonable starting point, but a different number -of workers might be more efficient in certain situations. Our goal is to -have reasonable defaults that most users can live with, but provide -configuration settings for expert users to adjust if needed. - -### C. asyncio - -Python also has a newer concurrency mechanism called -[asyncio](https://docs.python.org/3/library/asyncio.html) which is -different from threads or processes: `asyncio` tasks are similar to -co-routines in other languages. The advantage of asyncio tasks is that they are -_much_ lighter weight than threads. - -For example, in theory you can have tens of thousands of concurrent -`asyncio` tasks in progress at the same time. They generally don't run in -parallel, but they can all be in progress in various states of completion -and worked on round-robin.
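As a small illustration of how lightweight `asyncio` tasks are, the following creates ten thousand of them; they are interleaved on a single thread rather than run in parallel:

```python
import asyncio

async def load(i):
    # Yield control so other tasks get a turn; tasks are interleaved,
    # not run in parallel.
    await asyncio.sleep(0)
    return i * 2

async def main():
    # Ten thousand tasks are cheap compared to ten thousand threads.
    results = await asyncio.gather(*(load(i) for i in range(10_000)))
    return sum(results)

print(asyncio.run(main()))
```

Spawning the equivalent number of OS threads would exhaust memory on most machines, which is the weight difference the text refers to.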
While we have no current plans to use `asyncio` -for rendering, we should keep in mind that it exists and it might be -something we can use down the road. - -### D. VRAM and Vispy - -With OpenGL you cannot directly manage VRAM. Instead we will implicitly -control what's in VRAM based on what [vispy](https://vispy.org/) objects -exist and what objects we are drawing. - -For example, if we page data into memory, but do not draw it, then it's in -RAM but it's not in VRAM. If we then create a vispy object for that chunk -and draw it, the data needed to draw that chunk will necessarily be put -into VRAM by `vispy` and OpenGL. - -Since it takes time to copy data into VRAM, we may need to throttle how -many new vispy objects we create each frame. For example, we might find -that we can only draw two or three new chunks per frame. So if we load ten -chunks, we might need to page that data into VRAM over four or five frames. diff --git a/docs/guides/rendering.md b/docs/guides/rendering.md index 04807afb0..934be5527 100644 --- a/docs/guides/rendering.md +++ b/docs/guides/rendering.md @@ -1,445 +1,350 @@ +--- +jupytext: + text_representation: + extension: .md + format_name: myst + format_version: 0.13 + jupytext_version: 1.10.3 +kernelspec: + display_name: Python 3 + language: python + name: python3 +--- + (rendering)= -# Asynchronous rendering - -As discussed in the explanations document on rendering, asynchronous -rendering is a feature that allows napari to stay usable and responsive -even when data is loading slowly. There are two experimental asynchronous -rendering features, they can be enabled using the environment variables -`NAPARI_ASYNC` and `NAPARI_OCTREE`. - -## NAPARI_ASYNC - -Running napari with `NAPARI_ASYNC=1` enables asynchronous rendering using -the existing {class}`~napari.layers.Image` class. The -{class}`~napari.layers.Image` class will no longer call -`np.asarray()` in the GUI thread. 
We do this so that if `np.asarray()` -blocks on IO or a computation, the GUI thread will not block and the -framerate will not suffer. - -To avoid blocking the GUI thread the -{class}`~napari.layers.Image` class will load chunks using the -new {class}`~napari.components.experimental.chunk._loader.ChunkLoader` -class. The -{class}`~napari.components.experimental.chunk._loader.ChunkLoader` will -call `np.asarray()` in a worker thread. When the worker thread finishes -it will call {meth}`~napari.layers.Image.on_chunk_loaded` with -the loaded data. The next frame {class}`~napari.layers.Image` -can display the new data. - -Without `NAPARI_ASYNC` napari will block when switching slices. Napari -will hang until the new slice has loaded. If the slice loads slowly enough -you might see the "spinning wheel of death" on a Mac indicating the process -is hung. - -Asynchronous rendering allows the user to interrupt the loading of a slice -at any time. The user can freely move the slice slider. This is especially -nice for remote or slow-loading data. - -### Multi-scale images - -With today's {class}`~napari.layers.Image` class there are no -tiles or chunks. Instead, whenever the camera is panned or zoomed napari -fetches all the data needed to draw the entire current canvas. This -actually works amazingly well with local data. Fetching the whole canvas of -data each time can be quite fast. - -With remote or other high latency data, however, this method can be very -slow. Even if you pan only a tiny amount, napari has to fetch the whole -canvas worth of data, and you cannot interrupt the load to further adjust -the camera. - -With `NAPARI_ASYNC` overall performance is the same, but the advantage is -you can interrupt the load by moving the camera at any time. This is a nice -improvement, but working with slow-loading data is still slow. Most large -image viewers improve on this experience with chunks or tiles. 
With chunks -or tiles when the image is panned the existing tiles are translated and -re-used. Then the viewer only needs to fetch tiles which newly slid onto -the screen. This style of rendering is what the `NAPARI_OCTREE` flag -enables. - -## NAPARI_OCTREE - -Set `NAPARI_OCTREE=1` to use the experimental -{class}`~napari.layers.image.experimental.octree_image.OctreeImage` class -instead of the normal {class}`~napari.layers.Image` class. The -new {class}`~napari.layers.image.experimental.octree_image.OctreeImage` -class will use the same -{class}`~napari.components.experimental.chunk._loader.ChunkLoader` that -`NAPARI_ASYNC` enables. In addition, `NAPARI_OCTREE` will use the new -{class}`~napari._vispy.experimental.tiled_image_visual.TiledImageVisual` -instead of the Vispy `ImageVisual` class that napari's -{class}`~napari.layers.Image` class uses. - -```{note} -The current `OCTREE` implementation only fully supports a single 2D image and -may not function with 3D images or multiple images. Improving support -for 3D and multiple images is part of future work on the `OCTREE`. +# Rendering + +This document explains how napari produces a 2- or 3-dimensional render in the canvas from layers' n-dimensional array-like data. +The intended audience is someone who wants to understand napari's rendering pipeline to help optimize its performance for their usage, +or someone who wants to contribute to and help improve the clarity or performance of napari's rendering pipeline itself. + +## Overview + +At a high level, rendering in napari is simple. + +1. Viewing: {attr}`ViewerModel.dims` defines which 2D or 3D region is currently being viewed. +2. Slicing: [`Layer._slice_dims`](https://github.com/napari/napari/blob/b3a8dd22895c913d8183735f52b9d1d71c963d7f/napari/layers/base/base.py#L1184) loads the corresponding 2D or 3D region of the layer's ND data into RAM. +3. 
Drawing: [`VispyBaseLayer._on_data_change`](https://github.com/napari/napari/blob/b3a8dd22895c913d8183735f52b9d1d71c963d7f/napari/_vispy/layers/base.py#L126) pushes the 2D or 3D sliced data from RAM to VRAM to be drawn on screen. + +But as the details of this document reveal, rendering in napari is, in fact, very complicated. + +Consider some of the more important reasons for this. + +- Multiple layers can have different extents with different transforms. +- Different layer types (e.g. Images vs. Points) handle slicing differently. +- Layer data can be large or slow to load into RAM. +- Sliced layer data may exceed the maximum texture size supported by the GPU. +- There are experimental settings that enable asynchronous slicing. + +As a result, rendering in napari is the source of many bugs and performance problems that we are actively trying to fix and improve. + +This document describes napari's simple rendering paths with pointers to the more powerful, unusual, and complicated ones. + +We will use scikit-image's 3D cells data as a running example throughout this documentation. + +```{code-cell} python +import napari + +viewer = napari.Viewer() +viewer.open_sample('napari', 'cells3d') +``` + +```{code-cell} python +:tags: [hide-input] +from napari.utils import nbscreenshot + +nbscreenshot(viewer, alt_text="3D cell nuclei and membranes rendered as 2D slices in the napari viewer") +``` + +## Dimensions + +The region visible in napari's canvas is almost entirely determined by the state in {attr}`ViewerModel.dims`. +Changes to the attributes and properties of this class typically represent changes to that region, which then triggers slicing and rendering +so that the canvas is updated to present layers' data in that region. + +### Range and world extent + +{attr}`Dims.range` describes the extent of all layers in their shared world coordinate system, +in addition to the `step` that should be taken in each dimension as its corresponding slider position changes. 
+ +In our running example, the range + +```{code-cell} python +viewer.dims.range +``` + +is solely determined by the shape of the data + +```{code-cell} python +viewer.layers[0].data.shape +``` + +because the layers have the same shape and identity transforms. + +### Point and selective slicing + +{attr}`Dims.point` describes the coordinates of the current slice plane in that same world coordinate system. + +In our running example, the default 2D view defines {attr}`Dims.point` to be + +```{code-cell} python +viewer.dims.point +``` + +which represents the mid-point through all three dimensions. +As the last two dimensions are visualized in the canvas, this represents the 2D plane through the middle of the first dimension. + +Only the dimensions in {attr}`Dims.not_displayed` have meaningful values in +{attr}`Dims.point` because all data in displayed dimensions is retained in a slice, +even though those data may not be visible in the canvas due to the current camera parameter values. + +The current slice plane can be changed using the sliders or using the API directly + +```{code-cell} python +viewer.dims.point = (0, 0, 0) +``` + +```{code-cell} python +:tags: [hide-input] +nbscreenshot(viewer, alt_text="3D cell nuclei and membranes rendered as 2D slices in the napari viewer") +``` + +again noting that the last two values are meaningless, but must be provided when using the API in this way. + +### Margins and thick slicing + +napari's API also has some support for performing thick slicing, which integrates over sub-volumes of data instead of selecting sub-regions. + +{attr}`Dims.margin_left` and {attr}`Dims.margin_right` +are offsets around {attr}`Dims.point` that define the start and end-points for that integration. +{attr}`Dims.thickness` is simply the sum of the two margins. 
+ +By default, these thick slicing attributes are all zero + +```{code-cell} python +print(f'{viewer.dims.margin_left=}') +print(f'{viewer.dims.margin_right=}') +print(f'{viewer.dims.thickness=}') +``` + +The margins can be changed individually to define an asymmetric window around `point`, +but it is more common to change {attr}`Dims.thickness` which defines a symmetric window instead + +```{code-cell} python +viewer.dims.point = (29, 0, 0) +viewer.dims.thickness = (16, 0, 0) +print(f'{viewer.dims.margin_left=}') +print(f'{viewer.dims.margin_right=}') +``` + +In order for these parameters to have an effect on slicing a layer, that layer must support thick slicing and must define an interesting `projection_mode`. +For example, we can use the mean data over the slicing region for one of the layers + +```{code-cell} python +viewer.layers[1].projection_mode = 'mean' +``` + +which takes an arithmetic mean across the slices in the window defined by the margins. +This effectively smooths the rendered slice across that window, which is particularly helpful when each individual slice is noisy. + +```{code-cell} python +:tags: [hide-input] +nbscreenshot(viewer, alt_text="3D cell nuclei and membranes rendered as 2D slices in the napari viewer") ``` -See {ref}`octree-config` for Octree configuration options. - -### Octree visuals - -The visual portion of Octree rendering is implemented by three classes: -{class}`~napari._vispy.experimental.vispy_tiled_image_layer.VispyTiledImageLayer`, -{class}`~napari._vispy.experimental.vispy_tiled_image_visual.TiledImageVisual`, -and {class}`~napari._vispy.experimental.texture_atlas.TextureAtlas2D`. - -The first two classes are named "tiled image" rather than "octree" because -currently they do not know that they are rendering out of an octree. We did -this intentionally to keep the visuals simpler and more general. 
However, -the approach has some limitations, and we might later need to create a -subclass of -{class}`~napari._vispy.experimental.vispy_tiled_image_visual.TiledImageVisual` -which is Octree-specific; see {ref}`future-work-atlas-2D`. - -The {class}`~napari._vispy.experimental.texture_atlas.TextureAtlas2D` class -is a subclass of the generic Vispy ``Texture2D`` class. Like ``Texture2D`` -the {class}`~napari._vispy.experimental.texture_atlas.TextureAtlas2D` class -owns one texture. However -{class}`~napari._vispy.experimental.texture_atlas.TextureAtlas2D` uses this -one texture as an "atlas" which can hold multiple tiles. - -For example, by default -{class}`~napari._vispy.experimental.texture_atlas.TextureAtlas2D` uses a -(4096, 4096) texture that stores 256 different (256, 256) pixel tiles. -Adding or removing a single tile from the full atlas texture is very fast. -Under the hood, adding one tile calls `glTexSubImage2D()` which only -updates the data in that specific (256, 256) portion of the full texture. - -Aside from the data transfer cost, -{class}`~napari._vispy.experimental.texture_atlas.TextureAtlas2D` is also -fast because we do not have to modify the scene graph or rebuild any -shaders when a tile is added or removed. In an early version of tiled -rendering we created a new `ImageVisual` for every tile. This resulted in -scene graph changes and shader rebuilds. At the time the scene graph -changes were causing crashes with `PyQt5`, but the atlas approach is better -for multiple reasons, so even if that crash were fixed the atlas is a -better solution. - -### Octree rendering - -The interface between the visuals and the Octree is the -{class}`~napari.layers.image.experimental.octree_image.OctreeImage` method -{meth}`~napari.layers.image.experimental.octree_image.OctreeImage.get_drawable_chunks`.
The method is called by the -{class}`~napari._vispy.experimental.vispy_tiled_image_layer.VispyTiledImageLayer` -method -{meth}`~napari._vispy.experimental.vispy_tiled_image_layer.VispyTiledImageLayer._update_drawn_chunks` -every frame so it can update which tiles are drawn. -{class}`~napari.layers.image.experimental.octree_image.OctreeImage` calls -the -{meth}`~napari.layers.image.experimental._octree_slice.OctreeSlice.get_intersection` -method on its -{class}`~napari.layers.image.experimental._octree_slice.OctreeSlice` to get -an -{class}`~napari.layers.image.experimental.octree_intersection.OctreeIntersection` -object which contains the "ideal chunks" that should be drawn for the -current camera position. - -The ideal chunks are the chunks at the preferred level of detail, the level -of detail that best matches the current canvas resolution. Drawing chunks -which are more detailed than this will look fine, since the graphics card will -downsample them to the screen resolution, but it's not efficient to use -higher resolution chunks than are needed. Meanwhile, drawing chunks that are -coarser than the ideal level will look blurry, but it's much better than -drawing nothing. - -The decision about what level of detail to use is made by the -{class}`~napari.layers.image.experimental._octree_loader.OctreeLoader` -class and its method -{meth}`~napari.layers.image.experimental._octree_loader.OctreeLoader.get_drawable_chunks`. -There are many different approaches one could take here as to what to -draw when. Today we are doing something reasonable but it could potentially -be improved. In addition to deciding what level of detail to draw for each -ideal chunk, the class initiates asynchronous loads with the -{class}`~napari.components.experimental.chunk._loader.ChunkLoader` for -chunks it wants to draw in the future. - -The loader will only use chunks from a higher resolution if they are -already being drawn, for example when zooming out.
However, it will never -initiate loads on higher resolution chunks, since it's better off loading -and drawing the ideal chunks. - -The loader will load lower resolution chunks in some cases. Although this -can slightly delay when the ideal chunks are loaded, it's a very quick way -to get reasonable looking "coverage" of the area of interest. Often data -from one or two levels up isn't even that noticeably degraded. This table -shows how many ideal chunks are "covered" by a chunk at a higher level: - -| Levels Above Ideal | Coverage | -| -----------------: | -------: | -| 1 | 4 | -| 2 | 16 | -| 3 | 64 | - -Although data 3 levels above will be quite blurry, it's pretty amazing you -can load one chunk and it will cover 64 ideal chunks. This is the heart of -the power of Octrees, Quadtrees or multiscale images. - -(octree-config)= -### Octree configuration file - -Setting `NAPARI_OCTREE=1` enables Octree rendering with the default -configuration. To customize the configuration set `NAPARI_OCTREE` to be -the path of a JSON config file, such as `NAPARI_OCTREE=/tmp/octree.json`. - -See {data}`~napari.utils._octree.DEFAULT_OCTREE_CONFIG` for the current -config file format: - -```python -{ - "loader_defaults": { - "log_path": None, - "force_synchronous": False, - "num_workers": 10, - "use_processes": False, - "auto_sync_ms": 30, - "delay_queue_ms": 100, - }, - "octree": { - "enabled": True, - "tile_size": 256, - "log_path": None, - "loaders": { - 0: {"num_workers": 10, "delay_queue_ms": 100}, - 2: {"num_workers": 10, "delay_queue_ms": 0}, - }, - }, -} +Thick slicing is still a work in progress (see [issue #5957](https://github.com/napari/napari/issues/5957)), +so feel free to suggest fixes and improvements. + +## Slicing + +Once the visible region in the world coordinate system is defined, +we need to transform that region into each layer's data coordinate system, +then read the layer's data in that region. 
+
+### Mapping from world to layer dimensions
+
+The first step maps the shared world dimensions to the layer dimensions.
+In the case that all layers have the same number of dimensions, this mapping is just the identity function.
+
+In the case that layers have different numbers of dimensions,
+napari uses the same approach as the [numpy broadcasting rules](https://numpy.org/doc/stable/user/basics.broadcasting.html#broadcasting)
+to right-align the dimensions to determine the mapping.
+
+For example, let's consider the case of one 2D and one 3D layer.
+
+| World | 0 | 1 | 2 |
+| ------- | - | - | - |
+| 2DLayer | | 0 | 1 |
+| 3DLayer | 0 | 1 | 2 |
+
+As before, the mapping from the world dimensions to the 3D layer's dimensions is the identity function.
+But the mapping from the world dimensions to the 2D layer's dimensions is a little trickier.
+In this case, the world's dimensions 1 and 2 map to the 2D layer's dimensions 0 and 1, respectively.
+
+Using our example, we can see this in practice by replacing the membrane layer with its 2D mean projection over its first dimension:
+
+```{code-cell} python
+import numpy as np
+from napari.layers import Image
+
+mean_data = np.mean(viewer.layers[0].data, axis=0)
+viewer.layers[0] = Image(mean_data, colormap=viewer.layers[0].colormap)
+world_dims = np.asarray(viewer.dims.order)
+layer0_dims = viewer.layers[0]._world_to_layer_dims(
+    world_dims=world_dims, ndim_world=3)
+layer1_dims = viewer.layers[1]._world_to_layer_dims(
+    world_dims=world_dims, ndim_world=3)
+print(f'{layer0_dims=}')
+print(f'{layer1_dims=}')
+```
+
+where `Layer._world_to_layer_dims` is a private method that is called as part of slicing.
+
+For simple cases like the above, this right-alignment approach tends to work well.
+
+But for more complex cases, it quickly runs into problems.
+For example, changing the dimensions that are displayed in the canvas (by changing {attr}`Dims.order`)
+causes the mapping to change (see [issue #3882](https://github.com/napari/napari/issues/3882)).
+
+We would love to fix these problems.
+There are a few related issues and conversations, but maybe the best way to track our progress is to follow [issue #5949](https://github.com/napari/napari/issues/5949)
+which aims to enrich napari's handling of dimensions in general.
+
+### Mapping from layer world to data coordinates
+
+After identifying the layer's dimensions that are in view, we need to define how we are going to slice its data across its dimensions that are *not* in view.
+In order to do this, we need to map from the layer's world coordinates to the layer's data coordinates.
+
+This is achieved with `Layer.world_to_data`, which transforms the world coordinates
+(which take into account the layer's transform properties like `Layer.scale`, `Layer.translate`, and `Layer.affine`)
+to data coordinates.
+
+Using our example, we can see this in practice by transforming the coordinates associated with the current slice plane:
+
+```{code-cell} python
+point = viewer.dims.point
+layer0_point = viewer.layers[0].world_to_data(point)
+layer1_point = viewer.layers[1].world_to_data(point)
+print(f'{layer0_point=}')
+print(f'{layer1_point=}')
```
-The `loader_defaults` key contains settings that will be used by the
-{class}`~napari.components.experimental.chunk._loader.ChunkLoader`.
-
-| Setting | Description |
-| :-------------------- | :--------------------------------------------------------- |
-| `log_path` | Write `ChunkLoader` log file to this path. For debugging. |
-| `force_synchronous` | If `true` the `ChunkLoader` loads synchronously. |
-| `num_workers` | The number of worker threads or processes. |
-| `use_processes` | If `true` use worker processes instead of threads. |
-| `auto_async_ms` | Switch to synchronous if loads are faster than this. |
-| `delay_queue_ms` | Delay loads by this much. |
-| `num_workers` | The number of worker threads or processes. |
-
-The `octree` key contains these settings:
-
-| Setting | Description |
-| :-------------------- | :--------------------------------------------------------- |
-| `enabled` | If `false` then use the old `Image` class. |
-| `tile_size` | Size of render tiles to use for rending. |
-| `log_path` | Octree specific log file for debugging. |
-| `loaders` | Optional custom loaders, see below. |
-
-The `loaders` key lets you define and configure multiple
-{class}`~napari.components.experimental.chunk._pool.LoaderPool` pools. The
-key of each loader is the level relative to the ideal level. In the above
-example configuration we define two loaders. The first with key `0` is for
-loading chunks at the ideal level or one above. While the second with key
-`2` will load chunks two above the ideal level or higher.
-
-Each loader uses the `loader_defaults` but you can override the
-`num_workers`, `auto_sync_ms` and `delay_queue_ms` values in
-each loader defined in `loaders`.
-
-### Multiple loaders
-
-We allow multiple loaders to improve loading performance. There are a lot
-of different strategies one could use when loading chunks. For example,
-we tend to load chunks at a higher level prior to loading the chunks
-at the ideal level. This gets "coverage" on the screen quickly, and then
-the data can be refined by loading the ideal chunks.
-
-One consideration is during rapid movement of the camera it's easy to clog
-up the loader pool with workers loading chunks that have already moved out
-of view. The
-{class}`~napari.components.experimental.chunk._delay_queue.DelayQueue` was
-created to help with this problem.
-
-While we can't cancel a load if a worker has started working on it, we can
-trivially cancel loads that are still in our delay queue. If the chunk goes
-out of view, we cancel the load. If the user pauses for a bit, we initiate
-the loads.
- -With multiple loaders we can delay the ideal chunks, but we can configure -zero delay for the higher levels. A single chunk from two levels up will -cover 16 ideal chunks. So immediately loading them is a good way to get -data on the screen quickly. When the camera stops moving the -{class}`~napari.components.experimental.chunk._pool.LoaderPool` for the -ideal layer will often be empty. So all of those workers can immediately -start loading the ideal chunks. - -The ability to have multiple loaders was only recently added. We still need -to experiment to figure out the best configuration. And figure out how that -configuration needs to vary based on the latency of the data or other -considerations. - -### Future work: Compatibility with the existing Image class - -The focus for initial Octree development was Octree-specific behaviors and -infrastructure. Loading chunks asynchronously and rendering them as -individual tiles. One question we wanted to answer was will a Python/Vispy -implementation of Octree rendering be performant enough? Because if not, we -might need a totally different approach. It's not been fully proven out, -but it seems like the performance will be good enough, so the next step is -full compatibility with the existing -{class}`~napari.layers.Image` class. - -The {class}`~napari.layers.image.experimental.octree_image.OctreeImage` -class is derived from {class}`~napari.layers.Image`, while -{class}`~napari._vispy.experimental.vispy_tiled_image_layer.VispyTiledImageLayer` -is derived from {class}`~napari._vispy.vispy_image_layer.VispyImageLayer`, -and -{class}`~napari._vispy.experimental.tiled_image_visual.TiledImageVisual` is -derived from the regular Vispy `ImageVisual` class. To bring full -{class}`~napari.layers.Image` capability to -{class}`~napari.layers.image.experimental.octree_image.OctreeImage` in most -cases we just need to duplicate what those base classes are doing, but do -it on a per-tile bases. 
Since there is no full image for them to operate -on. This might involve chaining to the base class or it could mean -duplicating that functionality somehow in the derived class. - -Some {class}`~napari.layers.Image` functionality that needs to -be duplicated in Octree code: - -#### Contrast limits and color transforms - -The contrast limit code in Vispy's `ImageVisual` needs to be moved into -the tiled visual's -{meth}`~napari._vispy.experimental.tiled_image_visual.TiledImageVisual._build_texture`. -Instead operating on `self.data` it needs to transform tile's which are newly -being added to the visual. The color transform similarly needs to be per-tile. - -#### Blending and opacity - -It might be hard to get opacity working correctly for tiles where loads are -in progress. The way -{class}`~napari._vispy.experimental.tiled_image_visual.TiledImageVisual` -works today is the -{class}`~napari.layers.image.experimental._octree_loader.OctreeLoader` -potentially passes the visual tiles of various sizes, from different levels -of the Octree. The tiles are rendered on top of each other from largest -(coarsest level) to smallest (finest level). This is a nice trick so that -bigger tiles provide "coverage" for an area, while the smaller tiles add -detail only where that data has been loaded. - -However, this breaks blending and opacity. We draw multiple tiles on top of -each other, so the image is blending with itself. One solution which is -kind of a big change is keep -{class}`~napari._vispy.experimental.tiled_image_visual.TiledImageVisual` -for the generic "tiled" case, but introduce a new `OctreeVisual` that -knows about the Octree. It can walk up and down the Octree chopping up -larger tiles to make sure we do not render anything on top of anything -else. - -Until we do that, we could punt on making things look correct while loads -are in progress. 
We could even highlight the fact that a tile has not been -fully loaded (purposely making it look different until the data is fully -loaded). Aside from blending, this would address a common complaint with -tiled image viewers: you often can't tell if the data is still being -loaded. This could be a big issue for scientific uses, you don't want -people drawing the wrong conclusions from the data. - -#### Time-series multiscale - -To make time-series multiscale work should not be too hard. We just need to -correctly create a new -{class}`~napari.layers.image.experimental._octree_slice.OctreeSlice` every -time the slice changes. - -The challenge will probably be performance. For starters we probably need -to stop creating the "extra" downsampled levels, as described in {ref}`future-work-atlas-2D`. We need to make sure constructing and -tearing down the Octree is fast enough, and make sure loads for the -previous slices are canceled and everything is cleaned up. - - -(future-work-atlas-2D)= -### Future work: Extending TextureAtlas2D - -We could improve our -{class}`~napari._vispy.experimental.texture_atlas.TextureAtlas2D` class in -a number of ways: - -1. Support setting the atlas's full texture size on the fly. -2. Support setting the atlas's tile size on the fly. -3. Support a mix of tiles sizes in one atlas. -4. Allow an atlas to have more than one backing texture. - -One reason to consider these changes is so we could support "large tiles" -in certain cases. Often the coarsest level of multi-scale data "in the -wild" is much bigger than one of our (256, 256) tiles. Today we solve that -by creating additional Octree levels, downsampling the data until the -coarsest level fits within a single tile. - -If we could support multiple tiles sizes and multiple backing textures, we -could potentially have "interior tiles" which were small, but then allow -large root tiles. Graphics cards can handle pretty big textures. 
A layer
-that's (100000, 100000) obviously needs to be broken into tiles, but a
-layer that's (4096, 4096) really does not need to be broken into tiles.
-That could be a single large tile.
-
-Long term it would be nice if we did not have to support two image classes:
-{class}`~napari.layers.Image` and
-{class}`~napari.layers.image.experimental.octree_image.OctreeImage`.
-Maintaining two code paths and two sets of visuals will become tiresome and
-lead to discrepancies and bugs.
-
-Instead, it would be nice if
-{class}`~napari.layers.image.experimental.octree_image.OctreeImage` became
-the only image class. One image class to rule them all. For that to happen,
-though, we need to render small images just as efficiently as the
-{class}`~napari.layers.Image` class does today. We do not want
-Octree rendering to worsen cases which work well today. To keep today's
-performance for smaller images we probably need to add support for variable
-size tiles.
-
-### Future work: Level-zero-only Octrees
-
-In issue [#1300](https://github.com/napari/napari/issues/1300) it takes
-1500ms to switch slices. There we are rendering a (16384, 16384) image that
-is entirely in RAM. The delay is not from loading into RAM, it's already in
-RAM, the delay is from transferring all that data to VRAM in one big gulp.
-
-The image is not a multi-scale image. So can we turn it into a muli-scale
-image? Generally we've found downsampling to create multi-scale image
-layers is slow. So the question is how can we draw this large image without
-hanging? One idea is we could create an Octree that only has a level zero
-and no downsampled levels.
-
-This is an option because chopping up a `NumPy` array into tiles is very
-fast. This chopping up phase is really just creating a bunch of "views"
-into the single existing array. So creating a level zero Octree should be
-very fast. 
For there we can use our existing Octree code and our existing -{class}`~napari._vispy.experimental.vispy_tiled_image_visual.TiledImageVisual` -to transfer over one tile at a time without hurting the frame rate. - -The insight here is our Octree code is really two things, one is an Octree -but two is a tiled or chunked image, basically a flat image chopped into a -grid of tiles. How would this look to the user? With this approach -switching slices would be similar to panning and zooming a multiscale -Octree image, you'd see the new tiles loading in over time, but the -framerate would not tank, and you could switch slices at any time. - -### Future work: Caching - -Basically no work has gone into caching or memory management for Octree -data. It's very likely there are leaks and extended usage will run out of -memory. This hasn't been addressed because using Octree for long periods of -time is just now becoming possible. - -One caching issue is figuring out how to combine the `ChunkCache` with -Dasks's built-in caching. We probably want to keep the `ChunkCache` for -rendering non-Dask arrays? But when using Dask, we defer to its cache? We -certainly don't want to cache the data in both places. - -Another issue is whether to cache `OctreeChunks` or tiles in the visual, -beyond just caching the raw data. If re-creating both is fast enough, the -simpler thing is evict them fully when a chunk falls out of view. And -re-create them if it comes back in view. It's simplest to keep nothing but -what we are currently drawing. - -However if that's not fast enough, we could have a MRU cache of -`OctreeChunks` and tiles in VRAM, so that reviewing the same data is -nearly instant. This is adding complexity, but the performance might be -worth it. +These data coordinates are still continuous values that may not perfectly align with data array indices and may even fall outside of the valid range of the layer's data array. 
+But after clamping and rounding the coordinates, the resulting indices can be used to look up the subset of the layer's data that is in view.
+The exact form of this look-up depends on whether the layer is an image-like layer.
+
+When a layer's transform state includes non-trivial rotations, slicing is limited as described in some related issues (e.g. [#2616](https://github.com/napari/napari/issues/2616)).
+That's because the slicing operation is no longer selecting an axis-aligned region of the layer's data.
+While there are some ideas to improve this (see [#3783](https://github.com/napari/napari/issues/3783)), there are no active efforts in development.
+
+### Loading layer data
+
+Once we have the slice indices into a layer's data, we need to load the corresponding region of data into RAM.
+
+### Loading array-like image data
+
+{class}`Image` layer data does not have a single specific type (e.g. numpy's `ndarray`).
+Instead it must only have the attributes and methods defined in [`LayerDataProtocol`](https://github.com/napari/napari/blob/eab7661459e70479c7c7d587a36463f3b099b64a/napari/layers/_data_protocols.py#L51).
+
+Numpy's `ndarray` is compatible with this protocol, but so are array types from other packages like [Dask](https://docs.dask.org/en/latest/array.html), [Zarr](https://zarr.readthedocs.io/en/stable/_autoapi/zarr.core.Array.html), and more.
+This flexibility allows you to refer to image data that does not fit in memory or still needs to be lazily computed, without complicating napari's core implementation at all.
+
+However, it also means that simply reading image data may be slow because the data must be read from disk, downloaded across a network, or calculated from a compute graph.
+This means array accesses can take an arbitrarily long time to complete.
+
+### Loading multi-scale image data
+
+`Image` and `Labels` layers also support multi-scale image data, where multiple resolutions of the same image content are stored.
+Similarly to regular image data, this is supported by defining a [`MultiScaleData`](https://github.com/napari/napari/blob/eab7661459e70479c7c7d587a36463f3b099b64a/napari/layers/_multiscale_data.py#L13) protocol.
+As this protocol is mostly just `Sequence[LayerDataProtocol]`, this comes with the same flexibility and arbitrary load times.
+
+However, rendering multi-scale image data differs from regular image data because we must choose which scale or data level to load.
+In order to do this, [`compute_multiscale_level`](https://github.com/napari/napari/blob/40ac1fb242d905d503aed8200099efd02ebceb95/napari/layers/utils/layer_utils.py#L532)
+uses the canvas' field of view and the canvas' size in screen pixels to find the finest resolution data level that ensures that there is at least one layer data pixel per screen pixel.
+As a part of these calculations, {attr}`Layer.corner_pixels` is updated to store the top-left and bottom-right corner of the canvas' field of view in the data coordinates of the currently rendered level.
+
+This means that whenever the canvas' camera is panned or zoomed, napari fetches all the data needed to draw the current field of view.
+While this can work well with local data, it will be slow with remote or other high-latency data.
+
+### Loading non-image data
+
+Other layer types, like {class}`Points` and {class}`Shapes`, have layer-specific data structures.
+Therefore, they also have layer-specific slicing logic and associated data reads.
+They also do not currently support data protocols, which makes them less flexible, but more predictable.
+This may change in the future.
+
+### Asynchronous slicing
+
+Since we don't know how long an array access will take, and we never want the GUI thread to block, we should not access array-like objects in the main or GUI thread.
+Instead, napari's rendering can be done _asynchronously_.
+This means rendering proceeds at full speed, drawing only the data that is already in memory,
+while in the background worker threads load more data into memory to be drawn in the future.
+This also allows you to continue interacting with napari while data is being fetched.
+
+#### Past
+
+Before napari v0.4.3, all slicing was performed on the main thread.
+
+From v0.4.3, two experimental implementations were introduced to perform slicing asynchronously.
+These implementations could be enabled using the `NAPARI_ASYNC` and `NAPARI_OCTREE` settings.
+To understand how to use these in napari v0.4, see the [associated documentation](https://napari.org/0.4.19/guides/rendering.html).
+
+:::{warning}
+These implementations are unfinished and not well maintained, so they may not work at all on some later v0.4.* versions.
+:::
+
+#### Present
+
+In napari v0.5, the prior implementations were removed in favor of the approach described in [NAP-4 — Asynchronous slicing](https://napari.org/dev/naps/4-async-slicing.html) for the reasons given in that document.
+
+This effort is tracked by [issue #4795](https://github.com/napari/napari/issues/4795).
+It is partially complete as an experimental setting that should at least work for image-like layers.
+To enable the experimental setting, change it in napari's settings or preferences,
+or set `NAPARI_ASYNC=1` as an environment variable before running napari.
+
+The key code changes push all slicing (including synchronous slicing) through a dedicated controller [`_LayerSlicer`](https://github.com/napari/napari/blob/b3a8dd22895c913d8183735f52b9d1d71c963d7f/napari/components/_layer_slicer.py#L80),
+and define all the layer-specific slicing logic in dedicated callable classes (e.g. [`_ImageSliceRequest`](https://github.com/napari/napari/blob/b3a8dd22895c913d8183735f52b9d1d71c963d7f/napari/layers/image/_slice.py#L154)).
+An instance of one of these callables captures all the state needed to perform slicing,
+so that it can be executed asynchronously on another thread without needing to guard competing access to that state with locks.
+
+Unfortunately, these new additions make following the old synchronous slicing code paths more complicated.
+But eventually we hope to mostly remove those complications and make both synchronous and asynchronous slicing consistent and easy enough to follow.
+
+#### Future
+
+The current experimental asynchronous slicing approach is limited.
+While it prevents napari from blocking the main thread, fetching and rendering the data in view can still be slow.
+
+Most large image viewers improve on this experience by progressively fetching and rendering chunks or tiles of data.
+This allows some data to be presented quickly rather than waiting for everything in view,
+which often results in a much better user experience when fetching the data is slow.
+
+These efforts across all layers are generally tracked by [issue #5942](https://github.com/napari/napari/issues/5942).
+Currently, most of the focus is on the image layer in [issue #5561](https://github.com/napari/napari/issues/5561),
+with lots of progress towards that in [PR #6043](https://github.com/napari/napari/pull/6043).
+
+## Drawing
+
+After the current view of a layer's data has been sliced into RAM, the data is pushed to VRAM with any associated transforms and parameters.
+
+Each napari layer type has a corresponding vispy layer type.
+For example, the [`VispyImageLayer`](https://github.com/napari/napari/blob/5e8dc098cb213c5f963524e619f223ad4fe90be8/napari/_vispy/layers/base.py#L21)
+corresponds to the {class}`Image` layer.
+For each instance of a layer, there is a corresponding instance of its type's vispy layer.
+These correspondences can be found in [`VispyCanvas.layer_to_visual`](https://github.com/napari/napari/blob/5e8dc098cb213c5f963524e619f223ad4fe90be8/napari/_vispy/canvas.py#L69). + +The vispy layer instance has a reference to its corresponding layer. +Updates to the layer's state and its current slice are handled using [napari's event system](connect-napari-event). +Of particular interest here is the [`Layer.events.set_data` event](layer-events), which is connected to the abstract method +[`VispyBaseLayer._on_data_change`](https://github.com/napari/napari/blob/5e8dc098cb213c5f963524e619f223ad4fe90be8/napari/_vispy/layers/base.py#L74). +This event is triggered when slicing is finished and the latest slice state can be read. + +Each vispy layer type is responsible for implementing the `_on_data_change` method. +The implementation of this method should read the updated state from the layer, then update the vispy layer appropriately. +In turn, vispy makes the appropriate updates to VRAM and executes any programs needed to update the display on napari's canvas. 
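The event-driven handoff between a layer and its vispy counterpart can be sketched with plain Python stand-ins. These classes are illustrative only — they are not napari's real `Layer` or `VispyBaseLayer` API — but they show the shape of the `set_data`/`_on_data_change` connection described above.

```python
class Event:
    """Tiny stand-in for napari's event emitter (illustration only)."""

    def __init__(self):
        self._callbacks = []

    def connect(self, callback):
        self._callbacks.append(callback)

    def emit(self):
        for callback in self._callbacks:
            callback()


class FakeLayer:
    """Hypothetical layer: slicing updates private state, then emits set_data."""

    def __init__(self, data):
        self.data = data
        self._slice = None
        self.set_data = Event()

    def slice_at(self, index):
        self._slice = self.data[index]  # read the requested slice into RAM
        self.set_data.emit()            # slicing finished: notify listeners


class FakeVispyLayer:
    """Hypothetical vispy-side counterpart implementing _on_data_change."""

    def __init__(self, layer):
        self.layer = layer
        self.node_data = None
        layer.set_data.connect(self._on_data_change)

    def _on_data_change(self):
        # Read the layer's latest slice; in napari this is where the new
        # data would be pushed to the vispy node and on to VRAM.
        self.node_data = self.layer._slice


layer = FakeLayer(data=[[1, 2], [3, 4], [5, 6]])
visual = FakeVispyLayer(layer)
layer.slice_at(1)
print(visual.node_data)  # [3, 4]
```

The key design point is that the vispy layer never polls: it only reads the layer's slice state when the event tells it that slicing has finished, which is what makes the same code path work whether the slice was computed synchronously or on a worker thread.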
+ +```{code-cell} python +:tags: [remove-cell] +viewer.close() +``` \ No newline at end of file diff --git a/docs/naps/4-async-slicing.md b/docs/naps/4-async-slicing.md index 9d1a15ac8..e94b3d0b3 100644 --- a/docs/naps/4-async-slicing.md +++ b/docs/naps/4-async-slicing.md @@ -1,6 +1,6 @@ (nap-4-async-slicing)= -# NAP-4: Asynchronous slicing +# NAP-4 — Asynchronous slicing ```{eval-rst} :Author: Andy Sweet , Jun Xi Ni, Eric Perlman, Kim Pevey diff --git a/docs/tutorials/fundamentals/viewer.md b/docs/tutorials/fundamentals/viewer.md index bd9cc0694..0210fb21b 100644 --- a/docs/tutorials/fundamentals/viewer.md +++ b/docs/tutorials/fundamentals/viewer.md @@ -280,7 +280,7 @@ You can also set the axis labels programatically as follows: ```{code-cell} python # To set new axis labels viewer.dims.axis_labels = ("label_1", "label_2") -``` +``` It is also possible to mix data of different shapes and dimensionality in different layers. If a 2D and 4D dataset are both added to the viewer then the sliders will affect only the 4D dataset, the 2D dataset will remain the same. Effectively, the two datasets are broadcast together using [NumPy broadcasting rules](https://numpy.org/doc/stable/user/basics.broadcasting.html). @@ -317,7 +317,7 @@ In this example there are three dimensions. In order to get or update the curren ```{code-cell} python # To get the current position returned as tuple of length 3 -viewer.dims.current_step +viewer.dims.current_step ``` And to change the current position of the sliders use: ```{code-cell} python @@ -421,26 +421,25 @@ viewer.camera.perspective = 45 #### Roll dimensions -The third button rolls the dimensions that are currently displayed in the viewer. -For example if you have a `ZYX` volume and are looking at the `YX` slice, this -will then show you the `ZY` slice. 
You can also right-click this button to pop-up
-a widget that allows you to re-order the dimensions by drag-and-drop or lock a
+The third button rolls the dimensions that are currently displayed in the viewer.
+For example, if you have a `ZYX` volume and are looking at the `YX` slice, this
+will then show you the `ZY` slice. You can also right-click this button to pop up
+a widget that allows you to re-order the dimensions by drag-and-drop or lock a
 dimension, by clicking on the padlock icon:
 
 ![image: roll dimensions widget with padlock icons](../assets/tutorials/dims_roll_lock_widget.png){ w=200px }
 
-Locking prevents a dimension from being rolled (reordered). This can be particularly
-useful, for example, with a `3D+time` dataset where you may want to fix the time dimension,
+Locking prevents a dimension from being rolled (reordered). This can be particularly
+useful, for example, with a `3D+time` dataset where you may want to fix the time dimension,
 while being able to roll through the spatial dimensions.
-
-The dimension order can also be checked programatically as follows:
+The dimension order can also be checked programmatically as follows:
 ```{code-cell} python
 # To get the current dimension order as tuple of int
 viewer.dims.order
 ```
-And then, changed programatically as follows:
+And then, changed programmatically as follows:
 ```{code-cell} python
 # To change the current dimension order
 viewer.dims.order = (2, 1, 0)
@@ -454,7 +453,7 @@ The fourth button transposes the displayed dimensions.
 
 #### Grid button
 
-Then there is a grid button that toggles grid mode. When clicked it displays each layer of the image in its own tile. You can right-click this button to adjust the way the tiles are presented, such as the grid dimensions, the order of the layers in the tiles, and whether layers are overlayed in the tiles.
+Then there is a grid button that toggles grid mode. When clicked it displays each layer of the image in its own tile. 
You can right-click this button to adjust the way the tiles are presented, such as the grid dimensions, the order of the layers in the tiles, and whether layers are overlaid in the tiles.
 
 #### Home button
 
@@ -472,7 +471,7 @@ The right side of the status bar contains some helpful tips depending on which l
 
 ## Right-click menu
 
 A context-sensitive menu is available when you right-click on any of the layers. The type of layer determines which options are available. Note that if you have multiple layers selected, the menu actions will affect all of the selected layers. The options that are not available for a layer are greyed out. The following options are available depending on which layer type you have selected:
-* **Toggle visibility** - invert the visbility state (hides or show) of selected layers: hidden layers will be shown, visibile layers will be hidden.
+* **Toggle visibility** - invert the visibility state (hide or show) of selected layers: hidden layers will be shown, visible layers will be hidden.
 * **Show All Selected Layers** - Set all selected layers to visible.
 * **Hide All Selected Layers** - Set all selected layers to hidden.
 * **Show All Unselected Layers** - Set all *unselected* layers to visible.
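The dimension broadcasting mentioned earlier — for example, a 2D and a 4D dataset sharing the viewer's sliders — right-aligns dimensions the way NumPy broadcasting does. As a rough, hypothetical re-implementation of that mapping (napari's real version is the private `Layer._world_to_layer_dims` method):

```python
def world_to_layer_dims(world_dims, ndim_layer):
    """Map world dimension indices to layer dimension indices by
    right-aligning them, as in NumPy broadcasting (illustrative sketch only)."""
    ndim_world = len(world_dims)
    offset = ndim_world - ndim_layer
    # World dims that fall off the left edge have no layer counterpart.
    return [d - offset for d in world_dims if d - offset >= 0]

# A 4D world shared with a 2D layer: world dims 2 and 3 map to layer
# dims 0 and 1, so only the last two sliders affect the 2D layer.
print(world_to_layer_dims([0, 1, 2, 3], ndim_layer=2))  # [0, 1]
print(world_to_layer_dims([0, 1, 2, 3], ndim_layer=4))  # [0, 1, 2, 3]
```

This is why, in the mixed 2D/4D example above, moving the extra sliders leaves the 2D layer unchanged: those world dimensions simply have no counterpart in the smaller layer.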