Developer | Ko Sugawara |
Forum | Image.sc forum. Please post feedback and questions to the forum. It is important to add the tag `elephant` to your posts so that we can reach you quickly. |
Source code | GitHub |
Publication | Sugawara, K., Çevrim, C. & Averof, M. Tracking cell lineages in 3D by incremental deep learning. eLife 2022. doi:10.7554/eLife.69380 |
ELEPHANT is a platform for 3D cell tracking, based on incremental and interactive deep learning.
It implements a client-server architecture. The server is built as a web application that serves deep learning-based algorithms.
The client application is implemented by extending Mastodon, providing a user interface for annotation, proofreading and visualization.
Please find below the system requirements for each module.
The latest version of ELEPHANT is distributed using Fiji.
Please install Fiji on your system and update the components using ImageJ updater.
<iframe width="560" height="640" src="https://www.youtube-nocookie.com/embed/l5Qa53m5A7Q" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>

- Start Fiji.
- Run `Help > Update...` from the menu bar.
- Click on the button `Manage update sites`.
- Tick the checkboxes for the `ELEPHANT` and `Mastodon` update sites.
- Click on the button `Close` in the `Manage update sites` dialog.
- Click on the button `Apply changes` in the `ImageJ Updater` dialog.
- Restart Fiji.
Info ℹ️ |
When there is an update, ImageJ Updater will notify you. Alternatively, you can check the updates manually by running Help > Update... . It is recommended to keep up-to-date with the latest version of ELEPHANT to follow the new features and bug fixes. |
---|
To start working with ELEPHANT, you need to prepare a Mastodon project.
Download the demo data and extract the files as below.
elephant-demo
├── elephant-demo.h5
└── elephant-demo.xml
Alternatively, you can follow the instructions here to prepare these files for your own data.
Info ℹ️ |
ELEPHANT provides a command line interface to convert image data stored in Cell Tracking Challenge style to the BDV format. |
---|
# Convert image data stored in Cell Tracking Challenge style to the BDV format
Fiji.app/ImageJ-linux64 --ij2 --headless --console --run Fiji.app/scripts/ctc2bdv.groovy "input='CTC_TIF_DIR', output='YOUR_DATA.xml', sizeX=0.32, sizeY=0.32, sizeZ=2.0, unit='µm'"
Click the `new Mastodon project` button in the `Mastodon launcher` window and click the `browse` button on the right.
Specify the `.xml` file for the dataset and click the `create` button on the bottom right.
Now, you will see the main window as shown below.
To save the project, run `File > Save Project`, click the `save` / `save as...` button in the main window, or use the shortcut `S`. This generates a `.mastodon` file.
A `.mastodon` project file can be loaded by running `File > Load Project` or clicking the `open Mastodon project` button in the `Mastodon launcher` window.
The `Control Panel` is displayed by default at startup. If you cannot find it, run `Plugins > ELEPHANT > Window > Control Panel` to show it.
The `Control Panel` shows the status of the servers (the ELEPHANT server and the RabbitMQ server).
It also provides functions for setting up the servers.
Info ℹ️ |
The ELEPHANT server provides main functionalities (e.g. detection, linking), while the RabbitMQ server is used to send messages to the client (e.g. progress, completion). |
---|
Info ℹ️ |
DEPRECATED: Google Colab has updated its policy and restricted the use of SSH, which ELEPHANT relies on to establish the connection. Please consider using Google Cloud instead. |
---|
Here, we will set up the servers using Google Cloud, a cloud computing service provided by Google. You don't need a high-end GPU or a Linux machine to start using ELEPHANT's deep learning capabilities. Please follow the instructions in the video below to get started.
<iframe width="560" height="640" src="https://www.youtube.com/embed/JUPIYq6jAEA" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>

Info ℹ️ |
Advanced options for the server setup can be found here |
---|
Click the `bdv` button in the main window.
The following window will pop up.
ELEPHANT inherits the user-friendly shortcuts from Mastodon. To follow this documentation, please open the `Keymap` tab in `File > Preferences...` and select the `Elephant` keymap from the pull-down list.
The following table summarizes the frequently used actions in the BDV window.
Info ℹ️ |
If you are already familiar with Mastodon, please note that some shortcuts are modified from the default shortcuts. |
---|
Action | Shortcut |
---|---|
Move in X & Y | Right-click + mouse-drag |
Move in Z | Mouse-wheel (press and hold `Shift` to move faster, `Ctrl` to move slower) |
Align view with XY / YZ / XZ planes | `Shift`+`Z` (XY plane), `Shift`+`X` (YZ plane), `Shift`+`Y` (XZ plane) |
Zoom / Unzoom | `Ctrl`+`Shift`+mouse-wheel |
Next timepoint | `3` |
Previous timepoint | `2` |
Brightness and color dialog | `P` |
Save display settings | `F11` |
Open a new BDV window | `V` |
Add a new spot | `A` |
Move the highlighted spot | `Space`+mouse-drag |
Remove the highlighted spot | `D` |
Navigate to the highlighted spot | `W` |
Increase / Decrease the radius of the highlighted spot | `E` / `Q` (medium step), `Shift`+`E` / `Shift`+`Q` (coarse step), `Ctrl`+`E` / `Ctrl`+`Q` (fine step), `Alt`+`E` / `Alt`+`Q` (selected axis with medium step; ELEPHANT only) |
Select axis | `Alt`+`X` (X axis), `Alt`+`Y` (Y axis), `Alt`+`Z` (Z axis); ELEPHANT only |
Rotate the highlighted spot | `Alt`+`←` (counterclockwise) / `Alt`+`→` (clockwise); ELEPHANT only |
In the following steps, we use multiple BDV windows to visualize a 3D volume along different axes. Please open three BDV windows by clicking the `bdv` button in the main window (shortcut: `V`), then rotate each of them to show the XY, XZ or YZ plane using the shortcuts (`Shift`+`Z`, `Shift`+`Y` or `Shift`+`X`). Finally, synchronize them by clicking the key icon at the top left of each BDV window.
Ellipsoids can be added and manipulated to annotate spots (e.g. nuclei) using shortcuts. Typically, the user will perform the following commands.
- Add a spot (`A`)
- Increase (`E`) / Decrease (`Q`) the radius of the spot
- Select an axis to adjust the radius (`Alt`+`X` / `Alt`+`Y` / `Alt`+`Z`)
- Increase (`Alt`+`E`) / Decrease (`Alt`+`Q`) the radius of the spot along the selected axis (a sphere becomes an ellipsoid)
- Select an axis to rotate (`Alt`+`X` / `Alt`+`Y` / `Alt`+`Z`)
- Rotate the spot around the selected axis counterclockwise (`Alt`+`←`) or clockwise (`Alt`+`→`)
Please put all BDV windows in the same group by clicking the key icon at the top left of each window to synchronize them.
Spots are colored using the Detection coloring mode by default.
You can change the coloring mode from `View > Coloring`.
Status | Color |
---|---|
Accepted | Cyan |
Rejected | Magenta |
Unevaluated | Green |
In the training module, the `Accepted` annotations are converted to foreground labels, and the `Rejected` annotations are converted to background labels. Background labels can also be generated by intensity thresholding, where the threshold is specified by the `auto BG threshold` parameter in the Preferences dialog.
ELEPHANT provides the following shortcut keys for annotating spots.
Action | Shortcut |
---|---|
Accept | 4 |
Reject | 5 |
Info ℹ️ |
For advanced users: Please check here |
---|
Links can be added in the following four ways on the BDV window. Pressing down one of the following keys (keydown) will move you automatically to the next or the previous timepoint (depending on the command, see below):

- Keydown `A` on the highlighted spot, then keyup at the position where you want to add a linked spot in the next timepoint.
- Keydown `L` on the highlighted spot, then keyup on the target annotated spot in the next timepoint.
- Keydown `Shift`+`A` on the highlighted spot, then keyup at the position where you want to add a linked spot in the previous timepoint.
- Keydown `Shift`+`L` on the highlighted spot, then keyup on the target annotated spot in the previous timepoint.
Spots and links that are added manually in this fashion are automatically tagged as `Approved` in the `Tracking` tag set.
To visualize the `Tracking` tag, set the coloring mode to `Tracking` by `View > Coloring > Tracking`.
Info ℹ️ |
In the following workflow, please put all relevant BDV windows in group 1 by clicking the key icon at the top left of each window. |
---|
Open the Preferences dialog by `Plugins > ELEPHANT > Preferences...`.
Change the dataset name to `elephant-demo` (or the name you specified for your dataset).
When you change a value in the settings, a new settings profile is created automatically.
Info ℹ️ |
The profile name can be renamed. |
---|
Press the `Apply` button at the bottom right and close the settings dialog (`OK` or the `x` button at the top right).
Please check the settings parameters table for detailed descriptions about the parameters.
First, you need to initialize a model by `Plugins > ELEPHANT > Detection > Reset Detection Model`.
This command creates a new model parameter file with the name you specified in the settings (`detection.pth` by default) in the `workspace/models/` directory, which lies under the directory where you launched the server.
There are four options for initialization:

- `Versatile`: initialize a model with a versatile pre-trained model
- `Default`: initialize a model with intensity-based self-supervised training
- `From File`: initialize a model from a local file
- `From URL`: initialize a model from a specified URL
Info ℹ️ |
If a specified model is not initialized before prediction or training, ELEPHANT automatically initializes it with a versatile pre-trained model. |
---|
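To confirm that the parameter file was created, you can optionally list the models directory on the server host; the path shown is relative to the directory where the ELEPHANT server was launched.

ls -lh workspace/models/ # the file specified in the settings (detection.pth by default) should appear here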
After initializing a model, you can try a prediction by `Plugins > ELEPHANT > Detection > Predict Spots`.
Info ℹ️ |
Alt +S is a shortcut for prediction |
---|
Based on the prediction results, you can add annotations as described earlier.
Train the model by `Plugins > ELEPHANT > Detection > Train Selected Timepoints`.
Predictions with the updated model should yield better results.
In general, a batch mode is used for training with relatively large amounts of data. For more interactive training, please use the live mode explained below.
In live mode, you can iterate the cycles of annotation, training, prediction and proofreading more frequently.
Start live mode by `Plugins > ELEPHANT > Detection > Live Training`.
During live mode, you can find the text "live mode" on top of the BDV view.
Every time you update the labels (shortcut: `U`), a new training epoch will start with the latest labels in the current timepoint.
To perform prediction with the latest model, run `Predict Spots` (shortcut: `Alt`+`S`) after the model is updated.
The model parameter files are located under `workspace/models` on the server.
If you are using your local machine as a server, these files remain unless you delete them explicitly.
If you are using Google Colab, you may need to save them before terminating the session.
You can download a model parameter file by running `Plugins > ELEPHANT > Detection > Download Detection Model` or `Plugins > ELEPHANT > Linking > Download Flow Model`.
Alternatively, you can make it persistent on your Google Drive by uncommenting the first code cell in the Colab notebook.
You can find pretrained parameter files used in the paper.
The model parameter file can be specified in the `Preferences` dialog, where the file path is relative to `/workspace/models/` on the server.
There are three ways to import pre-trained model parameters:

1. Run `Plugins > ELEPHANT > Detection > Reset Detection Model` and select the `From File` option with the local file.
2. Upload the pre-trained parameters file to a website that provides a public download URL (e.g. GitHub, Google Drive, Dropbox). Run `Plugins > ELEPHANT > Detection > Reset Detection Model` and select the `From URL` option with the download URL.
3. Directly place/replace the file at the specified file path on the server (see the example below).
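For the third option, a minimal sketch using `scp`, assuming you have SSH access to the server host and the default `workspace/models/` layout under the ELEPHANT server directory (host and paths are placeholders to adapt to your setup):

scp detection.pth USERNAME@HOSTNAME:/path/to/elephant-server/workspace/models/detection.pth # copy the parameter file into the server's models directory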
Info ℹ️ |
In the following workflow, please put all relevant BDV windows in group 1 by clicking the key icon at the top left of each window. |
---|
Here, we will load a project from a `.mastodon` file that contains spots data. Please place the file in the same folder as the BDV files (`.h5` and `.xml`).
elephant-demo
├── elephant-demo.h5
├── elephant-demo.xml
└── elephant-demo-with-spots.mastodon
Alternatively, you can complete detection by yourself.
We start with linking using the nearest neighbor algorithm, without flow support.
Please confirm that the `use optical flow for linking` option is off.
For other settings, please check the detailed descriptions.
In the demo dataset, please go to the last timepoint (t = 9).
Run the nearest neighbor linking action by `Alt`+`L` or `Plugins > ELEPHANT > Linking > Nearest Neighbor Linking`.
To open the TrackScheme window, click the button "trackscheme" in the Mastodon main window.
In the following steps, please set the coloring mode to `Tracking` by `View > Coloring > Tracking` in both the TrackScheme window and the BigDataViewer window.
Please turn on the `use optical flow for linking` option.
Please initialize the flow model with the `Versatile` option by `Plugins > ELEPHANT > Linking > Reset Flow Model`.
Run the nearest neighbor linking action by `Alt`+`L` or `Plugins > ELEPHANT > Linking > Nearest Neighbor Linking`.
Using both the BDV window and the TrackScheme window, you can remove/add/modify spots and links to build a complete lineage tree.
Once you finish proofreading a track (or a tracklet), you can tag it as `Approved` in the `Tracking` tag set.
Select all spots and links in the track by `Shift`+`Space`, then run `Edit > Tags > Tracking > Approved`. There is a set of shortcuts to perform tagging efficiently. The shortcut `Y` pops up a small window at the top left of the TrackScheme window, where you can select the tag set, followed by the tag, by pressing a number in the list. For example, `Edit > Tags > Tracking > Approved` corresponds to the shortcut sequence [`Y` > `2` > `1`].
Spots and links tagged with `Approved` will remain unless users remove them explicitly (the `unlabeled` links will be removed at the start of the next prediction). The `Approved` links are also used for training of a flow model.
Info ℹ️ |
If you cannot see the colors as in the video, please make sure that you set the coloring mode to Tracking by View > Coloring > Tracking in the TrackScheme window and the BigDataViewer window. |
---|
Once you have collected a certain amount of link annotations, a flow model can be trained with them by `Plugins > ELEPHANT > Linking > Train Optical Flow`.
Currently, there is only a batch mode for training of a flow model, which works with the annotations in the time range specified in the settings.
If you start training from scratch, it will take a relatively long time for the flow model to converge. Alternatively, you can train a model incrementally, starting from the pretrained model parameters.
| Category | Action | On Menu | Shortcut | Description |
|---|---|---|---|---|
| Detection | Predict Spots | Yes | `Alt`+`S` | Predict spots with the specified model and parameters |
| | Predict Spots Around Mouse | No | `Alt`+`Shift` | Predict spots around the mouse position on the BDV view |
| | Update Detection Labels | Yes | `U` | Update detection labels |
| | Reset Detection Labels | Yes | Not available | Reset detection labels |
| | Start Live Training | Yes | Not available | Start live training |
| | Train Detection Model (Selected Timepoints) | Yes | Not available | Train a detection model with the annotated data from the specified timepoints |
| | Train Detection Model (All Timepoints) | Yes | Not available | Train a detection model with the annotated data from all timepoints |
| | Reset Detection Model | Yes | Not available | Reset a detection model by one of the following modes: `Versatile`, `Default`, `From File` or `From URL` |
| | Download Detection Model | Yes | Not available | Download a detection model parameter file |
| Linking | Nearest Neighbor Linking | Yes | `Alt`+`L` | Perform nearest neighbor linking with the specified model and parameters |
| | Nearest Neighbor Linking Around Mouse | No | `Alt`+`Shift`+`L` | Perform nearest neighbor linking around the mouse position on the BDV view |
| | Update Flow Labels | Yes | Not available | Update flow labels |
| | Reset Flow Labels | Yes | Not available | Reset flow labels |
| | Train Flow Model (Selected Timepoints) | Yes | Not available | Train a flow model with the annotated data from the specified timepoints |
| | Reset Flow Model | Yes | Not available | Reset a flow model by one of the following modes: `Versatile`, `Default`, `From File` or `From URL` |
| | Download Flow Model | Yes | Not available | Download a flow model parameter file |
| Utils | Map Spot Tag | Yes | Not available | Map a spot tag to another spot tag |
| | Map Link Tag | Yes | Not available | Map a link tag to another link tag |
| | Remove All Spots and Links | Yes | Not available | Remove all spots and links |
| | Remove Short Tracks | Yes | Not available | Remove spots and links in the tracks that are shorter than the specified length |
| | Remove Spots by Tag | Yes | Not available | Remove spots with the specified tag |
| | Remove Links by Tag | Yes | Not available | Remove links with the specified tag |
| | Remove Visible Spots | Yes | Not available | Remove spots in the current visible area for the specified timepoints |
| | Remove Self Links | Yes | Not available | Remove accidentally generated links that connect identical spots |
| | Take a Snapshot | Yes | `H` | Take a snapshot in the specified BDV window |
| | Take a Snapshot Movie | Yes | Not available | Take a snapshot movie in the specified BDV window |
| | Import Mastodon | Yes | Not available | Import spots and links from a `.mastodon` file |
| | Export CTC | Yes | Not available | Export tracking results in the Cell Tracking Challenge format. The tracks whose root spots are tagged with `Completed` are exported |
| | Change Detection Tag Set Colors | Yes | Not available | Change the Detection tag set colors (`Basic` or `Advanced`) |
| Analysis | Tag Progenitors | Yes | Not available | Assign the `Progenitor` tags to the tracks whose root spots are tagged with `Completed`. Tags are automatically assigned starting from `1`. Currently, this action supports a maximum of 255 tags |
| | Tag Proliferators | Yes | Not available | Label the tracks with the `Proliferator` tags |
| | Tag Dividing Cells | Yes | Not available | Tag the dividing and divided spots in the tracks |
| | Count Divisions (Entire) | Yes | Not available | Count the number of divisions in a lineage tree and output them as a `.csv` file. In the `Entire` mode, the total number of divisions per timepoint is calculated |
| | Count Divisions (Trackwise) | Yes | Not available | In the `Trackwise` mode, the trackwise number of divisions per timepoint is calculated |
| Window | Client Log | Yes | Not available | Show a client log window |
| | Server Log | Yes | Not available | Show a server log window |
| | Control Panel | Yes | Not available | Show a control panel window |
| | Abort Processing | Yes | `Ctrl`+`C` | Abort the current processing |
| | Preferences... | Yes | Not available | Open a preferences dialog |
The tagging function of Mastodon can provide specific information on each spot. ELEPHANT uses the Tag information for processing.
In the detection workflow, the Detection tag set is used (See below for all provided tag sets available on ELEPHANT).
There are two color modes in the Detection tag set, `Basic` and `Advanced`.
You can switch between them from `Plugins > ELEPHANT > Utils > Change Detection Tag Set Colors`.
Predicted spots and manually added spots are tagged by default as `unlabeled` and `fn`, respectively.
These tags are used for training, where true spots and false spots can have different weights for training.
Highlighted spots can be tagged with one of the `Detection` tags using the shortcuts shown below.
Tag | Shortcut |
---|---|
tp | 4 |
fp | 5 |
tn | 6 |
fn | 7 |
tb | 8 |
fb | 9 |
unlabeled | 0 |
By default, ELEPHANT generates and uses the following tag sets.
| Tag set | Tag | Color | Tag set description | Tag description |
|---|---|---|---|---|
| Detection | tp | ■ cyan | Annotate spots for training and prediction in a detection workflow | true positive; generates nucleus center and nucleus periphery labels |
| | fp | ■ magenta | | false positive; generates background labels with a false weight |
| | tn | ■ red | | true negative; generates background labels |
| | fn | ■ yellow | | false negative; generates nucleus center and nucleus periphery labels with a false weight |
| | tb | ■ orange | | true border; generates nucleus periphery labels |
| | fb | ■ pink | | false border; generates nucleus periphery labels with a false weight |
| | unlabeled | ■ green | | unevaluated; not used for labels |
| Tracking | Approved | ■ cyan | Annotate links for training and prediction in a linking workflow | approved; generates flow labels |
| | unlabeled | ■ green | | unevaluated; not used for flow labels |
| Progenitor | 1-255 | ■ glasbey | Visualize progenitors | assigned by an analysis plugin or manually by a user |
| | unlabeled | ■ invisible | | not assigned; invisible on the view |
| Status | Completed | ■ cyan | Label the status of tracks | completed tracks |
| Division | Dividing | ■ cyan | Annotate the division status of spots | spots about to divide |
| | Divided | ■ yellow | | spots that have just divided |
| | Non-dividing | ■ magenta | | other positive spots |
| | Invisible | ■ invisible | | negative spots are invisible |
| Proliferator | Proliferator | ■ cyan | Annotate the proliferation status of spots | spots in the proliferating lineage tree |
| | Non-proliferator | ■ magenta | | spots in the non-proliferating lineage tree |
| | Invisible | ■ invisible | | undetermined spots are invisible |
| Category | Parameter | Description |
|---|---|---|
| Basic Settings | prediction with patches | If checked, prediction is performed on patches with the size specified below. |
| | patch size x | Patch size along the x axis. Not used if `prediction with patches` is unchecked. |
| | patch size y | Patch size along the y axis. Not used if `prediction with patches` is unchecked. |
| | patch size z | Patch size along the z axis. Not used if `prediction with patches` is unchecked. |
| | number of crops | Number of crops per timepoint per epoch used for training. |
| | number of epochs | Number of epochs for batch training. Ignored in live mode. |
| | time range | Time range (backward) for prediction and batch training. For example, if the current timepoint is `10` and the specified time range is `5`, timepoints `[6, 7, 8, 9, 10]` are used for prediction and batch training. |
| | auto BG threshold | Voxels with a normalized value below this threshold are considered as background in the label generation step. |
| | learning rate | Learning rate for training. |
| | probability threshold | Voxels with a center probability greater than this threshold are treated as the center of the ellipsoid in detection. |
| | suppression distance | If the predicted spot has an existing spot (either `TP`, `FN` or `unlabeled`) within this distance, one of the spots is suppressed. If the existing spot is `TP` or `FN`, the predicted spot is suppressed. If the existing spot is `unlabeled`, the smaller of the two spots is suppressed. |
| | min radius | If one of the radii of the predicted spot is smaller than this value, the spot is discarded. |
| | max radius | Radii of the predicted spot are clamped to this value. |
| | NN linking threshold | In the linking workflow, the length of the link should be smaller than this value. If optical flow is used in linking, the length is calculated as the distance based on the flow-corrected position of the spot. This value is referred to as d_search in the paper. |
| | NN max edges | This value determines the number of links allowed to be created in the linking workflow. |
| | use optical flow for linking | If checked, optical flow estimation is used to support nearest neighbor (NN) linking. |
| | use interpolation for linking | If checked, the missing spots in the link are interpolated, which happens when `NN search neighbors` is greater than 1. |
| | dataset dir | The path of the dataset directory stored on the server. The path is relative to `/workspace/datasets/` on the server. |
| | detection model file | The path of the [state_dict](https://pytorch.org/tutorials/beginner/saving_loading_models.html#what-is-a-state-dict) file for the detection model stored on the server. The path is relative to `/workspace/models/` on the server. |
| | flow model file | The path of the [state_dict](https://pytorch.org/tutorials/beginner/saving_loading_models.html#what-is-a-state-dict) file for the flow model stored on the server. The path is relative to `/workspace/models/` on the server. |
| | detection Tensorboard log dir | The path of the Tensorboard log dir for the detection model stored on the server. The path is relative to `/workspace/logs/` on the server. |
| | flow Tensorboard log dir | The path of the Tensorboard log dir for the flow model stored on the server. The path is relative to `/workspace/logs/` on the server. |
| Advanced Settings | output prediction | If checked, the prediction output is saved as `.zarr` on the server for further inspection. |
| | apply slice-wise median correction | If checked, the slice-wise median value is shifted to the volume-wise median value. This cancels uneven slice-wise intensity distributions. |
| | mitigate edge discontinuities | If checked, discontinuities found in the edge regions of the prediction are mitigated. The required memory size will increase slightly. |
| | rescale x | Rescale the image data along the x axis with this value. |
| | rescale y | Rescale the image data along the y axis with this value. |
| | rescale z | Rescale the image data along the z axis with this value. |
| | training crop size x | Training crop size along the x axis. The smaller of this parameter and the x dimension of the image is used as the actual crop size x. |
| | training crop size y | Training crop size along the y axis. The smaller of this parameter and the y dimension of the image is used as the actual crop size y. |
| | training crop size z | Training crop size along the z axis. The smaller of this parameter and the z dimension of the image is used as the actual crop size z. |
| | batch size | Batch size for training and prediction. |
| | class weight background | Class weight for background in the loss function for the detection model. |
| | class weight border | Class weight for border in the loss function for the detection model. |
| | class weight center | Class weight for center in the loss function for the detection model. |
| | flow dim weight x | Weight for the x dimension in the loss function for the flow model. |
| | flow dim weight y | Weight for the y dimension in the loss function for the flow model. |
| | flow dim weight z | Weight for the z dimension in the loss function for the flow model. |
| | false weight | Labels generated from false annotations (`FN`, `FP`, `FB`) are weighted with this value in the loss calculation during training (relative to true annotations). |
| | center ratio | Center ratio of the ellipsoid used in label generation and detection. |
| | max displacement | Maximum displacement that can be predicted with the flow model. This value is used to scale the output from the flow model. Training and prediction should use the same value for this parameter. If you want to transfer the flow model to another dataset, this value should be kept. |
| | augmentation scale factor base | In training, the image volume is scaled randomly based on this value. For example, if this value is 0.2, the scaling factors for the three axes are randomly picked from the range [0.8, 1.2]. |
| | augmentation rotation angle | In training, the XY plane is rotated randomly based on this value. The unit is degrees. For example, if this value is 30, the rotation angle is randomly picked from the range [-30, 30]. |
| | augmentation contrast | In training, the contrast is modified randomly based on this value. For example, if this value is 0.2, the contrast is randomly picked from the range [0.8, 1]. |
| | NN search depth | This value determines how many timepoints the algorithm searches for the parent spot in the linking workflow. |
| | NN search neighbors | This value determines how many neighbors are considered as candidates for the parent spot in the linking workflow. |
| | Training log intervals | This value specifies how frequently logging takes place during training. |
| | Cache maximum bytes (MiB) | This value specifies the maximum amount of memory used for caching. Caching enables faster data loading. |
| | use memmap | This value specifies whether a memory map is used in data loading. A memory map enables memory-efficient data loading. Memory-mapped files are stored in `workspace/memmaps`, which can grow large as these files accumulate. The user can delete these files when they are not needed. |
| | log file basename | This value specifies the log file basename. `~/.mastodon/logs/client_BASENAME.log` and `~/.mastodon/logs/server_BASENAME.log` will be created and used as log files. |
| Server Settings | ELEPHANT server URL with port number | URL for the ELEPHANT server. It should include the port number (e.g. `http://localhost:8080`). |
| | RabbitMQ server port | Port number of the RabbitMQ server. |
| | RabbitMQ server host name | Host name of the RabbitMQ server (e.g. `localhost`). |
| | RabbitMQ server username | Username for the RabbitMQ server. |
| | RabbitMQ server password | Password for the RabbitMQ server. |
Info ℹ️ |
ELEPHANT provides options for faster and memory-efficient data loading. The caching mechanism directly stores the loaded data in RAM using the Least Recently Used strategy. The user can specify the maximum size of RAM used for caching in the preferences dialog. Memory-map is another optional layer in data loading. It stores an array in a binary format and accesses data chunks only when they are required, enabling memory-efficient data handling. The two options can be used together. |
---|
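If the memory-mapped files under `workspace/memmaps` accumulate, they can be removed when no longer needed; a minimal example, assuming it is run on the server host while the ELEPHANT server is stopped (the exact workspace location depends on your setup):

rm -rf workspace/memmaps/* # remove cached memory-mapped files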
Requirements | |
---|---|
Operating System | Linux-based OS compatible with NVIDIA Container Toolkit |
Docker | Docker with NVIDIA Container Toolkit (see supported versions) |
GPU | NVIDIA CUDA GPU with sufficient VRAM for your data (recommended: 11 GB or higher) |
Storage | Sufficient size for your data (recommended: 1 TB or higher) |
Requirements | |
---|---|
Operating System | Linux-based OS |
Singularity | Singularity (see requirements for NVIDIA GPUs & CUDA) |
GPU | NVIDIA CUDA GPU with sufficient VRAM for your data (recommended: 11 GB or higher) |
Storage | Sufficient size for your data (recommended: 1 TB or higher) |
Info ℹ️ |
The total amount of data can be 10-30 times larger than the original data size when the prediction outputs (optional) are generated. |
---|
Requirements | |
---|---|
Operating System | Linux, Mac or Windows OS |
Java | Java Runtime Environment 8 or higher |
Storage | Sufficient size for your data (Please consider using BigDataServer for the huge data) |
The ELEPHANT client uses the same type of image files as Mastodon. The image data are imported as a pair of HDF5 (`.h5`) and XML (`.xml`) files from BigDataViewer (BDV).
The ELEPHANT server stores image, annotation and prediction data in the Zarr (`.zarr`) format.
Data type | Client | Server |
---|---|---|
Image | HDF5 (`.h5`) | Zarr (`.zarr`) |
Image metadata | XML (`.xml`) | Not available |
Annotation | Mastodon project (`.mastodon`) | Zarr (`.zarr`) |
Prediction | Mastodon project (`.mastodon`) | Zarr (`.zarr`) |
Project metadata | Mastodon project (`.mastodon`) | Not available |
Viewer settings (Optional) | XML (`.xml`) | Not available |
There are three options to set up the ELEPHANT server.
- Docker: this option is recommended if you have a powerful computer that satisfies the server requirements (Docker) with root privileges.
- Singularity: this option is recommended if you can access a powerful computer that satisfies the server requirements (Singularity) as a non-root user (e.g. an HPC cluster).
- Google Cloud: alternatively, you can set up the ELEPHANT server with Google Cloud, a cloud computing service provided by Google. With this option, you don't need a high-end GPU or a Linux machine to start using ELEPHANT's deep learning capabilities.
Info ℹ️ |
DEPRECATED: Google Colab has updated its policy and restricted the use of SSH, which ELEPHANT relies on to establish the connection. Please consider using Google Cloud instead. |
---|
Please check that your computer meets the server requirements.
Install Docker with NVIDIA Container Toolkit.
By default, ELEPHANT assumes you can run Docker as a non-root user.
If you need to run Docker with `sudo`, please set the environment variable `ELEPHANT_DOCKER` as below.
export ELEPHANT_DOCKER="sudo docker"
Alternatively, you can set it at runtime.
make ELEPHANT_DOCKER="sudo docker" bash
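Conversely, if you want to run Docker without `sudo` (the default assumption above), the standard Docker post-installation step is to add your user to the `docker` group; this is general Docker setup, not specific to ELEPHANT:

sudo usermod -aG docker $USER # then log out and back in for the group change to take effect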
Download and extract the latest release of the ELEPHANT server here.
Alternatively, you can clone a repository from GitHub.
git clone https://github.com/elephant-track/elephant-server.git
First, change the directory to the project root.
cd elephant-server
The following command will build a Docker image that integrates all the required modules.
make build
Info ℹ️ |
In the latest version, this step is performed automatically; you do not need to run it manually unless there is a particular reason to do so. |
---|
Please prepare your image data, producing a pair of BigDataViewer `.h5` and `.xml` files, or download the demo data and extract it as below.
The ELEPHANT server deals with images using Zarr. The following command generates the required `.zarr` files from the BigDataViewer `.h5` file.
workspace
├── datasets
│ └── elephant-demo
│ ├── elephant-demo.h5
│ └── elephant-demo.xml
Run the script inside a Docker container.
make bash # run bash inside a docker container
python /opt/elephant/script/dataset_generator.py --uint16 /workspace/datasets/elephant-demo/elephant-demo.h5 /workspace/datasets/elephant-demo
# usage: dataset_generator.py [-h] [--uint16] [--divisor DIVISOR] input output
# positional arguments:
# input input .h5 file
# output output directory
# optional arguments:
# -h, --help show this help message and exit
# --uint16 with this flag, the original image will be stored with
# uint16
# default: False (uint8)
# --divisor DIVISOR divide the original pixel values by this value (with
# uint8, the values should be scale-downed to 0-255)
exit # exit from a docker container
You will find the following results.
workspace
├── datasets
│ └── elephant-demo
│ ├── elephant-demo.h5
│ ├── elephant-demo.xml
│ ├── flow_hashes.zarr
│ ├── flow_labels.zarr
│ ├── flow_outputs.zarr
│ ├── imgs.zarr
│ ├── seg_labels_vis.zarr
│ ├── seg_labels.zarr
│ └── seg_outputs.zarr
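If you prefer to store the images as uint8 instead (smaller files), a hypothetical invocation, run inside the Docker container as above, could scale the 16-bit intensities down to the 0-255 range with the `--divisor` flag; the divisor value of 256 is illustrative:

python /opt/elephant/script/dataset_generator.py --divisor 256 /workspace/datasets/elephant-demo/elephant-demo.h5 /workspace/datasets/elephant-demo # store as uint8, dividing the original pixel values by 256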
Info ℹ️ |
By default, the docker container is launched with volumes, mapping the local workspace/ directory to the /workspace/ directory in the container. The local workspace directory can be set by the ELEPHANT_WORKSPACE environment variable (Default: ${PWD}/workspace ). |
---|
# This is optional
export ELEPHANT_WORKSPACE="YOUR_WORKSPACE_DIR"
make bash
# This is optional
make ELEPHANT_WORKSPACE="YOUR_WORKSPACE_DIR" bash
Info ℹ️ |
Multi-view data is not supported by ELEPHANT. You need to create fused data (e.g. with BigStitcher Fuse) before converting to `.zarr`. |
---|
The ELEPHANT server is accompanied by several services, including Flask, uWSGI, NGINX, redis and RabbitMQ.
These services are organized by Supervisord inside the Docker container, exposing port `8080` for NGINX and port `5672` for RabbitMQ on `localhost`.
make launch # launch the services
Now, the ELEPHANT server is ready.
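As an optional sanity check, you can confirm that the NGINX front end responds on the default port (this assumes the port mapping described above):

curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8080 # prints an HTTP status code if the server is reachable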
Singularity >= 3.6.0 is required. Please check the version of Singularity on your system.
singularity --version
Download and extract the latest release of the ELEPHANT server here.
Alternatively, you can clone a repository from GitHub.
Run the following command in the project root directory, where you can find the `elephant.def` file.
The command builds a Singularity container (`elephant.sif`) and copies `/var/lib/`, `/var/log/` and `/var/run/` from the container to `$HOME/.elephant_binds` on the host.
make singularity-build
The command below starts an instance (see details) named `elephant` using the `elephant.sif` image.
Please set the environment variable `ELEPHANT_WORKSPACE` to the `workspace` directory on your system.
make singularity-launch
Please specify the environment variable `ELEPHANT_GPU` if you want to use a specific GPU device on your system (default: `all`).
ELEPHANT_GPU=0 make singularity-launch
At this point, you will be able to work with the ELEPHANT server. Please follow the instructions for setting up the remote connection.
After exiting the `exec` by `Ctrl`+`C`, please do not forget to stop the `instance`.
make singularity-stop
The ELEPHANT server can be accessed remotely by exposing the ports for NGINX (`8080` by default) and RabbitMQ (`5672` by default).
To establish connections to the server, one option would be to use SSH portforwarding.
You can use the `Control Panel` to establish these connections. Please set the parameters and press the `Add Port Forward` button.
Alternatively, you can use the CLI. Assuming that you can access the computer that launches the ELEPHANT server by `ssh USERNAME@HOSTNAME` (or `ssh.exe USERNAME@HOSTNAME` on Windows), you can forward the ports for ELEPHANT as below.
ssh -N -L 8080:localhost:8080 USERNAME@HOSTNAME # NGINX
ssh -N -L 5672:localhost:5672 USERNAME@HOSTNAME # RabbitMQ
ssh.exe -N -L 8080:localhost:8080 USERNAME@HOSTNAME # NGINX
ssh.exe -N -L 5672:localhost:5672 USERNAME@HOSTNAME # RabbitMQ
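Both forwards can also be combined into a single command (shown for Linux/Mac; use `ssh.exe` on Windows):

ssh -N -L 8080:localhost:8080 -L 5672:localhost:5672 USERNAME@HOSTNAME # NGINX + RabbitMQ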
After establishing these connections, the ELEPHANT client can communicate with the ELEPHANT server just as if it were launched on localhost.
This research was supported by the European Research Council, under the European Union Horizon 2020 programme, grant ERC-2015-AdG #694918. The software is developed in Institut de Génomique Fonctionnelle de Lyon (IGFL) / Centre national de la recherche scientifique (CNRS).
- ELEPHANT client
- ELEPHANT server
- ELEPHANT docs
and other great projects.
Please post feedback and questions to the Image.sc forum. It is important to add the tag `elephant` to your posts so that we can reach you quickly.
Please cite our paper on eLife.
- Sugawara, K., Çevrim, C. & Averof, M. Tracking cell lineages in 3D by incremental deep learning. eLife 2022. doi:10.7554/eLife.69380
@article {Sugawara2022,
author = {Sugawara, Ko and {\c{C}}evrim, {\c{C}}a{\u{g}}r{\i} and Averof, Michalis},
title = {Tracking cell lineages in 3D by incremental deep learning},
year = {2022},
doi = {10.7554/eLife.69380},
abstract = {Deep learning is emerging as a powerful approach for bioimage analysis. Its use in cell tracking is limited by the scarcity of annotated data for the training of deep-learning models. Moreover, annotation, training, prediction, and proofreading currently lack a unified user interface. We present ELEPHANT, an interactive platform for 3D cell tracking that addresses these challenges by taking an incremental approach to deep learning. ELEPHANT provides an interface that seamlessly integrates cell track annotation, deep learning, prediction, and proofreading. This enables users to implement cycles of incremental learning starting from a few annotated nuclei. Successive prediction-validation cycles enrich the training data, leading to rapid improvements in tracking performance. We test the software's performance against state-of-the-art methods and track lineages spanning the entire course of leg regeneration in a crustacean over 1 week (504 time-points). ELEPHANT yields accurate, fully-validated cell lineages with a modest investment in time and effort.},
URL = {https://doi.org/10.7554/eLife.69380},
journal = {eLife}
}