
Merge branch 'main' into 240-support-of-automated-release-process
ioangatop committed Mar 19, 2024
2 parents 7b501a8 + 9d7b357 commit 3cb86ce
Showing 7 changed files with 13 additions and 13 deletions.
File renamed without changes.
2 changes: 1 addition & 1 deletion docs/datasets/mhist.md
@@ -58,7 +58,7 @@ Please create a root folder, e.g. `mhist`, and download all the files there, whi
We work with the splits provided by the data source. Since no "validation" split is provided, we use the "test" split as validation split.

- Train split: `annotations.csv` :: "Partition" == "train"
- - Valdation split: `annotations.csv` :: "Partition" == "test"
+ - Validation split: `annotations.csv` :: "Partition" == "test"

| Splits | Train | Validation |
|----------|-----------------|--------------|
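A minimal pandas sketch of the split logic described above — the filename and the `"Partition"` column come from the bullet points, while everything else about the schema is an assumption:

```python
import pandas as pd

# Sketch only: derive the splits from the "Partition" column as described above.
# The path and any columns beyond "Partition" are assumptions.
df = pd.read_csv("mhist/annotations.csv")
train_df = df[df["Partition"] == "train"]
val_df = df[df["Partition"] == "test"]  # the source's "test" split doubles as validation
print(len(train_df), len(val_df))
```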
2 changes: 1 addition & 1 deletion docs/datasets/total_segmentator.md
@@ -1,6 +1,6 @@
# TotalSegmentator

- The TotalSegmentator dataset is an radiology image-segmentation dataset with 1228 3D images and corresponding masks with 117 different anatomical structures. It can be used for segmentation and multilabel classification tasks.
+ The TotalSegmentator dataset is a radiology image-segmentation dataset with 1228 3D images and corresponding masks with 117 different anatomical structures. It can be used for segmentation and multilabel classification tasks.

## Raw data

4 changes: 2 additions & 2 deletions docs/index.md
@@ -90,7 +90,7 @@ For more details on the FM-backbones and instructions to replicate the results,

*Note that the current version of eva implements a task- and model-independent, fixed default setup following the standard evaluation protocol proposed by [1] and described in the table below. We selected this approach to prioritize reliable, robust and fair FM-evaluation while staying in line with common literature. Future versions are planned to support cross-validation and hyper-parameter tuning to find the optimal setup and achieve the best possible performance on the implemented downstream tasks.*

- With the FM as input, *eva* computes embeddings for all WSI patches which are then used as input to train a downstream head consisting of a single linear layer in a supervised setup for each of the benchmark datasets. We use early stopping with a patience of 5% of the maximal number of epochs.
+ With a provided FM, *eva* computes embeddings for all input images (WSI patches), which are then used to train a downstream head consisting of a single linear layer in a supervised setup for each of the benchmark datasets. We use early stopping with a patience of 5% of the maximal number of epochs.
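As a rough illustration of this protocol — not eva's actual implementation — a linear probe over precomputed embeddings looks like the following sketch, with the embedding dimension and class count chosen arbitrarily:

```python
import torch

# Sketch of the protocol above: a frozen FM has already produced the patch
# embeddings; only a single linear layer is trained on top of them.
embeddings = torch.randn(1024, 384)    # 1024 patches, 384-dim embeddings (illustrative)
labels = torch.randint(0, 4, (1024,))  # 4 classes (illustrative)

head = torch.nn.Linear(384, 4)  # the entire downstream model: one linear layer
optimizer = torch.optim.SGD(
    head.parameters(), lr=0.01, momentum=0.9, weight_decay=0.0,  # settings from the table below
)

for step in range(100):
    loss = torch.nn.functional.cross_entropy(head(embeddings), labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```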

| Setting                 | Value                     |
|-------------------------|---------------------------|
@@ -104,7 +104,7 @@ With the FM as input, *eva* computes embeddings for all WSI patches which are th
| **Base learning rate** | 0.01 |
| **Learning Rate** | [Base learning rate] * [Batch size] / [Base batch size] |
| **Max epochs** | [Number of samples] * [Number of steps] / [Batch size] |
- | **Early stopping** | 10% * [Max epochs] |
+ | **Early stopping** | 5% * [Max epochs] |
| **Optimizer** | SGD |
| **Momentum** | 0.9 |
| **Weight Decay** | 0.0 |
2 changes: 1 addition & 1 deletion docs/user-guide/advanced/replicate_evaluations.md
@@ -1,6 +1,6 @@
# Replicate evaluations

- To produce the evaluation results presented [here](../index.md), you can run *eva* with the settings below.
+ To produce the evaluation results presented [here](../../index.md#evaluation-results), you can run *eva* with the settings below.

Make sure to replace `<task>` in the commands below with `bach`, `crc`, `mhist` or `patch_camelyon`.
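One way to run all four tasks back to back is a small driver script like the sketch below — the config path is a placeholder, so substitute the paths used in the actual commands:

```python
import subprocess

# Placeholder config path -- replace it with the paths from the commands in this guide.
for task in ["bach", "crc", "mhist", "patch_camelyon"]:
    subprocess.run(
        ["eva", "fit", "--config", f"configs/vision/<your-fm>/{task}.yaml"],
        check=True,  # abort on the first failing evaluation
    )
```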

12 changes: 6 additions & 6 deletions docs/user-guide/tutorials/evaluate_resnet.md
@@ -2,7 +2,7 @@

If you read [How to use eva](../getting-started/how_to_use.md) and followed the Tutorials to this point, you might ask yourself why you would not always use the *offline* workflow to run a complete evaluation. An *offline*-run stores the computed embeddings and runs faster than the *online*-workflow, which computes a backbone-forward pass in every epoch.

- One use case for the *online*-workflow is the evaluation of a supervised ML model that does not rely on an backbone/head architecture. To demonstrate this, lets train a ResNet 18 from [Pytoch Image Models (timm)](https://timm.fast.ai/).
+ One use case for the *online*-workflow is the evaluation of a supervised ML model that does not rely on a backbone/head architecture. To demonstrate this, let's train a ResNet 18 from [PyTorch Image Models (timm)](https://timm.fast.ai/).

To do this we need to create a new config-file:

@@ -21,18 +21,18 @@ Now let's adapt the new `bach.yaml`-config to the new model:
path: timm.create_model
arguments:
  model_name: resnet18
-  num_classes: &NUM_CLASSES 2
+  num_classes: &NUM_CLASSES 4
  drop_rate: 0.0
  pretrained: false
```
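To double-check these arguments outside of *eva*, the same backbone can be built directly with timm — a sanity check, not a step of the tutorial:

```python
import timm

# Build the model exactly as the config arguments above describe it.
model = timm.create_model(
    model_name="resnet18",
    num_classes=4,  # BACH has 4 classes, hence &NUM_CLASSES 4 above
    drop_rate=0.0,
    pretrained=False,
)
print(model.fc)  # classifier head: Linear(in_features=512, out_features=4)
```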
- To reduce training time, lets overwrite some of the default parameters. In the terminal where you run ***eva***, set:
+ To reduce training time, let's overwrite some of the default parameters. In the terminal where you run *eva*, set:
```
export OUTPUT_ROOT=logs/resnet/bach
- export MAX_STEPS=20
- export LR_VALUE=0.1
+ export MAX_STEPS=50
+ export LR_VALUE=0.01
```
Now train and evaluate the model by running:
```
eva fit --config configs/vision/resnet18/bach.yaml
```
- Once the run is complete, take a look at the results in `logs/resnet/bach/<session-id>/results.json`. How does the performance compare to the results observed in the previous tutorials?
+ Once the run is complete, take a look at the results in `logs/resnet/bach/<session-id>/results.json` and inspect the training curves in TensorBoard with `tensorboard --logdir logs/resnet/bach`. How does the performance compare to the results observed in the previous tutorials?
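If you prefer the terminal over TensorBoard, a short helper can dump the metrics — it makes no assumption about the JSON schema and simply pretty-prints whatever the run wrote:

```python
import json
import pathlib

# Pretty-print every run's results.json under the log directory.
for results in pathlib.Path("logs/resnet/bach").glob("*/results.json"):
    print(results)
    print(json.dumps(json.loads(results.read_text()), indent=2))
```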
4 changes: 2 additions & 2 deletions docs/user-guide/tutorials/offline_vs_online.md
@@ -12,7 +12,7 @@ If you have not yet downloaded the BACH data to your machine, open `configs/visi

### 1. Compute the embeddings

- First, lets use the `predict`-command to download the data and compute embeddings. In this example we use a randomly initialized `dino_vits16` as backbone.
+ First, let's use the `predict`-command to download the data and compute embeddings. In this example we use a randomly initialized `dino_vits16` as backbone.

Open a terminal in the folder where you installed *eva* and run:
```
@@ -37,7 +37,7 @@ Once the session is complete, verify that:

Now we can use the `fit`-command to evaluate the FM on the precomputed embeddings.

- To ensure a quick run for the purpose of this exercise, lets overwrite some of the default parameters. In the terminal where you run *eva*, set:
+ To ensure a quick run for the purpose of this exercise, let's overwrite some of the default parameters. In the terminal where you run *eva*, set:
```
export MAX_STEPS=20
export LR_VALUE=0.1
