
ServiceNow completed its acquisition of Element AI on January 8, 2021. All references to Element AI in the materials that are part of this project should refer to ServiceNow.

LCFCN - ECCV 2018 (Try in a Colab)

Where are the Blobs: Counting by Localization with Point Supervision

[Paper][Video]

Make a segmentation model learn to count and localize objects by adding a single line of code: instead of applying a cross-entropy loss on dense per-pixel labels, apply the LCFCN loss on point-level annotations (one labeled pixel per object).

Usage

pip install git+https://github.com/ElementAI/LCFCN

from lcfcn import lcfcn_loss

# obtain a CxHxW logits mask from any segmentation model
logits = seg_model.forward(images)

# compute the loss given 'points' as an HxW mask (one labeled pixel per object)
loss = lcfcn_loss.compute_loss(points=points, probs=logits.sigmoid())

loss.backward()
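
The points argument above is just an HxW mask with a single nonzero pixel per object. A minimal sketch of building it, assuming the annotations arrive as (row, col) coordinates (the coordinates below are made up for illustration):

import torch

# hypothetical annotations: one (row, col) coordinate per object
coords = [(14, 30), (52, 71), (90, 12)]

H, W = 128, 128
points = torch.zeros(H, W, dtype=torch.long)
for y, x in coords:
    points[y, x] = 1  # exactly one labeled pixel per object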

Predicted Object Locations
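
At inference time, the paper obtains object locations as the blobs (connected components) of the thresholded foreground probability map, and the count is the number of blobs. A minimal sketch of that post-processing; the 0.5 threshold and the center-of-mass localization are illustrative choices here, not necessarily the repo's exact implementation:

import numpy as np
from scipy import ndimage
from skimage import measure

# toy HxW probability map; in practice use the trained model's output,
# e.g. probs = logits.sigmoid()[0, 0].detach().cpu().numpy()
probs = np.zeros((64, 64))
probs[10:14, 10:14] = 0.9
probs[40:45, 30:36] = 0.8

mask = probs > 0.5                       # threshold into a binary foreground mask
blobs = measure.label(mask)              # connected components = predicted blobs
count = int(blobs.max())                 # number of blobs = predicted count

# one location per blob: its center of mass (illustrative choice)
locations = ndimage.center_of_mass(mask, blobs, range(1, count + 1))
print(count, locations)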

Experiments

1. Install dependencies

pip install -r requirements.txt

This command installs pydicom and the Haven library, which helps manage the experiments.

2. Download Datasets

3. Train and Validate

python trainval.py -e trancos -d <datadir> -sb <savedir_base> -r 1
  • <datadir> is where the dataset is located.
  • <savedir_base> is where the experiment weights and results will be saved (see the sketch after this list).
  • -e trancos loads the TRANCOS training hyper-parameters defined in exp_configs.py.
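
After training, each experiment's weights and metrics land under <savedir_base>. A minimal sketch for inspecting a finished run programmatically, assuming Haven's usual convention of a hash-named folder per experiment containing exp_dict.json and score_list.pkl (an assumption based on typical Haven projects, not confirmed by this repo):

import os
from haven import haven_utils as hu

savedir_base = "<savedir_base>"  # same path passed via -sb

# each experiment lives in a folder named by the hash of its config (assumed layout)
for exp_hash in os.listdir(savedir_base):
    savedir = os.path.join(savedir_base, exp_hash)
    exp_dict = hu.load_json(os.path.join(savedir, "exp_dict.json"))
    score_list = hu.load_pkl(os.path.join(savedir, "score_list.pkl"))
    print(exp_dict.get("dataset"), score_list[-1].get("val_mae"))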

4. View Results

4.1 Launch Jupyter from terminal

> jupyter nbextension enable --py widgetsnbextension --sys-prefix
> jupyter notebook

4.2 Run the following from a Jupyter cell

from haven import haven_jupyter as hj
from haven import haven_results as hr

# pretty tables when running in Colab
try:
    %load_ext google.colab.data_table
except Exception:
    pass

# path to where the experiments got saved (same as -sb above)
savedir_base = '<savedir_base>'

# filter experiments (None keeps all of them)
filterby_list = None

# get experiments
rm = hr.ResultManager(savedir_base=savedir_base,
                      filterby_list=filterby_list,
                      verbose=0)

# dashboard variables
title_list = ['dataset', 'model']
y_metrics = ['val_mae']

# launch dashboard
hj.get_dashboard(rm, vars(), wide_display=True)

This script outputs a dashboard for comparing the experiments, plotting metrics such as val_mae across runs.

Citation

If you find the code useful for your research, please cite:

@inproceedings{laradji2018blobs,
  title={Where are the blobs: Counting by localization with point supervision},
  author={Laradji, Issam H and Rostamzadeh, Negar and Pinheiro, Pedro O and Vazquez, David and Schmidt, Mark},
  booktitle={Proceedings of the European Conference on Computer Vision (ECCV)},
  pages={547--562},
  year={2018}
}
