Hydra-GNN

Room classification network training and inference code
This repository contains code to train room-classification networks using 3D scene graphs as input. It is based on the following papers:

  • Talak, Hu, Peng, and Carlone, "Neural Trees for Learning on Graphs" (NeurIPS 2021)
  • Hughes, Chang, Hu, Talak, Abdulhai, Strader, and Carlone, "Foundations of Spatial Perception for Robotics: Hierarchical Representations and Real-time Systems" (IJRR 2024)

If you find this code relevant for your work, please consider citing one or both of these papers. BibTeX entries are provided below:

@inproceedings{talak2021neuraltree,
    author = {Talak, Rajat and Hu, Siyi and Peng, Lisa and Carlone, Luca},
    booktitle = {Advances in Neural Information Processing Systems},
    title = {Neural Trees for Learning on Graphs},
    year = {2021},
}

@article{hughes2024foundations,
    title={Foundations of Spatial Perception for Robotics: Hierarchical Representations and Real-time Systems},
    fullauthor={Nathan Hughes and Yun Chang and Siyi Hu and Rajat Talak and Rumaisa Abdulhai and Jared Strader and Luca Carlone},
    author={N. Hughes and Y. Chang and S. Hu and R. Talak and R. Abdulhai and J. Strader and L. Carlone},
    journal={The International Journal of Robotics Research},
    doi={10.1177/02783649241229725},
    url={https://doi.org/10.1177/02783649241229725},
    year={2024},
}

Installation

Make a virtual environment:

# if you don't have virtualenv already
# pip3 install --user virtualenv
cd path/to/env
python3 -m virtualenv --download -p $(which python3) hydra_gnn

Activate the virtual environment and install:

cd path/to/installation
git clone [email protected]:MIT-SPARK/Hydra-GNN.git
cd Hydra-GNN
source path/to/env/hydra_gnn/bin/activate
pip install -e .

The training code primarily relies on PyTorch and PyTorch Geometric.

While a default install should provide everything necessary, you may need to make sure the versions of these packages align with each other and are compatible with your CUDA version. You can either specify the desired versions in setup.cfg or install these libraries manually.

This code has been tested with:

  • PyTorch 2.0.1, PyTorch Geometric 2.3.1, and CUDA 11.7
  • PyTorch 1.12.1, PyTorch Geometric 2.2.0, and CUDA 11.3
  • PyTorch 1.8.1, PyTorch Geometric 2.0.4, and CUDA 10.2
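
A quick sanity check (illustrative, not part of the repo) is to print the installed versions and compare them against the combinations above:

# print the installed PyTorch / PyTorch Geometric / CUDA versions
python3 -c "import torch; print('PyTorch', torch.__version__, '| CUDA', torch.version.cuda)"
python3 -c "import torch_geometric; print('PyTorch Geometric', torch_geometric.__version__)"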

Dataset Organization

All datasets and resources (such as the pre-trained word2vec model) live in the ./data folder. It is organized as follows:

  • data
    • GoogleNews-vectors-negative300.bin
    • Stanford3DSceneGraph
    • house_files (can be obtained from the Habitat MP3D dataset following the download instructions here)
    • mp3d_benchmark
    • mpcat40.tsv
    • tro_graphs_2022_09_24

Steps to get started for training:

  1. Obtain the word2vec model from here
  2. Obtain the house files for each MP3D scene from the MP3D dataset and extract them to the house_files directory
  3. Obtain the Hydra-produced scene graphs from here
  4. (optional) Obtain the Stanford3D tiny split from here
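
After downloading, a quick check (illustrative, not part of the repo) that the required files landed where the tree above expects them:

# paths from the dataset tree above; ls exits non-zero if any are missing
ls -d data/GoogleNews-vectors-negative300.bin \
      data/house_files \
      data/tro_graphs_2022_09_24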

Training

Before training, you must construct the relevant PyTorch Geometric dataset. For Stanford3D, you can do that via

python scripts/prepare_Stanford3DSG_for_training.py

and for MP3D you can do that via

python scripts/prepare_hydra_mp3d_training.py --repartition_rooms 

To train with the Neural Tree algorithm, you need to decompose the input graphs into H-trees by passing --save_htree. For MP3D, --repartition_rooms replaces the room nodes from Hydra's room segmentation algorithm with ground-truth rooms and establishes room-object connectivity using detected object geometry. You can view other possible arguments for both scripts with --help.
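
For example, to prepare the MP3D dataset for Neural Tree training with ground-truth room partitions, the two flags above should combine as:

python scripts/prepare_hydra_mp3d_training.py --repartition_rooms --save_htree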

Training for a specific dataset can be run via

python scripts/train_Stanford.py

or

python scripts/train_mp3d.py
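
Assuming the training scripts expose their options through the same argparse-style interface as the preparation scripts, you can list them via:

python scripts/train_mp3d.py --help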

Running with Hydra

We provide pre-trained models here.

First, start the GNN model via

./bin/room_classification_server server path/to/pretrained/model path/to/hydra/label/space

For the uhumans2 office, this would look like

./bin/room_classification_server server path/to/pretrained/model path/to/hydra/config/uhumans2/uhumans2_office_typology.yaml

Then, start Hydra with the use_zmq_interface:=true argument. For the uhumans2 office scene, this would look like:

roslaunch hydra_ros uhumans2.launch use_zmq_interface:=true

Notebooks

There are several development notebooks available under the notebooks directory. These require jupytext to view and use:

pip3 install jupytext
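
If you prefer working with .ipynb files directly, jupytext can convert a notebook script (the file name below is a placeholder):

jupytext --to notebook notebooks/<notebook_name>.py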

Authorship

  • Primary author: Siyi Hu
  • H-tree construction was written by Rajat Talak
  • Example inference server for Hydra was written by Nathan Hughes
