This is the Vision System (Object Detection & Recognition) for the EU H2020 project RoMaNs.


A Brief Description

  • odr: object detection and recognition framework.
  • camera: camera simulator and configurations.
  • dcnn: Deep Convolutional Neural Network architectures and training scripts.
  • matlab_toolbox: MATLAB scripts for data pre-processing.

Prerequisites

Dependencies: ROS (with catkin, wstool, and rosdep), Caffe with pycaffe (for the DCNN models), iai_kinect2 (pulled in via the .rosinstall below), and MATLAB (for the pre-processing scripts).

Install romans_stack

  1. Create a catkin workspace:
$ mkdir -p ~/catkin_ws/src
$ cd ~/catkin_ws/src
  2. Create a .rosinstall file in ~/catkin_ws/src and copy in the following:
- git: {local-name: romans_stack, uri: 'https://github.com/sunliamm/romans_stack', version: master}
- git: {local-name: iai_kinect2, uri: 'https://github.com/code-iai/iai_kinect2', version: master}
  3. Fetch the repositories and install their dependencies:
$ wstool update
$ cd ..
$ rosdep install --from-paths src --ignore-src -r -y
  4. Compile:
$ catkin_make -DCMAKE_BUILD_TYPE=Release
  5. Add the ROS workspace to your environment by appending the setup script to ~/.bashrc:
$ echo "source ~/catkin_ws/devel/setup.bash" >> ~/.bashrc

Run the Demo

Download the demo data (the demo.bag rosbag) and the trained Caffe model (deploy.proto, romans_model_fast.caffemodel) from https://drive.google.com/open?id=0B0jMesRKPfw9MGM4ekxiV2M1RWs. The demo assumes you download the 'romans' folder and put it in your home directory (~); adjust the paths below if you place it elsewhere.
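
If you want a quick sanity check that the downloaded model loads outside ROS, the pycaffe sketch below pushes a random blob through it. This is a minimal sketch, assuming the input blob is named 'data' and the model sits under ~/romans/models/fast; verify both against deploy.proto and your own paths.

    # Minimal pycaffe smoke test (not part of the stack). The input blob
    # name 'data' is an assumption; verify it against deploy.proto.
    import numpy as np
    import caffe

    caffe.set_mode_cpu()  # use caffe.set_mode_gpu() if CUDA is available
    net = caffe.Net('/home/your_username/romans/models/fast/deploy.proto',
                    '/home/your_username/romans/models/fast/romans_model_fast.caffemodel',
                    caffe.TEST)

    # Push a random image-shaped blob through to confirm the net runs.
    net.blobs['data'].data[...] = np.random.rand(*net.blobs['data'].data.shape)
    out = net.forward()
    print({name: blob.shape for name, blob in out.items()})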

  1. Get the RGB-D stream from the rosbag:
$ roslaunch camera kinect2_simulator.launch
$ cd ~/romans/data && rosbag play --clock demo.bag

Or get the RGB-D stream from a live Kinect2:

$ roslaunch camera kinect2.launch
  2. Run the detection node:
$ rosrun odr detection_server_kinect2
  3. Run the recognition node:
$ rosrun odr inference_end2end.py /home/your_username/romans/models/fast
  4. Run the visualization node:
$ rosrun odr visualization_server
  5. Run the client:
$ rosrun odr odr_test.py kinect2
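
To consume the pipeline's output from your own code instead of the provided odr_test.py client, a minimal rospy subscriber is sketched below. The topic name and message type are hypothetical placeholders; run rostopic list while the demo is up to discover the actual interface.

    #!/usr/bin/env python
    # Minimal rospy subscriber sketch. '/odr/recognition_result' and the
    # String message type are hypothetical; check `rostopic list` and
    # `rostopic info` for the real topic and type.
    import rospy
    from std_msgs.msg import String

    def callback(msg):
        rospy.loginfo('recognition result: %s', msg.data)

    if __name__ == '__main__':
        rospy.init_node('odr_result_listener')
        rospy.Subscriber('/odr/recognition_result', String, callback)
        rospy.spin()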

Run the Semi-supervised Demo on the Washington RGB-D Dataset

  1. Download the dataset: http://rgbd-dataset.cs.washington.edu/dataset/rgbd-dataset_eval/

  2. Create the experiment: go to matlab_toolbox and run

    $ run script_create_experiment.m

    then split the data into labelled and unlabelled sets:

    $ run slipt_labelled_unlabelled.m
    
  3. Run label propagation (this takes several hours; a conceptual sketch follows this list):

    $ cd ~/catkin_ws/src/romans_stack/odr/washington
    $ sh all_in_one.sh
    $ cd ~/catkin_ws/src/romans_stack/odr
    $ sh ./washington/all_in_one2.sh
    
  4. Train the DCNN with the automatically labelled examples: go to matlab_toolbox, create the index for Caffe, then launch training:

    $ run script_create_index_for_caffe.m
    $ cd ~/catkin_ws/src/romans_stack/dcnn/washington/semi_supervised
    $ sh train.sh
    
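Step 3 spreads labels from the small labelled split to the unlabelled split before the DCNN is retrained on the result. The scripts above implement the method from the reference below; purely as a conceptual illustration (not the repository's algorithm), a k-nearest-neighbour propagation over feature vectors looks like this:

    # Conceptual k-NN label propagation sketch, NOT the repository's
    # algorithm: each unlabelled sample takes the label predicted from its
    # k nearest labelled neighbours in feature space.
    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier

    def propagate_labels(feats_labelled, labels, feats_unlabelled, k=5):
        """Return pseudo-labels for the unlabelled samples."""
        knn = KNeighborsClassifier(n_neighbors=k)
        knn.fit(feats_labelled, labels)
        return knn.predict(feats_unlabelled)

    # Random features stand in for DCNN descriptors here.
    rng = np.random.RandomState(0)
    pseudo = propagate_labels(rng.rand(100, 64), rng.randint(0, 5, 100),
                              rng.rand(20, 64))
    print(pseudo)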

Programming Style

This implementation follows:

ROS C++ style: http://wiki.ros.org/CppStyleGuide

Python REP8 style: http://www.ros.org/reps/rep-0008.html

Reference

Li Sun, Cheng Zhao, Rustam Stolkin. Weakly-supervised DCNN for RGB-D Object Recognition in Real-World Applications Which Lack Large-scale Annotated Training Data. arXiv preprint.
