Network details and setup instructions can be found at https://neat-eo.pink. It is advisable to walk through the tutorial first.
The xView2 dataset can be downloaded from https://xview2.org/dataset (login required).
Extract the contents of the downloaded .tar file:
tar -xvf <file.tar>
Place the extracted 'images' and 'labels' folders in the 'data/xview' folder. All data releases are combined for training and testing the model.
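If the releases (e.g. train.tar and the additional nepal-flooding images mentioned below) are extracted into separate folders, a sketch like the following can merge them into 'data/xview'. The release folder names here are assumptions; adjust them to your local layout.

```python
# Minimal sketch (not part of the repository): merge the 'images' and 'labels'
# folders from each extracted xView2 release into data/xview.
import shutil
from pathlib import Path

RELEASES = [Path("train"), Path("nepal-flooding")]  # hypothetical extraction folders
TARGET = Path("data/xview")

for release in RELEASES:
    for subdir in ("images", "labels"):
        dest = TARGET / subdir
        dest.mkdir(parents=True, exist_ok=True)
        for src in (release / subdir).glob("*"):
            if src.is_file():
                # copy2 preserves timestamps; file names are assumed to be unique
                shutil.copy2(src, dest / src.name)
```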
The training dataset can be created by executing the following command:
python neat-eo/preprocess_xview.py --config config.toml --crop 512 512
The above command creates the dataset according to the configuration and splits each 1024 x 1024 image into four 512 x 512 tiles.
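To make the effect of --crop concrete, the sketch below shows how a 1024 x 1024 image decomposes into four non-overlapping 512 x 512 tiles. This is only an illustration of the tiling; it is not the code inside preprocess_xview.py.

```python
# Illustration of the tiling performed by --crop 512 512.
from PIL import Image

def crop_tiles(path, width=512, height=512):
    img = Image.open(path)
    tiles = []
    for top in range(0, img.height, height):
        for left in range(0, img.width, width):
            # PIL crop box is (left, upper, right, lower)
            tiles.append(img.crop((left, top, left + width, top + height)))
    return tiles

# Example: a 1024 x 1024 xView2 image yields len(tiles) == 4
# tiles = crop_tiles("data/xview/images/some_image.png")
```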
`preprocess_xview.py` accepts the command-line arguments described below:

    usage: preprocess_xview.py [-h] --config CONFIG --crop WIDTH HEIGHT

    optional arguments:
      -h, --help           Show this help message and exit (NEEDS TO BE ADDED)
      --config CONFIG      Path to config file
      --crop WIDTH HEIGHT  Crops images into smaller tiles of the specified width
                           and height (significantly increases processing time)
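For reference, the interface above maps naturally onto argparse, which also provides -h/--help automatically. The sketch below is illustrative only and does not reproduce the actual parser in preprocess_xview.py.

```python
# Sketch of how the command-line interface above could be wired with argparse.
import argparse

parser = argparse.ArgumentParser(description="Create the xView2 training dataset")
parser.add_argument("--config", required=True, help="Path to config file")
parser.add_argument(
    "--crop",
    nargs=2,
    type=int,
    metavar=("WIDTH", "HEIGHT"),
    required=True,
    help="Crops images into smaller tiles of the specified width and height",
)
args = parser.parse_args()  # argparse adds -h/--help automatically
```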
Stored on SharePoint:

- train.tar, as downloaded from https://xview2.org/dataset
- Images corresponding to the nepal-flooding disaster, which were not part of train.tar.
Mean IOU: 0.6927478022575819
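For context, mean IoU here is the per-class intersection over union averaged over the classes present in the masks. A minimal sketch of that computation (not the repository's evaluation code) is shown below.

```python
# Sketch: mean IoU from predicted and ground-truth class masks.
import numpy as np

def mean_iou(pred, label, num_classes):
    ious = []
    for c in range(num_classes):
        pred_c, label_c = (pred == c), (label == c)
        union = np.logical_or(pred_c, label_c).sum()
        if union == 0:  # class absent in both masks: skip it
            continue
        inter = np.logical_and(pred_c, label_c).sum()
        ious.append(inter / union)
    return float(np.mean(ious))
```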
Sample results (for each example: input Image, ground-truth Label, model Prediction, and Diff map).