[misc] Add windows powershell scripts #36

Open · wants to merge 2 commits into base: main
3 changes: 3 additions & 0 deletions .gitignore
@@ -20,3 +20,6 @@ data/*.MOV
data/*.mp4
data/colmap_text
data/transforms.json
lightning_logs
*.zip
external
18 changes: 18 additions & 0 deletions README.md
@@ -42,6 +42,11 @@ To reach the best performance, here are the steps to follow:
3. Uncomment `--half2_opt` in the script to enable half2 optimization, then run `./scripts/train_nsvf_lego.sh`. For now, half2 optimization is only supported on Linux with GPU architectures newer than Pascal.


**For Windows users**
```
./scripts/train_nsvf_lego.ps1
```

### 360_v2 dataset

Download the [360 v2 dataset](http://storage.googleapis.com/gresearch/refraw360/360_v2.zip) and unzip it. Please keep the folder name unchanged. The default `batch_size=8192` takes up to 18GB of memory on an RTX3090. Please adjust `batch_size` according to your hardware spec.
@@ -50,6 +55,12 @@ Download [360 v2 dataset](http://storage.googleapis.com/gresearch/refraw360/360_
./scripts/train_360_v2_garden.sh
```

**For Windows users**
```
./scripts/train_360_v2_garden.ps1
```


## Train with your own video

Place your video in the `data` folder and pass the video path to the script. There are several key parameters for producing a sound dataset for NeRF training. For a real scene, setting `scale` to 16 is recommended. `video_fps` determines the number of images generated from the video; typically 150~200 images are sufficient. For a one-minute video, 2 is a suitable value. Running this script will preprocess your video and start training a NeRF out of it:
@@ -58,6 +69,13 @@ Place your video in `data` folder and pass the video path to the script. There a
./scripts/train_from_video.sh -v {your_video_name} -s {scale} -f {video_fps}
```
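As a quick sanity check, the relationship between video length, `video_fps`, and the number of extracted images can be sketched (a rough estimate, assuming frames are sampled uniformly at `video_fps` frames per second):

```python
# Rough estimate of how many images frame extraction produces.
# Assumes uniform sampling at video_fps frames per second of footage.
def estimated_frames(video_seconds: float, video_fps: float) -> int:
    return int(video_seconds * video_fps)

print(estimated_frames(60, 2))   # a one-minute video at video_fps=2 -> 120 images
print(estimated_frames(90, 2))   # a 90-second video -> 180 images, within 150~200
```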

**For Windows users**
```
./scripts/train_from_video.ps1
```
You need to download COLMAP to extract the camera pose information. Download COLMAP from [here](https://github.com/colmap/colmap/releases), rename the unpacked directory to "colmap", and place it in the "external" directory under the project root.
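Before running the script, it can help to verify that COLMAP landed in the expected spot. A minimal pre-flight check (the helper name is hypothetical, but the `external/colmap/COLMAP.bat` path matches what `colmap2nerf.py` looks for on Windows):

```python
import os

# Hypothetical pre-flight check: COLMAP.bat must sit directly under
# external/colmap/ for colmap2nerf.py to find it on Windows.
def colmap_installed(project_root: str) -> bool:
    bat = os.path.join(project_root, "external", "colmap", "COLMAP.bat")
    return os.path.isfile(bat)

print(colmap_installed("."))  # True only after COLMAP is unpacked into external/colmap/
```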


## [Preview] Mobile Deployment

Using [Taichi AOT](https://docs.taichi-lang.org/docs/tutorial), you can easily deploy a NeRF rendering pipeline on any mobile devices!
2 changes: 1 addition & 1 deletion data/colmap2nerf.py
@@ -97,7 +97,7 @@ def run_colmap(args):

# On Windows, if COLMAP isn't found, try automatically downloading it from the internet
if os.name == "nt" and os.system(f"where {colmap_binary} >nul 2>nul") != 0:
colmap_glob = os.path.join(ROOT_DIR, "external", "colmap", "*", "COLMAP.bat")
colmap_glob = os.path.join(ROOT_DIR, "external", "colmap", "COLMAP.bat")
candidates = glob(colmap_glob)
if not candidates:
print("COLMAP not found. Attempting to download COLMAP from the internet.")
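The one-line change above tightens where the script looks for the COLMAP launcher. A small sketch of the difference (paths are illustrative; only the two glob patterns mirror the diff):

```python
import os
from glob import glob

ROOT_DIR = os.path.abspath(".")  # stand-in for the project root used in colmap2nerf.py

# Before: matched a versioned subfolder, e.g. external/colmap/COLMAP-3.x-windows/COLMAP.bat
old_glob = os.path.join(ROOT_DIR, "external", "colmap", "*", "COLMAP.bat")
# After: expects COLMAP.bat directly inside external/colmap/ (the directory the
# README now tells Windows users to create by renaming the COLMAP release folder)
new_glob = os.path.join(ROOT_DIR, "external", "colmap", "COLMAP.bat")

# glob() returns [] when nothing matches, which triggers the download fallback
print(glob(new_glob))
```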
1 change: 1 addition & 0 deletions scripts/train_360_v2_garden.ps1
@@ -0,0 +1 @@
python train.py --root_dir ./360_v2/garden --dataset_name colmap --exp_name garden --downsample 0.25 --no_save_test --num_epochs 20 --scale 16.0 --gui
20 changes: 20 additions & 0 deletions scripts/train_from_video.ps1
@@ -0,0 +1,20 @@
# Put your video in the data/ folder and update VIDEO_FILE with its filename
# SCALE: choose from 1, 4, 8, 16, 64; 16 is recommended for a real scene
# VIDEO_FPS = 2 is suitable for a one-minute video
$VIDEO_FILE = 'video.mp4'
$SCALE = 16
$VIDEO_FPS = 2

echo "video path $VIDEO_FILE"
echo "scale $SCALE"
echo "video fps $VIDEO_FPS"

cd "data"

python colmap2nerf.py --video_in $VIDEO_FILE --video_fps $VIDEO_FPS --run_colmap --aabb_scale $SCALE --images images

Move-Item colmap_sparse sparse
cd ..


python train.py --root_dir data --dataset_name colmap --exp_name custom --downsample 0.25 --num_epochs 20 --scale $SCALE --gui
1 change: 1 addition & 0 deletions scripts/train_nsvf_lego.ps1
@@ -0,0 +1 @@
python train.py --root_dir "./Synthetic_NeRF/Lego" --exp_name Lego --perf --num_epochs 20 --batch_size 8192 --lr 1e-2 --no_save_test --gui --ckpt_path=./ckpts/nsvf/Lego/epoch=19-v2.ckpt --val_only