Failed to train custom data using train_deep_ls.py #12

Open
fangchuan opened this issue Oct 24, 2022 · 1 comment

@fangchuan

Hi, thank you for releasing this fantastic work. I adapted your code to train on my own data (warped from iGibson_obj). Everything was fine until I reached train_deep_ls.py: it seems that Python fails to handle the DataLoader, or any other gradient-related variable, when using the multiprocessing toolkit.

res = pool.map(functools.partial(trainer,
                                 sdf_tree=sdf_tree,
                                 sdf_grid_radius=sdf_grid_radius,
                                 lat_vecs=lat_vecs,
                                 sdf_data=sdf_data,
                                 indices=indices,
                                 cube_size=cube_size,
                                 outer_sum=outer_sum,
                                 outer_lock=outer_lock,
                                 decoder=decoder,
                                 loss_l1=loss_l1,
                                 do_code_regularization=do_code_regularization,
                                 code_reg_lambda=code_reg_lambda,
                                 epoch=epoch),
               enumerate(sdf_grid_indices))

I haven't figured out how to solve this problem. Could you help me, please? @Kamysek @Freephi

Here is the printed logging message:

[screenshots of the error log]

The server used in this experiment is configured as follows:
python: 3.6.13
torch: 1.4.0
cuda: 10.1
os: ubuntu 18.04
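
For reference: multiprocessing.Pool.map has to pickle the functools.partial object together with every bound argument, and CUDA tensors, nn.Module instances, and other objects carrying autograd state often cannot be pickled this way, which matches the failure described above. Below is a minimal, hypothetical sketch of one common workaround, not this repository's fix: pass only plain, picklable data to the workers and build the heavy objects once per worker in a pool initializer (the stand-in decoder, _init_worker, _train_cell, and _LATENT_DIM are assumptions for illustration).

# Hypothetical sketch: keep gradient-carrying objects out of the pickled
# pool.map arguments. Each worker receives only plain Python data; heavy
# objects are constructed once per worker in an initializer.

import functools
import torch
import torch.multiprocessing as mp

_worker_state = {}
_LATENT_DIM = 125  # assumption; use the real latent size


def _init_worker():
    # Runs once in every worker process, so nothing here crosses the pickle boundary.
    _worker_state["decoder"] = torch.nn.Linear(_LATENT_DIM, 1)  # stand-in for the real decoder
    _worker_state["loss_l1"] = torch.nn.L1Loss()


def _train_cell(task, code_reg_lambda):
    idx, grid_index = task
    decoder = _worker_state["decoder"]
    loss_l1 = _worker_state["loss_l1"]
    # ... per-cell optimisation of the latent code would go here ...
    # Return only plain, picklable data to the parent process.
    return idx, grid_index


if __name__ == "__main__":
    # 'spawn' is required when CUDA is initialised in the parent or the workers.
    mp.set_start_method("spawn", force=True)
    sdf_grid_indices = list(range(8))  # stand-in for the real grid indices
    with mp.Pool(processes=2, initializer=_init_worker) as pool:
        res = pool.map(
            functools.partial(_train_cell, code_reg_lambda=1e-4),
            enumerate(sdf_grid_indices),
        )
    print(res)

Whether this restructuring fits train_deep_ls.py is for the maintainers to confirm; the point is only that everything handed to pool.map must survive pickling.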

@sunyuanfu

Hello. I wonder if you have solved this problem? I am having trouble setting up the environment too.
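
For anyone comparing environments, a quick way to confirm the Python/PyTorch/CUDA versions the interpreter actually sees (a generic check, not specific to this repository):

# Generic environment check: prints the versions PyTorch was built against
# and whether CUDA is usable.
import sys
import torch

print("python:", sys.version.split()[0])          # e.g. 3.6.13
print("torch:", torch.__version__)                # e.g. 1.4.0
print("cuda (torch build):", torch.version.cuda)  # e.g. 10.1
print("cuda available:", torch.cuda.is_available())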
