
Issues with up_ratio #33

Open
aniqueakhtar opened this issue Jan 16, 2021 · 5 comments

Comments

@aniqueakhtar

aniqueakhtar commented Jan 16, 2021

I am trying to train the model with up_ratio=8, but I realized there are a number of errors in the code that make it impossible to train for that up_ratio.

E.g., in model.py, line 34:
self.input_y = tf.placeholder(tf.float32, shape=[self.opts.batch_size, int(4*self.opts.num_point),3])
Maybe this should be:
self.input_y = tf.placeholder(tf.float32, shape=[self.opts.batch_size, int(self.opts.up_ratio*self.opts.num_point),3])

Similarly, in model.py, line 253:

      for i in range(round(math.pow(self.opts.up_ratio, 1 / 4)) - 1):
          self.pred_pc = Gen(self.pred_pc)

I do not understand this second issue. What is the purpose of this loop? Should it even be here?
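For what it's worth, here is a quick check of what that expression actually evaluates to for a few ratios (this is just my reading of the loop, not the authors' stated intent). For up_ratio=8 it yields one extra Gen pass on top of the first, which would chain two passes of the generator:

```python
import math

# Number of *extra* Gen passes the loop in model.py would run
# for a given up_ratio: range(round(math.pow(up_ratio, 1 / 4)) - 1)
def extra_passes(up_ratio):
    return round(math.pow(up_ratio, 1 / 4)) - 1

for r in (4, 8, 16):
    print(f"up_ratio={r}: {extra_passes(r)} extra pass(es)")
# up_ratio=4: 0 extra pass(es)
# up_ratio=8: 1 extra pass(es)
# up_ratio=16: 1 extra pass(es)
```

So for up_ratio=4 the loop body never runs, but for up_ratio=8 it runs once, which may be exactly why training with up_ratio=8 misbehaves.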

I also found this issue in data_loader.py, line 36:
num_4X_point = int(opts.num_point*4)

Any help would be appreciated.
Thanks

@adenrteixeira

I'm also trying for a different up_ratio. Were you able to do it?

@aniqueakhtar
Author

Yes. I got it to work.
Looking back at my code now, I see I used this in model.py:

self.input_y = tf.placeholder(tf.float32, shape=[self.opts.batch_size, int(self.opts.up_ratio*self.opts.num_point),3])

I trained multiple versions of PU-GAN with different upsampling ratios.

During testing, I would edit lines 22, 23, and 24 in pu_gan.py. I would also edit the configs.py file for each upsampling ratio.

I also found another mistake during the testing phase, in lines 262, 263, and 264. The code measures the number of output points for the first point cloud and then assumes that all the other point clouds being fed in are the same size. So if your first point cloud was 2k points with 4x upsampling, the testing would output 8k points; even when you then feed it an 8k-point cloud, the output would still be 8k points. So you should compute the number of output points inside the loop.
I added these lines after line 270, while commenting out lines 262, 263, and 264.

self.opts.num_point = pc.shape[0]
out_point_num = int(self.opts.num_point*self.opts.up_ratio)
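To make the idea concrete, here is a minimal sketch of that fix outside of TensorFlow (the function name and loop structure are mine, assumed from the description above): the output size is computed per cloud, inside the loop, rather than once from the first cloud:

```python
import numpy as np

def output_sizes(point_clouds, up_ratio):
    """Expected output size per input cloud, computed inside the loop
    instead of once from the first cloud's size."""
    sizes = []
    for pc in point_clouds:
        num_point = pc.shape[0]                # per-cloud input size
        out_point_num = int(num_point * up_ratio)
        sizes.append(out_point_num)
    return sizes

# A 2k-point cloud and an 8k-point cloud at 4x should give 8k and 32k outputs:
clouds = [np.zeros((2048, 3)), np.zeros((8192, 3))]
print(output_sizes(clouds, 4))  # [8192, 32768]
```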

I am not sure what other changes I made to the code since it has been a long time.

Feel free to send me an email and I can share my code (a bit messy). We can discuss this work further.
I was also able to compare PU-GAN against my own point cloud upsampling method, called PU-Dense.

@adenrteixeira

adenrteixeira commented Feb 28, 2022

Hey!! Thank you for the response!

I tried to do what you've done, but I still get this error:

KeyError: "Unable to open object (object 'poisson_2560' doesn't exist)"

It's because of the h5 file. Since the key is not 4*256 = 1024 (the one that is available) but 10*256 = 2560, I got this error? Did you change something in the data_loader too?

@aniqueakhtar
Author

Just comparing the two files, I can't see any changes.
I trained it on my own dataset, which I generated with code that looked similar to this:

import h5py
import glob
import open3d as o3d
import numpy as np
import os

data_loc = './ShapeNet/'
save_path = 'train/data_8x.h5'

data_files = sorted(glob.glob(data_loc + '*.h5'))

N = 256   # points per sparse (input) patch
r = 8     # upsampling ratio, so each dense patch has N*r = 2048 points

poisson_256 = []
poisson_2048 = []
for i, m in enumerate(data_files):
    if i % 500 == 0:
        print(i, '  /  ', len(data_files))
    coords = h5py.File(m, 'r')['data'][:, :3]

    # Build a KD-tree so we can extract a patch around a random seed point.
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(coords)
    pcd_tree = o3d.geometry.KDTreeFlann(pcd)

    # Pick one random seed point and take its N*r nearest neighbors as the dense patch.
    n = np.random.randint(0, len(coords), 1)[0]
    [_, idx, _] = pcd_tree.search_knn_vector_3d(coords[n], N * r)
    pc = coords[idx]
    if pc.shape[0] != N * r:
        print(i)
        print(len(idx))
        continue

    # Normalize the patch into the unit cube.
    pc -= pc.min(0)
    pc = pc / pc.max()

    # Randomly subsample the dense patch to get the sparse input patch.
    pc_down = pc[np.random.randint(0, len(pc), N)]

    poisson_2048.append(pc)
    poisson_256.append(pc_down)

poisson_2048 = np.stack(poisson_2048)
poisson_256 = np.stack(poisson_256)

os.makedirs(os.path.dirname(save_path), exist_ok=True)
h5f = h5py.File(save_path, 'w')
h5f.create_dataset('poisson_2048', data=poisson_2048)
h5f.create_dataset('poisson_256', data=poisson_256)
h5f.close()

So basically I randomly chose a single point on each point cloud to extract a single patch. I believe I had 24k point clouds, so I ended up with 24k patches.
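To check whether your own file has the keys the loader expects, a quick sanity check like this might help (the file name here is mine, and the tiny arrays just stand in for the real patches; the layout mirrors the script above with N=256, r=8):

```python
import h5py
import numpy as np

# Tiny stand-in for the generated file: same key layout, fewer patches.
N, r, n_patches = 256, 8, 4
dense = np.random.rand(n_patches, N * r, 3).astype(np.float32)
sparse = dense[:, np.random.randint(0, N * r, N), :]

with h5py.File("train_data_check.h5", "w") as f:
    f.create_dataset("poisson_%d" % (N * r), data=dense)
    f.create_dataset("poisson_%d" % N, data=sparse)

# data_loader.py looks datasets up by name, so a mismatch between the
# configured num_point * up_ratio and the stored key is exactly what
# produces "Unable to open object (object 'poisson_2560' doesn't exist)".
with h5py.File("train_data_check.h5", "r") as f:
    print(sorted(f.keys()))           # ['poisson_2048', 'poisson_256']
    print(f["poisson_2048"].shape)    # (4, 2048, 3)
```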

@adenrteixeira

Thank you so much!!!
