"fg_pxl" corresponds to the location of seeds sampled in MDU. "fg_real_pixels" corresponds to the locations of projected lidar points. Here, locations are represented in the form of (u, v, d), where u and v are pixel coordinates and d is the depth.
Hello, have you trained this model? I'm using an A800 but still don't have enough memory to train. If you have trained it, could you tell me why I might be running out of memory? Thank you.
fg_info = img_metas[sample_idx]['foreground2D_info']
fg_pxl = fg_info['fg_pixels'][view_idx]
fg_depth = torch.from_numpy(fg_pxl[:, 2]).to(device)  # line 207 in MSMDFusionDetector
and
fg_real_pixels = img_metas[i]['foreground2D_info']['fg_real_pixels']
depth = fg_real_pixels[:, 2]
The code above shows that you use two sources of depth information when generating foreground2D. Could you tell me the difference between them?
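For readers following along, a hypothetical sketch of how the `foreground2D_info` entry in `img_metas` might be laid out, with the two depth sources side by side. The array values and per-view list structure here are invented for illustration; the actual contents in the repository may differ.

```python
import numpy as np

# invented example data: one (N, 3) array of (u, v, d) rows per camera view
img_metas = [{
    'foreground2D_info': {
        'fg_pixels':      [np.array([[900., 500.,  9.8]])],  # seeds sampled in MDU
        'fg_real_pixels': [np.array([[900., 500., 10.0]])],  # projected LiDAR points
    }
}]

view_idx = 0
fg_info = img_metas[0]['foreground2D_info']
seed_depth = fg_info['fg_pixels'][view_idx][:, 2]        # depths of MDU-sampled seeds
real_depth = fg_info['fg_real_pixels'][view_idx][:, 2]   # depths of real LiDAR hits
print(seed_depth, real_depth)
```

The key point is that both fields share the (u, v, d) layout, so the third column is a depth in either case; they differ only in where the points come from.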