[python/viewer] Add support of user-specified extra cameras (rgb and depth). #826
Conversation
duburcqa commented Jul 14, 2024 (edited)
@mwulfman This is just a POC. Applying filters to make the depth map more realistic is still necessary. Do you have any reference or even source code about this topic? It would be very helpful.
If it can help you, here's what I started to draft. It lacks filtering capabilities on the output images. Besides the approach (using a dedicated class versus adding it to the viewer), I believe it is very similar to what you've done. I've been considering using cv2 directly for the filtering. Capturing the frame takes about the same amount of time as in your version (1-2 ms).
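For the post-processing question, here is a minimal sketch of the kind of filtering I had in mind, applied with cv2 and numpy directly on the captured depth image. Everything in it (blur kernel, noise level, dropout probability) is an assumption chosen for illustration, not a validated sensor model:

```python
import cv2
import numpy as np

def degrade_depth_map(depth: np.ndarray,
                      noise_std: float = 0.01,
                      dropout_prob: float = 0.02) -> np.ndarray:
    """Make a clean simulated depth map look more like a real sensor output.

    Assumed input: float32 array of metric depths, shape (H, W).
    """
    out = depth.astype(np.float32, copy=True)
    # Smooth out the perfectly sharp synthetic edges
    out = cv2.GaussianBlur(out, ksize=(3, 3), sigmaX=0.0)
    # Additive Gaussian noise roughly proportional to the depth value
    out += np.random.normal(scale=noise_std, size=out.shape).astype(np.float32) * out
    # Randomly invalidated pixels, as returned by real depth cameras
    mask = np.random.random(out.shape) < dropout_prob
    out[mask] = 0.0
    return out
```

Something along these lines could be applied right after capturing the frame, before feeding the image to the learning pipeline.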
Force-pushed from 5d5a2c9 to f2d2875
I made some improvements. It should now be twice as fast, i.e. about 1.25 ms to render the depth map. I will see if it is possible to further optimise the rendering pipeline, but I'm afraid it is unlikely.
I have an unreliable setup where computation of the depth map takes only 570 µs on my machine. If I find a way to achieve this speed without breaking anything else, then using depth maps in RL may be an option! Edit: I found a reliable way 🎉 @mwulfman it would be nice if you could benchmark it on a machine with an Nvidia GPU. Edit 2: After some in-depth profiling, this appears to be the absolute limit, because 90% of the time is now spent on copying the result from the GPU to the CPU. I don't think it is possible to speed up this operation, but I will have a look. The good thing is that decreasing the resolution now improves the timing, e.g. it takes 390 µs for a 64 x 64 depth map.
Force-pushed from ef25e78 to 27e68aa
… and depth map). Significantly speed-up both offscreen and onscreen rendering.
* [core] Fix robot serialization issue. (#821)
* [core] Minor improvement periodic Perlin process and periodic stair ground. (#799)
* [core] 'PeriodicGaussianProcess' and 'PeriodicFourierProcess' are now differentiable. (#799)
* [core] Fix negative time support for all existing random processes. (#799)
* [core] Add N-dimension Perlin processes. (#799) (#823)
* [core] Add gradient computation for all Perlin processes. (#799) (#823) (#825)
* [core] Make all Perlin processes faster and copy-able. (#799)
* [core] Add Perlin ground generators. (#799)
* [core] Replace MurmurHash3 by xxHash32 which is faster. (#824)
* [core] Make gradient computation optional for heightmap functions. (#824)
* [jiminy_py] Fix 'tree.unflatten_as' mixing up key order for 'gym.spaces.Dict'. (#819)
* [python/simulator] Consistent keyword arguments between 'Simulator.build' and 'Simulator.add_robot'. (#821)
* [python/viewer] Fix MacOS support. (#822)
* [python/viewer] Add support of user-specified extra cameras (rgb and depth). (#826)
* [python/viewer] Significantly speed-up both offscreen and onscreen rendering for Panda3D. (#826)
* [gym/common] More generic stacking quantity. (#812)
* [gym/common] Add termination condition abstraction. (#812)
* [gym/common] Add quantity shift and drift tracking termination conditions. (#812)
* [gym/common] Add support of termination composition in pipeline environments. (#812)
* [gym/common] Add base roll/pitch termination condition. (#813)
* [gym/common] Add base relative height termination condition. (#813)
* [gym/common] Add foot collision termination condition. (#813)
* [gym/common] More generic actuated joint kinematic quantity. (#814)
* [gym/common] Add multi-ary operator quantity. (#814)
* [gym/common] Add safety limits termination condition. (#814)
* [gym/common] Add robot flying termination condition. (#815)
* [gym/common] Add power consumption termination condition. (#816)
* [gym/common] Add ground impact force termination condition. (#816)
* [gym/common] Add base odometry pose drift tracking termination condition. (#817)
* [gym/common] Add motor positions shift tracking termination condition. (#817)
* [gym/common] Add relative foot odometry pose shift tracking termination conditions. (#820)
* [gym/common] Add unit test checking that observation wrappers preserve key ordering. (#820)
* [gym/common] Fix quantity hash collision issue in quantity manager. (#821)
* [gym/common] Refactor quantity management to dramatically improve its performance. (#821)
* [gym/common] Add 'order' option to 'AdditiveReward'. (#821)
* [misc] Fix missing compositions documentation. (#812)
---------
Co-authored-by: Mathias Wulfman <[email protected]>
Just saw your message. Do you have a reliable script that I can run to benchmark the depth map acquisition speed?
Yes, execute the script provided in the PR description, then run %timeit Viewer.capture_frame(camera_name="depth"). PS: You must install jiminy 1.8.8 or any commit that was merged after this PR.
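Outside IPython, the same measurement can be reproduced with the standard timeit module. This is only a sketch: it assumes the setup script from the PR description has already been run and has registered an extra camera named "depth", and the import path below is an assumption.

```python
import timeit

from jiminy_py.viewer import Viewer  # assumed import path

# Mirror `%timeit Viewer.capture_frame(camera_name="depth")` from the comment above
timings = timeit.repeat(
    lambda: Viewer.capture_frame(camera_name="depth"),
    repeat=5, number=200)
print(f"best: {min(timings) / 200 * 1e6:.0f} µs per capture")
```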
I am getting the following error after installing jiminy 1.8.8 and trying to run the script in an IPython session:
Looks like there is a small argument-inversion typo.
Arf damn, the merged code is broken, what a shame... I dirty-fixed the snippet in the PR description. It should be enough to run the benchmark!
Here are the results on an Nvidia RTX 3060 GPU:
Congrats on getting such a speed-up!
Nice! These results are encouraging, thanks :) Apparently, the performance is the same as running on Intel integrated graphics, which makes sense because the computations are too small to take advantage of the available resources. If you have time, I'm curious to see if the timing changes after increasing the resolution:
or even
200x200:
500x500:
Ok perfect, just as I expected :) From these results, I think that with a powerful GPU it should not be a problem to generate the depth map for 10-20 environments in parallel without slowdown, as long as the resolution is small enough. Hopefully it is now fast enough to run learning algorithms with it, but the post-processing is still missing!
# https://docs.panda3d.org/1.10/python/programming/camera-control/perspective-lenses  # noqa: E501  # pylint: disable=line-too-long
lens = PerspectiveLens()
if is_depthmap:
    lens.set_fov(50.0)  # field of view angle [0, 180], 40° by default
I'm relooking at this. Shouldn't these arguments be part of the method's arguments?
add_camera(self, name: str, is_depthmap: bool, window_size: Tuple[int, int], lens_near_far: Tuple[float, float], field_of_view: Union[float, Tuple[float, float]]) -> None:
I find it quite impractical to have to change these after adding the camera, especially because changing these parameters requires accessing Viewer._backend_obj.render (which you have to handle carefully in this context). What do you think @duburcqa ?
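For illustration, here is a rough sketch of what exposing these parameters at camera-creation time could look like on the Panda3D side. The helper name, parameter names and default values are assumptions derived from the signature proposed above, not the merged API:

```python
from typing import Tuple, Union

from panda3d.core import PerspectiveLens

def make_camera_lens(is_depthmap: bool,
                     lens_near_far: Tuple[float, float] = (0.02, 5.0),
                     field_of_view: Union[float, Tuple[float, float]] = 50.0
                     ) -> PerspectiveLens:
    """Build the lens of a user-specified extra camera (hypothetical helper)."""
    lens = PerspectiveLens()
    # Near/far clipping planes, in meters (defaults are arbitrary placeholders)
    lens.set_near_far(*lens_near_far)
    # Field of view in degrees; Panda3D accepts a single angle or (horizontal, vertical)
    if isinstance(field_of_view, tuple):
        lens.set_fov(*field_of_view)
    else:
        lens.set_fov(field_of_view)
    return lens
```

add_camera could then simply forward its keyword arguments to such a helper, instead of requiring the caller to go through Viewer._backend_obj.render afterwards.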
Yes, there are still rough edges that need to be polished. I merged this in a rush because I had to move on to something else, but these should indeed be moved to input arguments.
See #826 (comment)