
Optimized support function for large meshes #64382

Merged Oct 27, 2022 (1 commit)

Conversation

peastman
Contributor

This is the second piece of addressing #48587. get_support() and project_range() are both currently implemented with full scans over all vertices. For a large mesh, that's very slow. I used the approach suggested in https://graphics.stanford.edu/courses/cs468-01-fall/Papers/cameron.pdf, which is to search along the surface, moving between adjacent vertices to increase the support value.

To make this fast, we want a good starting point to search from. I precompute the extreme vertices along 26 directions. That gives us a short list of vertices to start with, one of which should be close to the true support point.
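The surface walk described above can be sketched as follows. This is a minimal illustration of the hill-climbing idea from the Cameron paper, assuming an adjacency list per vertex; the names (`Vec3`, `get_support_hill_climb`) are illustrative, not the PR's actual code:

```cpp
#include <vector>

struct Vec3 {
    double x, y, z;
    double dot(const Vec3 &o) const { return x * o.x + y * o.y + z * o.z; }
};

// Hill-climbing support search: starting from a candidate vertex (e.g. the
// precomputed extreme vertex nearest the query direction), repeatedly move
// to whichever neighbor has a larger projection onto the direction.
// `neighbors[i]` lists the vertices sharing an edge with vertex i.
int get_support_hill_climb(const std::vector<Vec3> &vertices,
                           const std::vector<std::vector<int>> &neighbors,
                           const Vec3 &dir, int start) {
    int best = start;
    double best_proj = vertices[best].dot(dir);
    bool improved = true;
    while (improved) {
        improved = false;
        for (int n : neighbors[best]) {
            double p = vertices[n].dot(dir);
            if (p > best_proj) {  // step uphill along the surface
                best = n;
                best_proj = p;
                improved = true;
            }
        }
    }
    return best;  // local maximum == global support point on a convex mesh
}
```

On a convex shape the projection onto a direction has no spurious local maxima over the edge graph, which is why the climb terminates at the true support point.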

To benchmark it, I measured the time for 1 million calls to get_support() along random directions. I did that for meshes of varying size with the current implementation and with this PR. Here are the times in seconds for 1 million calls, or equivalently the average time in microseconds for a single call.

| Vertices | Original (s) | This PR (s) |
| --- | --- | --- |
| 10 | 0.048 | 0.065 |
| 52 | 0.170 | 0.132 |
| 452 | 1.069 | 0.175 |
| 4986 | 10.614 | 0.344 |

When combined with #63702, the total cost of collision detection scales very weakly with the number of vertices, roughly as O(sqrt(N)) in the limit of very large meshes. For two meshes that each have several thousand vertices, the total time to compute a collision is measured in microseconds.

@peastman peastman requested a review from a team as a code owner August 14, 2022 03:45
@Chaosus Chaosus added this to the 4.0 milestone Aug 14, 2022
@Chaosus
Member

Chaosus commented Aug 14, 2022

Please squash the commits as required by our pipeline.

@reduz
Member

reduz commented Aug 14, 2022

This is very interesting. I was planning to do a different type of optimization for these kinds of shapes, which should hopefully be faster as it's O(1).

My idea is to use octahedral encoding for all possible directions. This way, the support function maps a normalized 3D direction to a 2D position in a 2D texture, then it only needs to check the 4 surrounding points, and one of them will be the correct result. There are some quirks in the implementation that are not obvious if you are not familiar with this type of encoding, which I would need to look up if anyone is interested in implementing it.
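For context, the mapping referred to here is the standard octahedral encoding. A minimal sketch, assuming a unit-length input; this is the general technique, not code from the PR or from Godot:

```cpp
#include <cmath>

struct V3 { double x, y, z; };
struct V2 { double u, v; };

// Octahedral encoding: project a unit 3D direction onto the unit octahedron
// (L1 normalization), then unfold the lower hemisphere so the whole sphere
// maps onto the square [-1,1]^2. A precomputed support table stored as a 2D
// image would be indexed with these coordinates.
V2 oct_encode(V3 n) {
    double inv = 1.0 / (std::fabs(n.x) + std::fabs(n.y) + std::fabs(n.z));
    double u = n.x * inv, v = n.y * inv;
    if (n.z < 0.0) {  // unfold the bottom half of the octahedron
        double fu = (1.0 - std::fabs(v)) * (u >= 0.0 ? 1.0 : -1.0);
        double fv = (1.0 - std::fabs(u)) * (v >= 0.0 ? 1.0 : -1.0);
        u = fu;
        v = fv;
    }
    return {u, v};
}
```

The quirks mentioned are mostly at the seams: directions with z < 0 wrap around the edges of the square, so the "4 surrounding texels" lookup has to handle the fold correctly there.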

@reduz
Member

reduz commented Aug 14, 2022

In any case, if you are interested in physics engine discussion, I would be happy to invite you to the #physics channel of our rocketchat instance. https://chat.godotengine.org

@peastman
Contributor Author

Is your idea to encode the support function as an image? I think that's equivalent to replacing the high-resolution mesh with a simplified one whose vertices correspond to the pixels in the image. Or you could think of it as applying a lowpass filter to the support function. It should work well for smooth meshes, less well for ones with fine details. Think of a long, thin object like a spear: if the tip of the spear doesn't exactly line up with a pixel, it will get shortened.

Another possibility is to combine the two approaches. Use the image to get the initial guess for the vertex, then follow with the hill climbing stage to find the exact support point. That would trade off more work at object creation time (depending on the size of the image) to reduce the number of iterations of hill climbing when a collision happens.

In any case, if you are interested in physics engine discussion, I would be happy to invite you to the #physics channel of our rocketchat instance.

That would be great!

servers/physics_3d/godot_shape_3d.h (review thread, outdated, resolved)
servers/physics_3d/godot_shape_3d.cpp (review thread, outdated, resolved)
@reduz
Member

reduz commented Sep 22, 2022

For #63702 I will most likely do a combination of Gauss map discarding of edge pairs and GJK-EPA for convexes with too many vertices (which is already integrated in the codebase and works well) instead of MPR, because I fear, based on previous experience and experience with Bullet, that its approximate collision may introduce problems in some corner cases. But this one, with the changes noted, I think should be fine, and it is likely simpler/better than what I wanted to do.

@peastman
Contributor Author

peastman commented Oct 2, 2022

I went ahead and fixed the above in the more verbose way. Let me know if it looks ok now.

@akien-mga
Member

akien-mga commented Oct 12, 2022

CI is failing due to sign-compare warnings raised by both GCC and Clang:

```
servers/physics_3d/godot_shape_3d.cpp:827:19: error: comparison of integers of different signs: 'int' and 'unsigned int' [-Werror,-Wsign-compare]
        if (vertex_count > 3 * extreme_vertices.size()) {
            ~~~~~~~~~~~~ ^ ~~~~~~~~~~~~~~~~~~~~~~~~~~~
servers/physics_3d/godot_shape_3d.cpp:858:20: error: comparison of integers of different signs: 'int' and 'unsigned int' [-Werror,-Wsign-compare]
        for (int i = 0; i < extreme_vertices.size(); i++) {
                        ~ ^ ~~~~~~~~~~~~~~~~~~~~~~~
servers/physics_3d/godot_shape_3d.cpp:865:30: error: comparison of integers of different signs: 'unsigned int' and 'int' [-Werror,-Wsign-compare]
        if (extreme_vertices.size() == mesh.vertices.size()) {
            ~~~~~~~~~~~~~~~~~~~~~~~ ^  ~~~~~~~~~~~~~~~~~~~~
servers/physics_3d/godot_shape_3d.cpp:875:21: error: comparison of integers of different signs: 'int' and 'unsigned int' [-Werror,-Wsign-compare]
                for (int i = 0; i < vertex_neighbors[best_vertex].size(); i++) {
                                ~ ^ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
servers/physics_3d/godot_shape_3d.cpp:1129:30: error: comparison of integers of different signs: 'unsigned int' and 'int' [-Werror,-Wsign-compare]
        if (extreme_vertices.size() < mesh.vertices.size()) {
            ~~~~~~~~~~~~~~~~~~~~~~~ ^ ~~~~~~~~~~~~~~~~~~~~
```
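The usual remedies for `-Wsign-compare` are to cast one side or to match the loop index type to the container's size type. A sketch on hypothetical stand-ins for the code above, not the actual Godot source:

```cpp
#include <cstddef>
#include <vector>

// Fix 1: cast the unsigned size to int so both comparison operands have the
// same signedness. Safe here only because the list is known to be small.
bool needs_hill_climb(int vertex_count, const std::vector<int> &extreme_vertices) {
    return vertex_count > 3 * static_cast<int>(extreme_vertices.size());
}

// Fix 2: declare the loop index with an unsigned type matching the
// container's size type instead of int.
int count_entries(const std::vector<int> &extreme_vertices) {
    int n = 0;
    for (size_t i = 0; i < extreme_vertices.size(); i++) {  // size_t, not int
        n++;
    }
    return n;
}
```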

Please also squash the commits and this should be ready to merge.

@fabriceci
Contributor

I did a basic test, adding convex shapes (with low and high vertex counts) to a scene and checking the FPS, but I don't see any improvement.

Could you please share your test project?

@peastman
Contributor Author

A test scene is attached. It involves a collision between two objects that each have a few hundred vertices. It's unplayably slow either with or without this change, but much faster with the change than without.

Because of the way the SAT code works (it calls project_range() rather than get_support()), the new algorithm is only used for meshes with more than about 80 vertices. I originally wrote this to be combined with #63702, which is support based and therefore provides more benefit even for smaller meshes. I believe @reduz is currently planning to implement GJK instead, which is also support based so it should similarly benefit.

gd4test4.zip.txt

@akien-mga
Member

I ran the workflows to see if it passes CI, but please note that this still needs to be squashed before it can be merged.
See PR workflow for instructions.

@peastman
Contributor Author

There was one more signed/unsigned comparison that needed fixing first. I just made the change. I can squash it once all tests are passing.

GitHub has a really useful feature that can do that automatically. In the repository settings, go to the General tab and make sure "Allow squash merging" is checked. Once it is, you can click the arrow next to the "Merge" button for a PR and select "Squash and merge". A benefit of doing it that way is that it generates a name for the squash commit containing the title and number of the PR, so when you look through the commit history, every commit becomes a hyperlink to the corresponding PR. Just something to consider for the future.

@akien-mga
Member

We don't use the squash-and-merge feature GitHub provides for various reasons. We prefer that PRs are squashed before merging so that the history we merge matches the one in the PR.

@peastman
Contributor Author

Ok, no problem. I'll squash once we get a clean build.

Is there a flag I can pass to scons so it will report warnings when building locally?

@akien-mga
Member

Yes, you can use warnings=extra (and werror=yes to treat as errors), or dev_mode=yes which would set both (+ verbose=yes and tests=yes).
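For reference, the invocations would look like this (assuming a Godot source checkout and SCons installed):

```shell
# Build with extra warnings, treating warnings as errors (what CI checks).
scons warnings=extra werror=yes

# Or enable the developer defaults, which set both of the above
# plus verbose=yes and tests=yes.
scons dev_mode=yes
```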

@peastman
Contributor Author

Thanks! No problems reported when I build with that option, so hopefully CI will be happy this time.

@peastman
Contributor Author

I guess cl is a bit more picky than gcc...

@peastman
Contributor Author

All tests passed, and I squashed the commits. It should be ready to merge.

@clayjohn clayjohn merged commit aa989cb into godotengine:master Oct 27, 2022
@clayjohn
Member

Thanks, and congratulations on your first merged contribution! I'm really looking forward to the physics optimizations you have in mind.
