
Projecting the mesh to 2d correctly in Python #140

Closed

kylemcdonald opened this issue Jun 24, 2017 · 15 comments
@kylemcdonald commented Jun 24, 2017

First, thanks so much for this great library! :)

I'm trying to project the mesh returned by eos back onto the original image. I can see there are examples of doing this in C++ but I'm working on Python for now. What I've noticed is that the mesh always seems to be centered on the image.

https://gist.github.com/kylemcdonald/fadf8e8778d14a918bd8275d2807e283

My suspicion is that either:

  1. The viewport I should be using to convert from homogeneous coordinates to screen space is actually the landmark bounding box rather than the image bounding box. I tried this and it gets the points closer, but it doesn't look correct to me.
  2. I'm missing something else in terms of point projection.

I looked into the eos source to try and find an example, and checked the glm::project function, but couldn't find any clear indication of what I'm doing wrong. If you have any advice I would really appreciate it. Thanks!

@patrikhuber (Owner) commented Jun 25, 2017

Thank you very much, and thanks for the feedback!

I'm actually working on improving the Python bindings and pip package at the moment, but it'll be at least a few weeks. In any case, what you're trying to do is in principle trivial, and I've done it in Python in the past - as you mentioned, there's probably just a small mistake somewhere; let's find it.

You're correct that the 3D mesh is always centered - that's by design, it lives in the 3D model space. It should end up at the correct location by applying the modelview, projection and viewport transforms (in that order). Exactly what you did, in principle.

If you could check the following:

  • Do you apply the transformations in the correct order? I.e. let's say a vertex v is a column vector (made homogeneous with w = 1); then it would be Viewport * Projection * Modelview * v. v can also be a row vector, in which case I think you have to flip the order. (This is, I think, what you do in your project_points function.)

  • I think your viewport transformation, and specifically the flipping, may not be correct. If your image origin is at the top-left, like it is in OpenCV, you need to flip the y axis, but not the x axis - I think you flip both? See here how the viewport is defined for that setup, note the minus:

    inline glm::vec4 get_opencv_viewport(int width, int height)
    {
        return glm::vec4(0, height, width, -height);
    }

Maybe the easiest thing to do is construct a 4x4 viewport matrix like I do here for the Matlab bindings: https://github.com/patrikhuber/eos/blob/master/matlab/include/mexplus_eos_types.hpp#L496-L503
And then just apply this matrix like I described above. I'm almost certain this will immediately work and is really easy to try.
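
As a rough illustration (the helper names here are illustrative, not part of the eos API, and it assumes column vectors with modelview and projection already available as 4x4 NumPy arrays), the whole chain could look like this in NumPy:

    import numpy as np

    def get_opencv_viewport(width, height):
        # Image origin at the top-left (OpenCV): flip the y axis only - note the minus.
        return np.array([0, height, width, -height], dtype=float)

    def get_viewport_matrix(width, height):
        # 4x4 viewport transform for column vectors (translation in the last column).
        x0, y0, w, h = get_opencv_viewport(width, height)
        V = np.eye(4)
        V[0, 0] = 0.5 * w
        V[0, 3] = 0.5 * w + x0
        V[1, 1] = 0.5 * h
        V[1, 3] = 0.5 * h + y0
        V[2, 2] = 0.5
        V[2, 3] = 0.5
        return V

    def project_vertex(v, modelview, projection, width, height):
        # Viewport * Projection * Modelview * v, with v made homogeneous (w = 1).
        p = get_viewport_matrix(width, height) @ projection @ modelview @ np.append(v, 1.0)
        return p[:2] / p[3]  # divide by w; with an orthographic projection w stays 1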

Let me know whether that works, it really should!

@patrikhuber patrikhuber changed the title Projecting the mesh to 2d correctly Projecting the mesh to 2d correctly in Python Jun 25, 2017
@headupinclouds

Here is a small C++ application that builds an explicit viewport * projection * modelview matrix, along the lines @patrikhuber suggests above, for 3D -> 2D wireframe projection. It is a refactoring of some of the eos samples, using C++ equivalents of the same dlib models and calls you are using.

https://github.com/headupinclouds/hunter_eos_example/blob/master/eos-dlib-test.cpp#L60-L66

static void draw_wireframe(cv::Mat &image, const eos::core::Mesh& mesh, const GLMTransform &transform, const cv::Scalar &colour)
{
    const auto &modelview = transform.modelview;
    const auto &projection = transform.projection;
    const auto &viewport = transform.viewport;
    
#if EOS_OPTIMIZE_PROJECTION
    // http://glasnost.itcarlow.ie/~powerk/GeneralGraphicsNotes/projection/viewport_transformation.html
    const float x = viewport[0] + (viewport[2]/2.f);
    const float y = viewport[1] + (viewport[3]/2.f);
    glm::mat4 S = glm::scale(glm::mat4(1.0f), {viewport[2]/2.f, viewport[3]/2.f, 0.5});
    glm::mat4 T = glm::translate(glm::mat4(1.0f), {x, y, 0.5f});
    glm::mat4 M = T * S;
    std::cout << "viewport = " << glm::to_string(viewport) << std::endl;
    std::cout << "viewport_matrix = " << glm::to_string(M) << std::endl;
    const auto portProjModelView = (M * projection * modelview);
#endif
    
    std::vector<cv::Point2f> points(mesh.vertices.size());
    for(int i = 0; i < mesh.vertices.size(); i++)
    {
        const auto& v = mesh.vertices[i];
#if EOS_OPTIMIZE_PROJECTION
        // glm::project() effectively applies: Viewport * Projection * Modelview
        //
        // glm::vec4 q1, q0 = modelViewProj * p0;
        // q1 = q0 / q0.w;
        // q1 = (q1 * 0.5f) + 0.5f;
        // q1[0] = q1[0] * viewport[2] + viewport[0];
        // q1[1] = q1[1] * viewport[3] + viewport[1];
        glm::vec4 p = portProjModelView * glm::vec4(v[0], v[1], v[2], 1.0); p /= p.w;
#else
        const auto p = glm::project({ v[0], v[1], v[2] }, modelview, projection, viewport);
#endif
        points[i] = {p.x, p.y};
    }
    // ... drawing of the mesh edges from the projected points is omitted here ...
}

Adding an explicit viewport matrix (as above) to the Python code should help, e.g. in def project_points(points, modelview, projection, width, height). For the eos/examples/data/image_0010.png sample image, the viewport parameters and matrix from the code above print as follows:

    viewport = vec4(0.000000, 1024.000000, 1280.000000, -1024.000000)
    viewport_matrix = mat4x4((640.000000, 0.000000, 0.000000, 0.000000), (-0.000000, -512.000000, -0.000000, -0.000000), (0.000000, 0.000000, 0.500000, 0.000000), (640.000000, 512.000000, 0.500000, 1.000000))

@tobyclh commented Jul 12, 2017

Hello, I implemented the viewport matrix as suggested:

def get_opencv_viewport(width, height):
    return np.array([0, height, width, -height])


def get_viewport_matrix(width, height):
    viewport = get_opencv_viewport(width, height)
    # print(viewport)
    viewport_matrix = np.zeros((4, 4))
    viewport_matrix[0, 0] = 0.5 * viewport[2]
    viewport_matrix[3, 0] = 0.5 * viewport[2] + viewport[0]
    viewport_matrix[1, 1] = 0.5 * viewport[3]
    viewport_matrix[3, 1] = 0.5 * viewport[3] + viewport[1]
    viewport_matrix[2, 2] = 0.5
    viewport_matrix[3, 2] = 0.5
    # print(viewport_matrix)
    return viewport_matrix

vpm = get_viewport_matrix(width, height)
points_2d = np.asarray([vpm.dot(projection).dot(modelview).dot(point) for point in points])
for point in points_2d:
    draw_circle(canvas, center.y + point[1], center.x + point[0], r=2)

But the result is still somehow off. Any suggestions?

@patrikhuber (Owner)

@tobyclh Hi! Hmm. I'll try to find some time to have a look over the next few days. At this point I still believe it could be some user error somewhere in your code, but I'm not sure.
Please ping me again here if I don't get back to you in the next few days.

@tobyclh commented Jul 13, 2017

Okay, I figured it out: it was the fact that the 68 landmarks use [x, y] but OpenCV draws in [y, x].
Make sure you convert them properly, apply the viewport matrix (the one I posted above is correct),
and recenter to the center of the image - then you will be good to go!
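
To illustrate the [x, y] vs [y, x] point with a small sketch (points_2d and image are placeholder names, not from the code above): NumPy indexes an image as image[row, col], i.e. image[y, x], whereas the cv2 drawing functions take the centre as an (x, y) tuple.

    import cv2

    # points_2d: (N, 2) array of projected (x, y) pixel coordinates
    for x, y in points_2d:
        # cv2.circle expects (x, y); NumPy element access would be image[int(y), int(x)]
        cv2.circle(image, (int(round(x)), int(round(y))), 2, (0, 255, 0), -1)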

@patrikhuber (Owner)

@tobyclh: Haha! Oh yea... I'm glad that you figured it out! Yep, OpenCV specifies most things in (y, x), because their images are "matrices" (cv::Mat) and the access there is specified as (row, column) - row is of course y in an image, and column is x. Whereas everywhere else, we normally use the [x, y] syntax.

@kylemcdonald Were you able to solve your problem too?

@tobyclh commented Nov 8, 2017

I haven't seen this before; it could be due to the new updates introduced in October - @patrikhuber may be able to confirm this.
But you can simply fill the last dimension with 1s.
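
For reference, a minimal NumPy sketch of what "fill the last dimension with 1s" means (assuming mesh.vertices comes back as N 3D points):

    import numpy as np

    vertices = np.asarray(mesh.vertices)                              # shape (N, 3)
    vertices_h = np.hstack([vertices, np.ones((len(vertices), 1))])  # shape (N, 4), homogeneous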

@wassryan commented Nov 8, 2017

@tobyclh I followed your idea, but why can't I get the fitted mesh to align with the face correctly? Any suggestions for my problem? Thank you!

@patrikhuber (Owner) commented Feb 28, 2018

I've just written a python script that projects the 3D mesh points to 2D and it works without a problem. So I don't think there are any issues in the library with regards to this. I've used the RenderingParameters (the variable pose in the demo.py file) that the function eos.fitting.fit_shape_and_pose returns and projected the mesh points with the modelview and projection matrices and the viewport, exactly like the draw_wireframe function does in C++, and it produces exactly the same results, projecting onto the right spots in the original image.

Anybody feel free to re-open if there are any questions left.
Thank you all for contributing!

@qinhaifangpku

@tobyclh Hi~ I wonder how I can get the projection matrix and the modelview. Your code shows how to get the viewport, but we need the modelview, viewport and projection matrices before we can finally get the projection results.
Thank you in advance!

@ivomarvan

I've just written a python script that projects the 3D mesh points to 2D and it works without a problem. So I don't think there are any issues in the library with regards to this. I've used the RenderingParameters (the variable pose in the demo.py file) that the function eos.fitting.fit_shape_and_pose returns and projected the mesh points with the modelview and projection matrices and the viewport, exactly like the draw_wireframe function does in C++, and it produces exactly the same results, projecting onto the right spots in the original image.

Anybody feel free to re-open if there are any questions left.
Thank you all for contributing!

@patrikhuber can you share the code in python?

@patrikhuber (Owner)

@qinhaifangpku @ivomarvan I don't have the code for that to hand - possibly I didn't save it back then. I might someday find the time to add it to the Python demo.py, but meanwhile it's really quite trivial and straightforward to do; it just takes a bit of effort and thinking and a few lines of code.

There is one open issue (#219) where there might be a problem with the projection in Python in Release builds or the PyPI build, but in the Debug configuration it worked fine for a colleague of mine.
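
For what it's worth, here is a rough sketch of what such a script could look like - not the original code from this thread. It assumes the Python RenderingParameters object (the pose returned by eos.fitting.fit_shape_and_pose) exposes get_modelview() and get_projection() like the C++ class does, and it reuses a get_viewport_matrix helper like the ones posted earlier in this thread; adjust the names if your version of the bindings differs.

    import numpy as np

    def project_mesh_to_2d(mesh, pose, width, height):
        # Assumed API: 4x4 modelview/projection matrices, as in the C++ RenderingParameters class.
        modelview = np.asarray(pose.get_modelview())
        projection = np.asarray(pose.get_projection())
        viewport = get_viewport_matrix(width, height)  # helper from the earlier comments

        vertices = np.asarray(mesh.vertices)  # (N, 3) model-space vertices
        if vertices.shape[1] == 3:
            # pad to (N, 4) homogeneous coordinates if needed
            vertices = np.hstack([vertices, np.ones((len(vertices), 1))])

        # Viewport * Projection * Modelview * v for every vertex, then divide by w:
        full = viewport @ projection @ modelview
        projected = (full @ vertices.T).T
        return projected[:, :2] / projected[:, [3]]  # (N, 2) pixel coordinates, origin top-left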

@francoisruty commented Aug 10, 2019

Okay, I figured it out: it was the fact that the 68 landmarks use [x, y] but OpenCV draws in [y, x].
Make sure you convert them properly, apply the viewport matrix (the one I posted above is correct),
and recenter to the center of the image - then you will be good to go!

Why do you need to recenter it to the center of the image, @tobyclh? I get everything you said except that part.
We shouldn't need to do this after multiplying by the modelview, projection and viewport matrices, should we?

@gigadeplex commented Sep 19, 2019

For future people having problems with this:
I made a little demo for rendering the points in 2D on the image in Python: https://github.com/gigadeplex/dlib-eos

@wpig-gif commented Dec 3, 2019

How would one modify the code from gigadeplex such that the texture is rendered as well? In other words, rendering the entire 3D model over the 2D image.
