Uninformative error when affinity matrix creation throws bad_alloc #104

Closed
alexsmartens opened this issue Mar 15, 2017 · 1 comment
alexsmartens commented Mar 15, 2017

Hi Pete,

I have successfully installed the latest version of CPD (on my laptop). I've compiled a nonrigid program based on the rigid example that ships with the distribution.

I've tested the nonrigid example first on small artificial datasets (10 points each) and then on real datasets (2M points each). The small datasets were processed fine, but I got the following error with the real ones:

./cpd-nonrigid pt_in.txt pt_target.txt
terminate called after throwing an instance of 'std::bad_alloc'
what(): std::bad_alloc
Aborted (core dumped)

What might be the problem? Have you had a chance to test the algorithm on large datasets?

@alexsmartens alexsmartens changed the title Input format Input format & build problem Mar 15, 2017
@alexsmartens alexsmartens changed the title Input format & build problem Input format Mar 15, 2017
@alexsmartens alexsmartens changed the title Input format Input size Mar 16, 2017
@gadomski (Owner) commented

Yeah, two million points is most likely too much for the nonrigid registration. The nonrigid registration creates an M×M affinity matrix, where M is the number of points in the source dataset. The nonrigid lowrank transformation might be able to handle that many points, but it does not yet exist in this version of the library (see #57).
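Just to put a number on that (a back-of-the-envelope calculation, not anything from the library itself): a dense double-precision M×M matrix at M = 2,000,000 needs roughly 32 TB, which is why the allocation fails.

```cpp
#include <cstdint>
#include <iostream>

int main() {
    // Rough estimate of the dense affinity matrix size for M source points.
    std::uint64_t m = 2'000'000;                  // points in the source dataset
    std::uint64_t bytes = m * m * sizeof(double); // one double per matrix entry
    std::cout << bytes / (1024.0 * 1024.0 * 1024.0 * 1024.0)
              << " TiB required for the M x M affinity matrix\n";
    return 0;
}
```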

In my experience, I usually chop up datasets into ~10-20 thousand point chunks and run each chunk through CPD, then re-aggregate the result. I'd be very interested to hear about successful runs with larger datasets.
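As a rough illustration of that chunk-and-reaggregate workflow (a sketch only; `register_chunk` and `register_in_chunks` are hypothetical names, with `register_chunk` standing in for whatever nonrigid entry point your CPD build exposes, and the split here is a plain row-wise partition with no overlap handling):

```cpp
#include <Eigen/Dense>
#include <algorithm>

using Matrix = Eigen::MatrixXd;

// Hypothetical stand-in: register one chunk of the moving cloud against the
// fixed cloud using the nonrigid entry point your CPD build provides.
Matrix register_chunk(const Matrix& fixed, const Matrix& moving_chunk);

// Split the moving cloud into row-wise chunks, register each chunk, and
// re-aggregate the transformed points into a single matrix.
Matrix register_in_chunks(const Matrix& fixed, const Matrix& moving,
                          Eigen::Index chunk_size = 20000) {
    Matrix result(moving.rows(), moving.cols());
    for (Eigen::Index start = 0; start < moving.rows(); start += chunk_size) {
        Eigen::Index n = std::min(chunk_size, moving.rows() - start);
        result.middleRows(start, n) =
            register_chunk(fixed, moving.middleRows(start, n));
    }
    return result;
}
```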

I'm going to keep this issue open because I should add better error reporting if the affinity matrix doesn't allocate — this is likely to be a common choke-point. Thanks for the report.
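Something along these lines (a sketch only, not the actual patch; the affinity-matrix construction inside the library may be structured differently):

```cpp
#include <Eigen/Dense>
#include <new>
#include <sstream>
#include <stdexcept>

// Wrap the affinity matrix allocation so an out-of-memory condition produces
// an actionable message instead of a bare std::bad_alloc abort.
Eigen::MatrixXd allocate_affinity(Eigen::Index m) {
    try {
        return Eigen::MatrixXd(m, m);
    } catch (const std::bad_alloc&) {
        std::ostringstream msg;
        msg << "Unable to allocate the " << m << " x " << m
            << " affinity matrix (approximately "
            << static_cast<double>(m) * static_cast<double>(m) * sizeof(double) / 1e9
            << " GB). Consider chunking the input or using fewer points.";
        throw std::runtime_error(msg.str());
    }
}
```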

@gadomski gadomski self-assigned this Mar 16, 2017
@gadomski gadomski changed the title Input size Uninformative error when affinity matrix creation throws bad_alloc Mar 16, 2017
@gadomski gadomski added the bug label Mar 16, 2017
gadomski added a commit that referenced this issue Mar 20, 2017
Better allocation error when creating affinity matrix