
[REVIEW] FIL benchmark now works again with gpuarray-c input type #2209

Merged
8 commits, merged Jul 13, 2020

Conversation

canonizer (Contributor)

  • FIL benchmark now works again with gpuarray-c input type
  • simplified _treelite_fil_accuracy_score

@canonizer canonizer requested a review from a team as a code owner May 6, 2020 20:05
@GPUtester (Contributor)

Please update the changelog in order to start CI tests.

View the gpuCI docs here.

@canonizer canonizer changed the title FIL benchmark now works again with gpuarray-c input type [REVIEW] FIL benchmark now works again with gpuarray-c input type May 6, 2020
@JohnZed (Contributor) left a comment

More of a question than a review below...

Also, is there a quick test we can expand to use gpuarray-c so it doesn't break again?
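The kind of regression test being asked for could be sketched like this. This is an illustrative stand-in, not the actual cuml test code: `SUPPORTED_INPUT_TYPES` and `run_benchmark` are hypothetical names, and in the real suite this would be a `pytest.mark.parametrize` over the benchmark entry point in python/cuml/test/test_benchmark.py.

```python
# Hypothetical smoke test: iterate over every supported input type, including
# "gpuarray-c", so a regression in any one of them fails the test suite
# instead of only surfacing when someone runs the benchmark by hand.
SUPPORTED_INPUT_TYPES = ["numpy", "cupy", "gpuarray", "gpuarray-c"]

def run_benchmark(input_type):
    """Stand-in for the benchmark entry point: reject unknown input types."""
    if input_type not in SUPPORTED_INPUT_TYPES:
        raise TypeError("Received unsupported input type")
    return "ran with " + input_type

def test_all_input_types():
    # Each supported type must run without raising.
    for input_type in SUPPORTED_INPUT_TYPES:
        assert run_benchmark(input_type).endswith(input_type)
```

Parametrizing over the full list of input types is what keeps a rarely-used type like gpuarray-c covered by CI.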

else:
    raise TypeError("Received unsupported input type")
# convert the input if necessary
y_pred1 = (y_pred.copy_to_host() if cuda.devicearray.is_cuda_ndarray(y_pred)
Contributor:

Why do these need to be host arrays? Seems like it should work with gpu arrays too

Contributor:

I do agree that the new code is simpler, though

@canonizer (Contributor Author), May 6, 2020:

It doesn't have to be a host array. However, it needs to be an array type that behaves like a numpy array, e.g. supports the > operator. Also, this part is not performance-critical, so a host array should be fine.

Any specific suggestions for a GPU array and a standard way to convert into it?
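The point above can be illustrated with a minimal sketch. `to_host` and `accuracy_score_from_probs` are hypothetical names, and the duck-typed `hasattr` check stands in for the `cuda.devicearray.is_cuda_ndarray` test used in the actual code; the only requirement is an array type with numpy-style operators such as `>`.

```python
import numpy as np

def to_host(arr):
    """Hypothetical helper: return a numpy-compatible host array.

    Numba device arrays expose copy_to_host(); host arrays pass through
    unchanged. (The real code checks cuda.devicearray.is_cuda_ndarray
    rather than duck-typing on the attribute.)
    """
    return arr.copy_to_host() if hasattr(arr, "copy_to_host") else arr

def accuracy_score_from_probs(y_true, y_pred, threshold=0.5):
    """Threshold probabilities with the `>` operator, then score accuracy."""
    y_true = to_host(y_true)
    y_pred_labels = to_host(y_pred) > threshold
    return float(np.mean(y_pred_labels == y_true))
```

As for a GPU-side alternative: CuPy arrays support the same numpy-style operators, so converting with `cupy.asarray` (which accepts numba device arrays via the CUDA array interface) would be one option if this path ever became performance-critical.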

@canonizer (Contributor Author)

Added a test. Could you take another look?

@JohnZed (Contributor) left a comment

One minor suggestion, but pre-approving as it all LGTM

python/cuml/test/test_benchmark.py (outdated, resolved)
@canonizer canonizer requested review from a team as code owners July 3, 2020 17:56
@canonizer canonizer changed the base branch from branch-0.14 to branch-0.15 July 3, 2020 18:04
@raydouglass raydouglass removed the request for review from a team July 7, 2020 15:19
@JohnZed JohnZed merged commit 2b7f38a into rapidsai:branch-0.15 Jul 13, 2020