Add tests #13

Open
edgarriba opened this issue Jul 21, 2016 · 7 comments

edgarriba (Contributor)

Should we add some tests? Does anyone know if Travis CI supports OpenCL, CUDA, ...?

naibaf7 (Owner) commented Jul 21, 2016

@edgarriba
Yes, it is possible to add OpenCL and CUDA tests with Travis CI.
We'll have to look into this, but I'd need to add reference CPU kernels for testing first.
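
For illustration, a minimal sketch of what such a reference CPU kernel could look like: a naive NCHW convolution with stride 1 and no padding. The function name and signature are hypothetical, not libdnn's actual API.

```cpp
// Hypothetical CPU reference convolution for comparison against GPU output.
// Single image, NCHW layout, stride 1, no padding.
#include <cstddef>
#include <vector>

// out[co][y][x] = sum over ci, ky, kx of in[ci][y+ky][x+kx] * w[co][ci][ky][kx]
std::vector<float> reference_conv2d(const std::vector<float>& in,
                                    const std::vector<float>& weights,
                                    std::size_t channels_in, std::size_t height,
                                    std::size_t width, std::size_t channels_out,
                                    std::size_t kernel_h, std::size_t kernel_w) {
  const std::size_t out_h = height - kernel_h + 1;
  const std::size_t out_w = width - kernel_w + 1;
  std::vector<float> out(channels_out * out_h * out_w, 0.0f);
  for (std::size_t co = 0; co < channels_out; ++co) {
    for (std::size_t y = 0; y < out_h; ++y) {
      for (std::size_t x = 0; x < out_w; ++x) {
        float acc = 0.0f;
        for (std::size_t ci = 0; ci < channels_in; ++ci) {
          for (std::size_t ky = 0; ky < kernel_h; ++ky) {
            for (std::size_t kx = 0; kx < kernel_w; ++kx) {
              const float pixel =
                  in[(ci * height + (y + ky)) * width + (x + kx)];
              const float weight =
                  weights[((co * channels_in + ci) * kernel_h + ky) * kernel_w + kx];
              acc += pixel * weight;
            }
          }
        }
        out[(co * out_h + y) * out_w + x] = acc;
      }
    }
  }
  return out;
}
```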

edgarriba (Contributor, Author)

Oh, nice! But I think we could start by testing whether the project compiles with OpenCL and CUDA.

naibaf7 (Owner) commented Jul 21, 2016

@edgarriba
I don't think these tests would help that much, as they are not representative of real-world issues (such as linking an actual framework to libdnn, or getting it to work with a particular flavor of OpenCL/CUDA).
Most problems come down to the particular setups that users have. That is why I would rather add unit tests for kernel compilation and a comparison against a CPU reference convolution instead of going with a typical Travis CI setup.
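
As a rough illustration of that kind of unit test, a sketch using GTest that checks the GPU result against the CPU reference sketched above; run_libdnn_conv2d() is a hypothetical wrapper, not libdnn's real API.

```cpp
#include <cstddef>
#include <random>
#include <vector>
#include <gtest/gtest.h>

// Forward declarations: reference_conv2d() is the CPU reference sketched
// earlier; run_libdnn_conv2d() is a hypothetical wrapper around the GPU path.
std::vector<float> reference_conv2d(const std::vector<float>&, const std::vector<float>&,
                                    std::size_t, std::size_t, std::size_t,
                                    std::size_t, std::size_t, std::size_t);
std::vector<float> run_libdnn_conv2d(const std::vector<float>&, const std::vector<float>&,
                                     std::size_t, std::size_t, std::size_t,
                                     std::size_t, std::size_t, std::size_t);

TEST(ConvolutionTest, MatchesCpuReference) {
  const std::size_t ci = 3, h = 16, w = 16, co = 8, kh = 3, kw = 3;
  std::mt19937 rng(42);
  std::uniform_real_distribution<float> dist(-1.0f, 1.0f);
  std::vector<float> input(ci * h * w), weights(co * ci * kh * kw);
  for (auto& v : input) v = dist(rng);
  for (auto& v : weights) v = dist(rng);

  const std::vector<float> expected =
      reference_conv2d(input, weights, ci, h, w, co, kh, kw);
  const std::vector<float> actual =
      run_libdnn_conv2d(input, weights, ci, h, w, co, kh, kw);  // placeholder

  ASSERT_EQ(expected.size(), actual.size());
  for (std::size_t i = 0; i < expected.size(); ++i) {
    EXPECT_NEAR(expected[i], actual[i], 1e-4f) << "mismatch at index " << i;
  }
}
```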

CNugteren commented Jul 21, 2016

I agree. That's also what I have with CLBlast. I still have Travis CI set up, but just to check whether it compiles on the different platforms with the different compilers. Next to that, I have a test executable which users have to run on their own systems: each OpenCL device behaves differently.

There is still the option of incorporating the libDNN kernels into the CLBlast framework. That would make the testing infrastructure and other things directly available (e.g. command-line parsing, error checking, build structure, auto-tuning).
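
As an illustration of such a user-run test executable, a minimal sketch that enumerates every OpenCL device so the same tests could be run on each of them; run_tests_on_device() is a placeholder, not CLBlast or libdnn code.

```cpp
#include <cstdio>
#include <vector>
#include <CL/cl.h>

int main() {
  // Enumerate all OpenCL platforms installed on the user's machine.
  cl_uint num_platforms = 0;
  clGetPlatformIDs(0, nullptr, &num_platforms);
  std::vector<cl_platform_id> platforms(num_platforms);
  clGetPlatformIDs(num_platforms, platforms.data(), nullptr);

  for (cl_platform_id platform : platforms) {
    // Enumerate all devices on this platform (CPU, GPU, accelerator).
    cl_uint num_devices = 0;
    if (clGetDeviceIDs(platform, CL_DEVICE_TYPE_ALL, 0, nullptr, &num_devices) != CL_SUCCESS)
      continue;
    std::vector<cl_device_id> devices(num_devices);
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_ALL, num_devices, devices.data(), nullptr);

    for (cl_device_id device : devices) {
      char name[256] = {0};
      clGetDeviceInfo(device, CL_DEVICE_NAME, sizeof(name), name, nullptr);
      std::printf("Running tests on device: %s\n", name);
      // run_tests_on_device(device);  // placeholder: compile kernels, compare to CPU reference
    }
  }
  return 0;
}
```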

edgarriba (Contributor, Author)

I agree with that. However, what I mean is what @CNugteren suggests: just check that the project is not broken and compiles on different platforms and with different compilers.

BTW, @CNugteren, I've seen in your repo that you are using Catch for testing. Any advantages/disadvantages vs. GTest? We're currently discussing which testing framework to include in tiny-cnn: tiny-dnn/tiny-dnn#242

naibaf7 (Owner) commented Jul 21, 2016

@CNugteren
Thanks. I think I'll keep libdnn standalone for now though.
I'm also interested in your experiences with Catch vs. GTest, just like @edgarriba!

CNugteren

It's been a while since the latest GTest release; I thought it was no longer maintained, but now I do see a repository on GitHub, so things might have changed. Catch seems more modern, but there the author doesn't seem to be able to keep up with the pull requests.

I used Catch in some of my projects and it works nicely; it feels cleaner than GTest. I also like the header-only approach: you don't have to require users to install GTest separately. However, I don't use Catch for CLBlast because, if I recall correctly, it didn't have a nice way to test automatically over multiple data types. Instead, I have my own BLAS-specific testing infrastructure, which actually isn't that many lines of code to write.
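
For reference, the per-type coverage mentioned above is the sort of thing GTest offers through typed tests; a minimal sketch is below (the axpy() helper is only an illustration, not CLBlast code).

```cpp
#include <cstddef>
#include <vector>
#include <gtest/gtest.h>

// Toy BLAS-like routine used only to demonstrate typed tests: alpha*x + y.
template <typename T>
std::vector<T> axpy(T alpha, const std::vector<T>& x, const std::vector<T>& y) {
  std::vector<T> result(x.size());
  for (std::size_t i = 0; i < x.size(); ++i) result[i] = alpha * x[i] + y[i];
  return result;
}

// The same test body is instantiated once per type in the list.
template <typename T>
class AxpyTest : public ::testing::Test {};

using BlasTypes = ::testing::Types<float, double>;
TYPED_TEST_SUITE(AxpyTest, BlasTypes);  // spelled TYPED_TEST_CASE in older GTest versions

TYPED_TEST(AxpyTest, ComputesAlphaXPlusY) {
  const std::vector<TypeParam> x = {1, 2, 3};
  const std::vector<TypeParam> y = {4, 5, 6};
  const std::vector<TypeParam> result = axpy(TypeParam(2), x, y);
  EXPECT_EQ(result, (std::vector<TypeParam>{6, 9, 12}));
}
```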
