
How do you run the tests? #297

Closed
psahgal opened this issue Jan 3, 2017 · 3 comments

Comments

@psahgal

psahgal commented Jan 3, 2017

My team and I have been working on a project using Starscream, and we've had to make some minor modifications for our use case, as well as some bug fixes. We were hoping to submit some of those changes for a pull request. I wanted to run the tests before submitting, but I'm having trouble getting the tests to pass, even before applying the changes we've made. Can you explain how to run the test suite for this project?

I noticed the project is using the Autobahn test suite, so I set that up and ran the AutobahnTest project. However, I'm seeing some failures on test cases, so I'm not sure what I'm doing wrong. There was no information on which mode to use for the wstest command provided by the Autobahn test suite, so I tried wstest -m fuzzingserver and it seemed to work: some, but not all, of the test cases pass. Am I doing something wrong? Are there some outstanding issues with getting the library to pass all of the Autobahn test cases?

Thanks!
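[Editor's note: for reference, a typical Autobahn fuzzing-server setup looks roughly like the sketch below. The package name and defaults follow the Autobahn test suite's documentation, not this thread, so treat them as assumptions.]

```shell
# Install the Autobahn WebSocket test suite (historically a Python 2 package).
pip install autobahntestsuite

# Start the fuzzing server. By default it accepts client connections on
# ws://127.0.0.1:9001 and serves its HTML reports on http://localhost:8080.
wstest -m fuzzingserver

# With the server running, launch the AutobahnTest example app so it connects
# to the server as a client, then read the generated client reports in the browser.
```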

@acmacalister
Collaborator

Which tests are failing? Running the fuzzing server is correct. Every once in a while, some of the tests will fail at random, maybe one or two, but it is never consistent which ones fail, if any. Have you looked at AutobahnTest in the examples folder?

@psahgal
Author

psahgal commented Jan 9, 2017

@acmacalister I ran the fuzzing server and AutobahnTest twice. The first run, I saw 249 failures out of 519 cases. The second time, I saw 250. They're not the same tests. Is this consistent with what you're seeing on your end? (I'm guessing it's not...)

@daltoniam
Owner

Ah, this is our fault: the logging in the Autobahn tester app. The results we check are under the client reports at http://localhost:8080. The logging in that test app wasn't finished to handle all the cases, so it prints failures for tests that aren't implemented (the compression cases in sections 12 & 13).

TL;DR:
Check http://localhost:8080 for results instead of the print output in the Autobahn tester app 😄.
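[Editor's note: the unimplemented compression cases can also be excluded up front in the fuzzing server's spec file, so they never appear as failures. This is a sketch; the key names follow the Autobahn test suite's fuzzingserver.json format and are assumptions, not taken from this thread.]

```json
{
  "url": "ws://127.0.0.1:9001",
  "outdir": "./reports/clients",
  "cases": ["*"],
  "exclude-cases": ["12.*", "13.*"]
}
```

The spec file is passed to wstest with the -s flag, e.g. `wstest -m fuzzingserver -s fuzzingserver.json`.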
