Regression tests #75
We need a way to make sure our changes and refactors do not break the current creators.
Maybe a shell script ...
The process could be, for each creator:
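One possible shape for such a per-creator check, sketched in Python; the creator name, file paths, and osm2gtfs command-line flags here are assumptions for illustration, not the process originally proposed:

```python
import filecmp
import subprocess

# Hypothetical regression check: run every creator and compare its output
# against a previously validated reference GTFS feed. Creator names, file
# paths, and command-line flags are assumptions, not the real interface.
CREATORS = ["accra"]

for creator in CREATORS:
    output = "tests/output/{}.zip".format(creator)
    reference = "tests/fixtures/{}_reference.zip".format(creator)
    subprocess.check_call(
        ["osm2gtfs", "--config", "creators/{}/config.json".format(creator),
         "--output", output]
    )
    # Byte-for-byte comparison; a real test would rather compare the GTFS
    # contents, since zip archives embed timestamps.
    assert filecmp.cmp(output, reference, shallow=False), creator
```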
Comments
We definitely need those tests. However, I would prefer them to be in Python rather than shell scripts. They could be a mix of unit and integration tests.
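A minimal sketch of what such a Python unit test could look like; the function under test is a made-up stand-in, not part of osm2gtfs:

```python
import unittest


def normalize_route_ref(ref):
    # Stand-in for a real osm2gtfs helper; defined here only so the
    # skeleton is self-contained and runnable.
    return ref.strip().upper()


class TestNormalizeRouteRef(unittest.TestCase):
    """Example of a small, fast unit test for one function."""

    def test_strips_whitespace_and_uppercases(self):
        self.assertEqual(normalize_route_ref(" 4a "), "4A")


if __name__ == "__main__":
    unittest.main()
```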
One way of testing could be to have a test folder with text files containing the expected Overpass output, and to load those files in the tests instead of actually calling Overpass.
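A sketch of that approach, assuming JSON responses stored under a fixtures directory (all file names here are hypothetical):

```python
import json
import os
import unittest

FIXTURES = os.path.join(os.path.dirname(__file__), "fixtures")


def load_overpass_fixture(name):
    # Read a stored Overpass response from the test folder instead of
    # querying the live API.
    with open(os.path.join(FIXTURES, name)) as handle:
        return json.load(handle)


class TestWithCannedOverpassData(unittest.TestCase):
    def test_fixture_has_elements(self):
        # The fixture file name is hypothetical; any captured Overpass
        # JSON response would do.
        data = load_overpass_fixture("accra_routes.json")
        self.assertIn("elements", data)


if __name__ == "__main__":
    unittest.main()
```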
As a preparation for writing automated tests, this commit creates a root module called osm2gtfs. Having a root module will make it easier to import classes in the test files. We added a new setup file in the root folder so osm2gtfs can be installed with pip; since the project dependencies are now defined there, we removed the requirements file. Also, after installing the project with pip, you can run the script by calling osm2gtfs in your terminal. Relates to #75
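The pip and console-script wiring described here usually looks like the following in a setup file; the metadata, the dependency, and the entry point target below are illustrative assumptions, not the exact contents of the commit:

```python
from setuptools import find_packages, setup

setup(
    name="osm2gtfs",
    version="0.1.0",  # illustrative version
    packages=find_packages(),
    install_requires=[
        # The project dependencies move here from the removed
        # requirements file.
        "transitfeed",  # assumed dependency, for illustration
    ],
    entry_points={
        "console_scripts": [
            # This line is what makes `osm2gtfs` callable in the terminal
            # after `pip install .`; the module:function target is assumed.
            "osm2gtfs=osm2gtfs.osm2gtfs:main",
        ]
    },
)
```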
There has been great progress on this, made by @prhod and @nlehuby. There is now a universal structure to include more tests and to call them, also in our new CI. I'm wondering a bit how we will deal with tests that may fail because of changes in the OSM database. At first glance, they seem problematic, as they might start to fail even though nothing has changed in the code, or code changes might not cause the tests to fail. I don't have a solution for this yet, but I think we need to think together to find a good one.
Aren't the Overpass responses in those great tests mocked, so that they are always the same? That should take care of this problem, right?
You are right @grote: the OSM data is mocked so we only test the code and not the data.
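With the standard library, this can be done by patching the method that performs the network call; the connector class below is a stand-in defined locally, not the real osm2gtfs class:

```python
from unittest import mock


class OsmConnector:
    """Stand-in for the real connector; only the method name matters here."""

    def get_routes(self):
        raise RuntimeError("would query Overpass over the network")


# A minimal canned response, standing in for a stored Overpass result.
canned = {"elements": []}

with mock.patch.object(OsmConnector, "get_routes", return_value=canned):
    # Inside the patch, the code under test sees the canned data and never
    # touches the network, so results do not drift with OSM edits.
    assert OsmConnector().get_routes() == canned
```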
Thanks for the clarification. It's perfect that we can mock it up like this. Regarding tests for creators/cities (like the Accra tests) and tests for core modules, I suggest following @jcfausto in his PR #78: we create two subdirectories in the test folder, one for the creator tests and one for the core tests.
The naming convention could be kept as simple as possible. What do you think?
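Such a layout could look roughly like this (all directory and file names are only a guess at the convention):

```
tests/
    creators/
        test_accra.py          # regression tests per creator/city
    core/
        test_osm_connector.py  # unit tests per core module
    fixtures/
        accra_routes.json      # canned Overpass responses
```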
Hello everyone, PR #83 introduces the testing directory structure for core components, as discussed in this thread.
That's really good news! 👍 👍 I think we should have regression tests for all creators in the CI, though.
Here are tests for all creators: #134
Thanks a lot @xamanu. We should also update the documentation to suggest writing tests for new creators.
Yes, I agree @nlehuby: we should document how to create tests for new creators, and encourage them, or maybe even make them a condition before a new creator can be integrated. What do you think? Updating here on the progress: there is a discussion in #136, and it also affects #130: do we want every creator to have tests, or can we rely on only one, as long as they are using the standard creator implementation?
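A documented test template for contributors could be as small as this (all names are placeholders, not an existing API):

```python
import unittest


class TestMyCityCreator(unittest.TestCase):
    """Template regression test for a newly contributed creator.

    Replace "mycity" with the creator name, store canned Overpass
    responses and a reference GTFS feed under the fixtures directory,
    and compare the generated feed against the reference.
    """

    def test_generated_gtfs_matches_reference(self):
        self.skipTest("placeholder: add fixtures for the new creator")


if __name__ == "__main__":
    unittest.main()
```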
I agree, we should have all integrated providers tested by the CI.
All creators now have regression tests 🎉 Can we close this issue?