
Regression tests #75

Closed
nlehuby opened this issue Oct 7, 2017 · 15 comments

Comments

@nlehuby
Collaborator

nlehuby commented Oct 7, 2017

We need a way to make sure our changes and refactors do not break the current creators.
Maybe a shell script ...

The process could be, for each creator (see the sketch after this list):

  • launch osm2gtfs with mocked or persisted OSM input data
  • unzip the GTFS output file
  • display a diff of all the csv files
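
For illustration, a minimal sketch of such a run in Python (the CLI flags, file names and reference layout here are assumptions, not the actual interface):

```python
# Sketch of a per-creator regression run: the osm2gtfs invocation, paths and
# reference layout below are assumptions for illustration only.
import difflib
import subprocess
import zipfile
from pathlib import Path


def regression_diff(config_file, reference_dir, output_zip="creator-output.zip"):
    # Run osm2gtfs against mocked or persisted OSM input data.
    subprocess.run(["osm2gtfs", "--config", config_file, "--output", output_zip],
                   check=True)

    # Unzip the GTFS output and print a diff of every CSV file against the
    # previously recorded reference output.
    with zipfile.ZipFile(output_zip) as gtfs:
        for name in gtfs.namelist():
            produced = gtfs.read(name).decode("utf-8").splitlines()
            expected = (Path(reference_dir) / name).read_text(encoding="utf-8").splitlines()
            for line in difflib.unified_diff(expected, produced,
                                             fromfile=f"expected/{name}",
                                             tofile=f"produced/{name}",
                                             lineterm=""):
                print(line)
```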
@grote
Owner

grote commented Oct 7, 2017

We definitely need those tests. However, I would prefer these to be in Python rather than shell scripts. It could be a mix of unit and integration tests.

@Skippern

One way of testing could be to have a test folder with text files containing expected Overpass output, and to load them during tests instead of actually calling Overpass.
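
A possible fixture-based test, assuming the Overpass queries go through overpy; the fixture name, directory layout and the code under test are illustrative:

```python
from pathlib import Path
from unittest import mock

import overpy

FIXTURES = Path(__file__).parent / "fixtures"


def load_overpass_fixture(name):
    """Parse a stored Overpass JSON response instead of hitting the API."""
    data = (FIXTURES / name).read_text(encoding="utf-8")
    return overpy.Overpass().parse_json(data)


def test_creator_with_cached_overpass_data():
    canned = load_overpass_fixture("accra_routes.json")
    # Every call to Overpass.query() now returns the canned result,
    # so the test exercises only our code, never the live API.
    with mock.patch.object(overpy.Overpass, "query", return_value=canned):
        ...  # run the creator / OSM connector code under test here
```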

grote pushed a commit that referenced this issue Oct 19, 2017
As a preparation for writing automated tests, in this commit, we are
creating a root module called osm2gtfs.

Having a root module will make it easier to import classes in the test
files.

We added a new setup file in the root folder so we can install osm2gtfs
with pip. Since we are now defining the project dependencies there, we
removed the requirements file.

Also, after installing this project with pip, you can run the script by
calling osm2gtfs in your terminal.

Relates to #75
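
For reference, a pip-installable setup with a console entry point looks roughly like this (version, dependency list and the module:function path are assumptions, not the actual file):

```python
# setup.py -- rough sketch of a pip-installable package exposing an
# `osm2gtfs` command; the entry point path and metadata are illustrative.
from setuptools import setup, find_packages

setup(
    name="osm2gtfs",
    version="0.1.0",
    packages=find_packages(),
    install_requires=[
        # project dependencies, previously listed in requirements.txt
    ],
    entry_points={
        "console_scripts": [
            "osm2gtfs = osm2gtfs.osm2gtfs:main",  # assumed module:function path
        ],
    },
)
```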
@pantierra
Contributor

There has been great progress on this, made by @prhod and @nlehuby. There is now a universal structure for including more tests and running them, also in our new CI.

I'm wondering how we will deal with tests that may fail because of changes in the OSM database. At first glance, they seem problematic: they might start to fail even though nothing has changed in the code, or, conversely, code changes might not cause a test to fail when they should. I don't have a solution for this yet, but I think we need to find a good one together.

@grote
Owner

grote commented Dec 23, 2017 via email

@nlehuby
Collaborator Author

nlehuby commented Dec 23, 2017

You are right @grote: the OSM data is mocked so we only test the code and not the data.

@pantierra
Contributor

pantierra commented Dec 23, 2017

Thanks for the clarification. Perfect that we can mock it like this.

Regarding tests for creators/cities (like the Accra tests) and tests for core modules, I suggest following @jcfausto in his PR #78: we create two subdirectories in tests:

  • core - Where tests for the core modules come in.
  • creators - Where tests for the different creators/cities live.

The simplest naming convention would be tests_<SELECTOR>.py for creators, for example tests_gh_accra.py or tests_ni_managua.py; and tests_<FILENAME>.py for core modules, like tests_osm_connector.py or tests_configuration.py.

What do you think?
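
Laid out as a tree, the proposed structure would look like this (file names taken from the examples above):

```
tests/
├── core/
│   ├── tests_osm_connector.py
│   └── tests_configuration.py
└── creators/
    ├── tests_gh_accra.py
    └── tests_ni_managua.py
```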

@prhod
Collaborator

prhod commented Dec 25, 2017

Hello everyone,
@xamanu: I agree with your proposal (two directories, and the naming convention).
@grote and @nlehuby: to be precise, the OSM data is mocked for stops and routes, but the search for unnamed stops has been disabled because it is not necessary for Accra. It should be added back, maybe as a unit test of a core function?

@pantierra
Contributor

PR #83 introduces the testing directory structure for core components, as discussed in this thread.

@pantierra
Contributor

The tests for Accra have been implemented, and they are also on the way for Managua (#131). I guess we can close this issue. What do you think, @nlehuby?

@nlehuby
Collaborator Author

nlehuby commented Mar 1, 2018

That's really good news! 👍 👍

I think we should have regression tests for all creators in CI, though.
But we can close this issue and create specific ones for the incofer and fenix creators.

@pantierra
Contributor

Here are tests for all creators: #134

@nlehuby
Collaborator Author

nlehuby commented Mar 4, 2018

Thanks a lot, @xamanu.

We should also update the documentation to suggest writing tests for new creators.

@pantierra
Contributor

Yes, I agree, @nlehuby: we should document how to create tests for new creators and encourage, or maybe even require, implementing them before a creator can be integrated. What do you think?

Updating here on the progress:

There is a discussion in #136 and it also affects #130: do we want every creator to have tests, or can we rely on only one, as long as they are using the standard creator implementation?

@nlehuby
Collaborator Author

nlehuby commented Apr 13, 2018

I agree, we should have all integrated providers tested by CI.

@nlehuby
Collaborator Author

nlehuby commented Feb 3, 2020

All creators now have regression tests 🎉 Can we close this issue?
