This repository has been archived by the owner on Feb 28, 2018. It is now read-only.

Use Mixer for reimbursement view tests #171

Merged: 6 commits merged into master from cacarrara:use-mixer-for-fixtures on May 28, 2017

Conversation

@caiocarrara (Contributor)

This change makes use of mixer[1] for fixtures in Reimbursement view tests.

[1] - https://github.com/klen/mixer/
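
For context, a minimal sketch of the fixture style mixer enables inside a Django TestCase, based on calls that appear later in this diff (mixer.blend and mixer.cycle are mixer's documented helpers; the class name here is hypothetical):

from django.test import TestCase
from mixer.backend.django import mixer

from jarbas.core.models import Reimbursement


class ReimbursementTestCase(TestCase):

    def setUp(self):
        # one object with random but valid field values
        self.reimbursement = mixer.blend(Reimbursement)
        # or several at once, as the new view tests do
        mixer.cycle(5).blend(Reimbursement)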

@caiocarrara (Contributor Author)

@cuducos, take a look. More than a pull request, this is a proposal to improve the tests.

@coveralls

Coverage Status

Changes Unknown when pulling a35b6d9 on cacarrara:use-mixer-for-fixtures into datasciencebr:master.

@caiocarrara changed the title from "Use mixer for reimbursement view tests" to "Use Mixer for reimbursement view tests" on May 26, 2017
@jtemporal requested a review from cuducos on May 26, 2017 at 11:39
@cuducos (Collaborator) left a comment

Hi @cacarrara, that's awesome! As I told @berinhard elsewhere, I was kind of lazy/complacent about addressing fixtures with a proper tool. mixer sounds like a really concise and straightforward way to do it! Many thanks for that.

I made some comments. Most of them concern this PR. Some are more overarching, toward the project as a whole (say, using mixer everywhere). Please let me know if you want to handle these overarching edits; otherwise I'll wait for the changes related strictly to this PR, then merge to a secondary branch where I can cover the other tests before bringing it to master.

.travis.yml Outdated
@@ -37,6 +37,7 @@ install:
- psql -U postgres -c 'create database "jarbas";'
- yarn install
- python -m pip install -r requirements.txt coveralls
- python -m pip install -r requirements-test.txt coveralls
Collaborator (@cuducos):

We're trying to install coveralls twice. I'd change the strategy here (roughly as sketched below):

  • Rename requirements-test.txt to requirements-dev.txt
  • Include -r requirements.txt as the first line of the new requirements-dev.txt
  • Move django-test-without-migrations from requirements.txt to requirements-dev.txt
  • Use only requirements-dev.txt in .travis.yml
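
For reference, a rough sketch of that layout (the exact contents of requirements-dev.txt are assumptions; only django-test-without-migrations is named in the steps above):

# requirements-dev.txt
-r requirements.txt
coveralls
django-test-without-migrations
mixer

# .travis.yml, install section
- python -m pip install -r requirements-dev.txt

This way Travis installs everything through a single file and coveralls is listed only once.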


from django.core.cache import cache
from django.shortcuts import resolve_url
from django.test import TestCase

Collaborator (@cuducos):

Unnecessary line: mixer can be imported in the same block as the Django imports (third-party imports, according to PEP 8).
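
A sketch of the grouping suggested above, assuming mixer's documented Django backend import; the other lines mirror imports quoted elsewhere in this diff:

from django.core.cache import cache
from django.shortcuts import resolve_url
from django.test import TestCase
from mixer.backend.django import mixer

from jarbas.core.models import Reimbursement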

for fixture in data:
    Reimbursement.objects.create(**fixture)

mixer.cycle(5).blend(Reimbursement)
Collaborator (@cuducos):

Actually, the 5 reimbursements created before had controlled differences to make the tests more precise. Surely the previous code was poor and yours is way better. So, just to confirm: 5 here is quite arbitrary (before, we needed five because of the combination of differences required to properly test API requests), right? I mean, when we actually need something specific, we now create it in the test itself, not during setUp, is that right? If so we might reduce from 5 to, say, 3. What do you think?
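
A hedged sketch of that approach (the year field and value are hypothetical overrides, used only to show a specific record being created in the test itself rather than in setUp):

def setUp(self):
    mixer.cycle(3).blend(Reimbursement)

def test_something_specific(self):
    # the one record this test actually cares about
    mixer.blend(Reimbursement, year=1970)
    self.assertEqual(1, Reimbursement.objects.filter(year=1970).count())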

)
url = '{}?{}'.format(self.url, urlencode(search_data))
Collaborator (@cuducos):

❤️

from jarbas.core.models import Reimbursement
from jarbas.core.tests import sample_reimbursement_data, suspicions
Collaborator (@cuducos):

If we're replacing this fixture, we can clean up jarbas/core/tests/__init__.py and use mixer in the other tests that depend on it… using mixer only in some places, IMHO, tends to lead to a bigger mess ; )
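
For illustration, a hedged sketch of that cleanup in another test module (the document_id override is hypothetical):

# before: every module imports the shared dict from jarbas/core/tests/__init__.py
from jarbas.core.tests import sample_reimbursement_data
reimbursement = Reimbursement.objects.create(**sample_reimbursement_data)

# after: mixer builds the object locally, overriding only what the test needs
reimbursement = mixer.blend(Reimbursement, document_id=42)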

if not hasattr(self.reimbursement, result_attr):
    continue
expected_value = getattr(self.reimbursement, result_attr)
self.assertEqual(str(result_value), str(expected_value))
Collaborator (@cuducos):

This logic is pretty clever, but I think it makes what we're actually testing less obvious… IMHO it's a good idea to keep logic in tests extremely simple so they stay obvious. That's why I prefer the long dict here: at a glance one can know what to expect as the result of test_contents, the proper types, etc. What do you think?

Contributor Author (@caiocarrara):

@cuducos, good points. I'll try not to be too long-winded.

I don't completely agree with that. The test here should check that each API response attribute is equal to the corresponding object attribute in the database, and that's what is being done. Of course:

response = loads(api_response)
mydict = dict(one=1, two=2)
assert response == mydict

is at first glance much more obvious than:

response = loads(api_response)
mydict = dict(one=1, two=2)
for key, value in mydict.items():
    assert response[key] == value

However, I still think the positive consequences outweigh the negative ones. Think about changing (for any reason) the response content structure, not the values: if we change an attribute name or even an attribute's data type, we'll have to adapt the tests only because we changed some implementation detail, not a business rule nor an application requirement. On the other hand, if we change some business logic in a way that makes the API response differ from the object attribute value, the test will still fail.

I think the tests should be abstract and resilient enough to test requirements properly without being a pain for developers.

These are just my thoughts; let me know what you think.

PS: on this topic, I would recommend this post: http://blog.cleancoder.com/uncle-bob/2017/05/05/TestDefinitions.html

Collaborator (@cuducos):

If we change an attribute name or even an attribute's data type, we'll have to adapt the tests only because we changed some implementation detail, not a business rule nor an application requirement.

If we change an attribute name we break the JSON API. So… yep, I think a failing test would be welcome in that scenario ; )

Collaborator (@cuducos):

In addition to my former comment:

The test you're suggesting is indeed interesting, but as a test_context (i.e. testing the data we're getting from the model and passing to the template). With the class-based views from DRF, though, this step is abstracted away (and consequently not tested).

What test_contents actually tests is whether the JSON output is what our frontend expects. So even if we change some attribute name, we can make some tweaks to avoid breaking compatibility with the frontend (i.e. the tests will pass because the JSON won't change), or, surely better, have a failing test to remind us we've broken something and the JSON is no longer what we expected, so we can adapt the frontend and our expectations…
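
A hedged sketch of that explicit style (the field names and values are hypothetical, not the project's actual JSON contract; loads is assumed to be imported as in the snippets below):

def test_contents(self):
    resp = self.client.get(self.url)
    content = loads(resp.content.decode('utf-8'))
    expected = {
        'document_id': 42,
        'total_net_value': '4.56',
        'suspicions': None,
    }
    self.assertEqual(expected, content)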

Contributor Author (@caiocarrara):

I get your point and you're right. I was treating test_contents as a unit test, but it's actually an integration test. I'm going to do it.

content = loads(resp.content.decode('utf-8'))
self.assertEqual(expected, content)

@patch('jarbas.core.models.head')
def test_fetch_non_existing_receipt(self, mocked_head):
    mocked_head.return_value.status_code = 404
    cache.clear()
Collaborator (@cuducos):

Any specific reason to remove that?

Contributor Author (@caiocarrara):

I had understood it was only preparing the test to fetch a non-existing receipt from a reimbursement without a receipt. So I did the proper preparation instead of clearing the cache, which seemed to be a workaround. Since it's now easy and simple to create new objects, I did it that way.

Was there any other reason to clear the cache?
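
For illustration, a hedged sketch of that preparation (the receipt_url and receipt_fetched field names are assumptions about the Reimbursement model, used only to show building the right object instead of clearing the cache):

@patch('jarbas.core.models.head')
def test_fetch_non_existing_receipt(self, mocked_head):
    mocked_head.return_value.status_code = 404
    # hypothetical fields: a reimbursement that has no receipt yet
    reimbursement = mixer.blend(Reimbursement, receipt_url=None, receipt_fetched=False)
    self.assertIsNone(reimbursement.receipt_url)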

Collaborator (@cuducos):

Hum… if for some reason the URL was accessed before, the cache might fool the test. As we have no control over test ordering, this might happen… but it's not a sure thing. Let's remove it, and if this becomes a problem we'll put it back ; )

    return list(map(lambda x: cast(x), parts)) if cast else parts
except ValueError:
    # this is far from good
    return list()
Collaborator (@cuducos):

I'm assuming the except ValueError might happen in cast, right? If so, I don't think we should silence this error. If the user is trying to convert something unconvertible, it's better to stop them than to lose data silently.
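
A minimal sketch of the non-silencing alternative, assuming the surrounding helper splits a string and optionally casts each part (the function name and signature are hypothetical):

def split_and_cast(value, cast=None, separator=','):
    parts = [part.strip() for part in value.split(separator)]
    if not cast:
        return parts
    # let a ValueError raised by cast propagate instead of returning []
    return [cast(part) for part in parts]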

@caiocarrara (Contributor Author)

@cuducos, do you think we should change all tests within a single PR? I think we could do one PR per test file. That would be easier to review and approve, and we'd minimize the chances of errors.

Temporarily having only some tests running with mixer doesn't seem to raise any issue.

I can handle the test refactoring.

@cuducos (Collaborator) left a comment

@cuducos, do you think we should change all tests within a single PR?

That possibility crossed my mind… but we can split it into different PRs, no problem.


@coveralls

Coverage Status

Changes Unknown when pulling 5268291 on cacarrara:use-mixer-for-fixtures into datasciencebr:master.

@caiocarrara (Contributor Author)

@cuducos and @jtemporal, the changes are done. Take a look 😜

@cuducos (Collaborator) commented on May 28, 2017

Wow! Many thanks, @cacarrara <3

@cuducos merged commit 92fb725 into okfn-brasil:master on May 28, 2017
@cuducos mentioned this pull request on May 28, 2017
@caiocarrara deleted the use-mixer-for-fixtures branch on May 29, 2017 at 01:15