Use `npm install` to set up the project.
Use `./run-tests.sh` to run the unit tests.
Use `./run-test-coverage.sh` to get test coverage stats.
During the refactoring process I tried to apply some good practices, such as the following (there's a small sketch after this list):
- removing code duplication and pointless assignments
- keeping loops to a single level of nesting
- removing negated ifs
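As a rough illustration of the last point, here is a minimal sketch in JavaScript. The function and variable names are hypothetical, not taken from the kata code; only the item name "Aged Brie" comes from the requirements:

```js
// Before: the negated condition forces the reader to mentally invert it.
function updateQualityBefore(item) {
  if (item.name != "Aged Brie") {
    item.quality = item.quality - 1;
  } else {
    item.quality = item.quality + 1;
  }
  return item;
}

// After: positive condition first, same behaviour.
function updateQualityAfter(item) {
  if (item.name === "Aged Brie") {
    item.quality = item.quality + 1;
  } else {
    item.quality = item.quality - 1;
  }
  return item;
}
```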
The process I followed was:
- First, create a complete test harness; those tests had to guarantee that none of my refactorings broke the current logic.
- Once I thought I had covered all the use cases in the requirements, I added the test coverage tool.
- The stats showed two things:
  - there were two lines of code not being tested
  - some code branches were not covered ('else' paths)
- This was a huge 'delight', because the coverage tool exposed some gaps in my test harness.
- I added a few more tests to achieve 100% test coverage.
- At this point I felt comfortable enough to start refactoring the business logic.
- But prior to that, I refactored my test harness: regrouping and renaming the tests.
- I kept refactoring until I felt the code was simple enough to understand and extend easily.
- Last part of the kata:
  - I wrote a new test for the new requirement (a sketch of what such a test might look like follows this list),
  - and added a single new line of code to implement the new logic.
  - This was my second 'delight' in this kata: seeing how easy it was to introduce new business logic, especially compared with the starting point.
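For illustration, a test for the "Conjured" items requirement (which the Gilded Rose requirements describe as degrading in quality twice as fast as normal items) might look roughly like this. The `Shop`/`Item` names, the module path, and the Jest-style `describe/it/expect` API are assumptions about the JavaScript version of the kata, not necessarily the exact code in this repository:

```js
const { Shop, Item } = require("../src/gilded_rose"); // path is an assumption

describe("Gilded Rose", function () {
  it("degrades Conjured items twice as fast as normal items", function () {
    const shop = new Shop([new Item("Conjured Mana Cake", 3, 6)]);
    const items = shop.updateQuality(); // assumes updateQuality() returns the updated items
    expect(items[0].quality).toBe(4); // a normal item would only drop to 5
  });
});
```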
This Kata was originally created by Terry Hughes (http://twitter.com/#!/TerryHughes). It is already on GitHub here. See also Bobby Johnson's description of the kata.
I translated the original C# into a few other languages (with a little help from my friends!), and slightly changed the starting position. This means I've actually done a small amount of refactoring already compared with the original form of the kata, and made it easier to get going with writing tests by giving you one failing unit test to start with. I also added test fixtures for Text-Based approval testing with TextTest (see the TextTests).
As Bobby Johnson points out in his article "Why Most Solutions to Gilded Rose Miss The Bigger Picture", it'll actually give you better practice at handling a legacy code situation if you do this Kata in the original C#. However, I think this kata is also really useful for practicing writing good tests using different frameworks and approaches, and the small changes I've made help with that. I think it's also interesting to compare what the refactored code and tests look like in different programming languages.
I wrote this article "Writing Good Tests for the Gilded Rose Kata" about how you could use this kata in a coding dojo.
The simplest way is to just clone the code and start hacking away, improving the design. You'll want to look at the "Gilded Rose Requirements", which explain what the code is for. I strongly advise you to also have some tests, so you can make sure you don't break the code while you refactor.
You could write some unit tests yourself, using the requirements to identify suitable test cases. I've provided a failing unit test in a popular test framework as a starting point for most languages.
Alternatively, use the "Text-Based" tests provided in this repository. (Read more about that in the next section)
Whichever testing approach you choose, the idea of the exercise is to do some deliberate practice, and improve your skills at designing test cases and refactoring. The idea is not to re-write the code from scratch, but rather to practice designing tests, taking small steps, running the tests often, and incrementally improving the design.
This is a testing approach which is very useful when refactoring legacy code. Before you change the code, you run it, and gather the output of the code as a plain text file. You review the text, and if it correctly describes the behaviour as you understand it, you can "approve" it, and save it as a "Golden Master". Then after you change the code, you run it again, and compare the new output against the Golden Master. Any differences, and the test fails.
It's basically the same idea as "assertEquals(expected, actual)" in a unit test, except the text you are comparing is typically much longer, and the "expected" value is saved from actual output, rather than being defined in advance.
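Stripped of tooling, the check is just a string comparison against a previously approved file. A minimal sketch in Node of the idea (the script and file names are assumptions, not necessarily what the tests in this repository use):

```js
const fs = require("fs");
const assert = require("assert");
const { execSync } = require("child_process");

// Run the program under test and capture its plain-text output.
const actual = execSync("node texttest_fixture.js 30", { encoding: "utf8" });

// Compare against the previously approved output (the "Golden Master").
// Any difference fails the test.
const expected = fs.readFileSync("golden_master.txt", "utf8");
assert.strictEqual(actual, expected);
```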
Typically a piece of legacy code may not produce suitable textual output from the start, so you may need to modify it before you can write your first text-based approval test. That could involve inserting log statements into the code, or just writing a "main" method that executes the code and prints out what the result is afterwards. It's this latter approach we are using here to test GildedRose.
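A sketch of what such a "main" script could look like, assuming the `Shop`/`Item` classes of the JavaScript version of the kata (the module path, the item list, and the day loop are illustrative assumptions, not necessarily this repository's fixture):

```js
const { Shop, Item } = require("../src/gilded_rose"); // path is an assumption

const shop = new Shop([
  new Item("+5 Dexterity Vest", 10, 20),
  new Item("Aged Brie", 2, 0),
  new Item("Sulfuras, Hand of Ragnaros", 0, 80),
]);

// Print the state of every item for a few days; this text is what gets
// captured and approved as the Golden Master.
for (let day = 0; day <= 2; day++) {
  console.log(`-------- day ${day} --------`);
  for (const item of shop.items) {
    console.log(`${item.name}, ${item.sellIn}, ${item.quality}`);
  }
  shop.updateQuality();
}
```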
The Text-Based tests in this repository are designed to be used with the tool "TextTest" (http://texttest.org). This tool helps you to organize and run text-based tests. There is more information in the README file in the "texttests" subdirectory.
I've also set this kata up on cyber-dojo for several languages, so you can get going really quickly:
- JUnit, Java
- C#
- Ruby
- RSpec, Ruby
- Python
- Cucumber, Java - for this one I've also written some step definitions for you