Add crypto-square.json #250
Conversation
Mentioned in exercism#190, but I don't see that this has its own issue.

Currently 18 tracks implement this exercise. I did [a survey](https://gist.github.com/IanWhitney/4912b100cca0b02a5c3ce21096a3e673) of their test suites and found that they fell into two types.

Type One: Test Just Encoding
--

- [python](https://github.com/exercism/xpython/blob/master/exercises/crypto-square/crypto_square_test.py)
- [lisp](https://github.com/exercism/xlisp/blob/master/exercises/crypto-square/crypto-square-test.lisp)
- [go](https://github.com/exercism/xgo/blob/master/exercises/crypto-square/crypto_square_test.go)
- [elixir](https://github.com/exercism/xelixir/blob/master/exercises/crypto-square/crypto_square_test.exs)

These are the tracks that follow this approach. Just an `encode` method (or some alias) is implemented and tested.

Type Two: Test Intermediate Methods
--

The remaining 14 tracks follow this style. There's not a lot of variation between these suites. [Ruby](https://github.com/exercism/xruby/blob/master/exercises/crypto-square/crypto_square_test.rb) is a good, representative example. In these test suites several methods are implemented and tested:

- `normalized_plaintext`
- `size`
- `plaintext_segments`
- `ciphertext`
- `normalized_ciphertext`

Again, exact method names may vary.

What I did
--

In implementing this json file I followed the second type. I did this for a few reasons:

*It's already the majority*: With 14 of 18 tracks already implementing tests like this, there is some value in following the crowd.

*I think it's the best approach*: This one is more subjective, obviously. My problem with the Test Just Encoding approach is that there's a huge gap between starting the exercise and getting a useful passing test. Students have to implement the full algorithm to get any test passing. By breaking the steps down into smaller methods, each with its own tests, the lag time between starting to code and getting a passing test is smaller. And the tests are ordered so that each new method builds on the methods already built.

The downside of this approach, I think, is that we're doing a lot of the design up front. In the Test Just Encoding approach students can implement the algorithm using as many methods as they want. In the Test Intermediate Methods approach, students end up locked to the methods defined in the test suite. In this case I think the trade-off is worth it.

But that's just my opinion. My kata group also worked through this exercise. 3 people did it in Test Just Encoding languages (Elixir and Python). 2 people did it in Test Intermediate Methods languages (Ruby and JavaScript). Their opinions largely mirrored mine. Those who used Just Encoding found it a lot of work to get the 2nd test to pass (since the first test encodes an empty string). But once they got the 2nd test to pass, all tests passed. Those who used the Intermediate Methods approach found the steps between tests easier to manage and thought that this approach was better for learning.

Though, as an argument for Just Encoding, the Python people were impressed at the variety of designs people used to solve the problem. And our Elixir programmer liked that they could make up their own mind about internal implementations. A suggested middle ground was to have one exercise offer an Intermediate Methods test suite, while a later exercise could cover similar ground with a more free-form Just Encoding test suite.
Removed Tests
---

I dropped one set of tests that existed in the Test Intermediate Methods approach: `size`. I didn't see a reason for this method. I don't see it being used as part of a 'real' crypto library (though if your real crypto library is using Crypto Square then you probably have other problems). And I didn't see that testing it offered any useful tests not already provided by the `plaintext_segments` tests.

Tweaked Method Names
---

Method naming varies between current implementations and the Readme. I've tried to use method names that follow the Readme.

My methods (a sketch follows below):

- `normalized_plaintext`
- `plaintext_segments`
- `encoded`
- `ciphertext`

Terms used in the Readme:

- 'input is normalized'
- 'plaintext in a rectangle'
- 'encoded text'
- 'cyphertext' or 'encoded text chunks'

`plaintext_segments` is the method name that deviates most from the Readme. It comes from the current implementations and I could not think of a better name. Names = hard.
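To make the shape of a Test Intermediate Methods suite concrete, here is a minimal Python sketch of the four methods named above. The method names come from this PR; the implementation details (e.g. taking the column count as the ceiling of the square root) are my own reading of the sizing rule, not part of the JSON under review:

```python
from math import ceil, sqrt


def normalized_plaintext(plaintext: str) -> str:
    """Downcase and drop everything that is not a letter or a digit."""
    return "".join(ch.lower() for ch in plaintext if ch.isalnum())


def plaintext_segments(plaintext: str) -> list[str]:
    """Break the normalized text into the rows of an r x c rectangle
    with c >= r and c - r <= 1 (c = ceiling of the square root)."""
    text = normalized_plaintext(plaintext)
    if not text:
        return []
    columns = ceil(sqrt(len(text)))
    return [text[i:i + columns] for i in range(0, len(text), columns)]


def encoded(plaintext: str) -> str:
    """Read the rectangle column by column to produce the encoded text."""
    segments = plaintext_segments(plaintext)
    columns = len(segments[0]) if segments else 0
    return "".join(
        row[col]
        for col in range(columns)
        for row in segments
        if col < len(row)
    )


def ciphertext(plaintext: str) -> str:
    """The encoded text as space-separated chunks, one chunk per column."""
    segments = plaintext_segments(plaintext)
    columns = len(segments[0]) if segments else 0
    return " ".join(
        "".join(row[col] for row in segments if col < len(row))
        for col in range(columns)
    )


assert plaintext_segments("Ab Cd") == ["ab", "cd"]
assert encoded("Ab Cd") == "acbd"
assert ciphertext("Ab Cd") == "ac bd"
```

Note how each later method is a thin layer over the earlier ones, which is exactly the property that keeps the gap between passing tests small.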
I looked through this and think it looks good. I compared it to the Ruby track and only saw a minor difference. I think it will work well. I like the idea of doing a similar or even the same exercise with a different driving set of tests: guiding through the intermediary steps the first time, then allowing as much creativity as possible the second time. The second time you are familiar with the problem domain, so you aren't fighting restrictions that may be necessary for sanity on a first pass but are freeing to be without on a second.
Ok. That's one vote. I'll let this sit until Monday to see if any other @exercism/track-maintainers have suggestions. If it looks good, I'll merge!
Interesting discussion about whether to test intermediate methods or not. I made the opposite decision in largest-series-product a while back, but the circumstances there may differ from here. I did this exercise and didn't feel terribly about having the intermediate methods be tested. Why? I'm not sure. Maybe it's because I thought there probably wasn't going to be that much interesting variation. However, you've told me I'm wrong:

> the Python people were impressed at the variety of designs people used to solve the problem

For this exercise, I think I'm most convinced by you saying testing the intermediate methods is better for learning.
```json
{
  "description": "4 character plaintext results in an 2x2 rectangle",
  "plaintext": "Ab Cd",
  "expected": "[ab, cd]"
}
```
Hmm, this is a single string representing the list. I am wondering whether it would be easier for languages to handle it if it were a list of strings: `["ab", "cd"]`
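For illustration (my own sketch, not part of the PR), here are the two candidate shapes for `expected`, written as the Python values a track's test generator would see after parsing the JSON:

```python
# Current shape: one string that renders the list; each track must parse
# it or string-compare against it.
expected_single_string = "[ab, cd]"

# Suggested shape: a native JSON array of strings; most languages map this
# directly onto their own list/array type.
expected_list_of_strings = ["ab", "cd"]
```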
I think I've addressed @petertseng's comments. If there's anything else, let me know!
```json
    ]
  },
  "plaintext_segments": {
    "description": "The plaintext should be organized in to a rectangle. The size of the rectangle (`r x c`) should be decided by the length of the message, such that `c >= r` and `c - r <= 1`, where `c` is the number of columns and `r` is the number of rows."
```
indentation, I think (or my viewer is being weird)
Ran the whole file through jq to fix the formatting.
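As an aside, the sizing rule quoted in that `description` pins the rectangle down uniquely for any message length. A minimal sketch of the arithmetic (my own illustration, assuming the usual ceiling-of-square-root reading):

```python
from math import ceil, sqrt


def rectangle_size(length: int) -> tuple[int, int]:
    """Rows and columns of the smallest rectangle with c >= r and c - r <= 1."""
    if length == 0:
        return (0, 0)
    c = ceil(sqrt(length))  # columns: ceiling of the square root
    r = ceil(length / c)    # rows: just enough to hold every character
    return (r, c)


# 54 characters need 7 rows of 8 columns: 8 >= 7 and 8 - 7 <= 1
assert rectangle_size(54) == (7, 8)
# a perfect square is, well, square
assert rectangle_size(16) == (4, 4)
```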
This looks good to me now.
I'm ready to say 👍 (with squash)
Oh, was that the fancy new 'merge with squash' thing? I still haven't used that yet. Nice.
Yep. Trying to find one with rutabaga, but so far just squash.
Just like a squash, it presents you with a lot of information, some useful, some not so useful. If you don't expand the textarea you may miss it. It is but an irritation; you just have to remember.