
discussion: Should we check return values when there is an error? #275

Closed
petertseng opened this issue Apr 7, 2016 · 29 comments

@petertseng
Member

While making #274 for binary I realized that I had accidentally checked the expected output value in the case where an error is expected (Preserved at https://github.com/petertseng/xgo/commit/6ac976c6b2ac73e6dd03ccece31541e96ff03595, as opposed to what's currently on #274). That means if there is an error, you always had to return 0, err and not something else like 9001, err since the test will still expect that you return 0 even if there is an error.

But should that matter and should it be checked? I think the usual contract is "if there is a non-nil error, assume nothing about the normal output value", including not assuming it's the zero value for the type. The student shouldn't necessarily have to return any particular value in an error case.

This is probably pretty low priority, since I assume any test case that checks the output value in an error case is checking that it's the zero value and that students will probably independently choose to return the zero value anyway. But it's worth a discussion.

If we agree we shouldn't check the other output in error cases, I'll look through our tests and see which tests should be changed.
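The proposed contract can be sketched as a table test that compares the normal return value only when no error is expected. This is a toy example, not the real binary tests; ParseBinary and the deliberate 9001 return are illustrative:

```go
package main

import (
	"errors"
	"fmt"
	"strconv"
)

// ParseBinary stands in for an exercise solution. On error it
// deliberately returns a non-zero value, which the test below
// must not care about.
func ParseBinary(s string) (int, error) {
	n, err := strconv.ParseInt(s, 2, 64)
	if err != nil {
		return 9001, errors.New("invalid binary: " + s)
	}
	return int(n), nil
}

func main() {
	cases := []struct {
		in   string
		want int
		ok   bool // ok == false means an error is expected
	}{
		{"101", 5, true},
		{"2", 0, false},
	}
	for _, c := range cases {
		got, err := ParseBinary(c.in)
		switch {
		case c.ok && err != nil:
			fmt.Printf("ParseBinary(%q): unexpected error %v\n", c.in, err)
		case c.ok && got != c.want:
			fmt.Printf("ParseBinary(%q) = %d, want %d\n", c.in, got, c.want)
		case !c.ok && err == nil:
			fmt.Printf("ParseBinary(%q): expected an error\n", c.in)
		}
		// Note: in the !c.ok branches we never compare got, per the contract.
	}
	fmt.Println("done")
}
```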

@kytrinyx
Member

kytrinyx commented Apr 7, 2016

This is a good question.

I think the usual contract is "if there is a non-nil error, assume nothing about the normal output value"

Yes, I completely agree. I think that we should make sure that the second return value is an error type, though. I've seen people return something that is not an error where I would have expected it to be an error. And then, of course, check that it is not nil (but I don't think we should make any assumptions about the error beyond that).

@petertseng
Member Author

Okay. Next thing to do is to write up the list of exercises in which to make this change. I'll likely close this issue and open separate issues for each exercise so each can be closed individually.

@kytrinyx
Member

kytrinyx commented Apr 7, 2016

That sounds good.

@petertseng
Member Author

Oh! One thing I thought about when deciding how to do something you mentioned:

I think that we should make sure that the second return value is an error type, though.

Is the work done by the type system sufficient, or is there more to be done here? I believe that if the type is error, you wouldn't be able to return, say, a string or int in its place?

@kytrinyx
Member

kytrinyx commented Apr 7, 2016

That depends -- if we've defined the function or the type somewhere, then yeah, the type system is enough. If the only thing we've specified is that we expect two return values, then we could easily end up with strings or ints there.

@petertseng
Member Author

Ah, you're right, since the tests just check == nil or not! I didn't even think of that. OK, we could get the type system to help us in the tests by explicitly declaring var err error and assigning the error return value to it - this ensures that it is indeed of type error.

This might warrant an entire-track sweep. I wonder if we should add a mode of operation to blazon that posts one issue for each problem in a single track...
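A minimal sketch of that compile-time check. Halve is a hypothetical stand-in for a student solution; the blank-identifier assignment (the form the issue text below settles on, equivalent to declaring var err error and assigning to it) compiles only if the second return value satisfies the error interface:

```go
package main

import "fmt"

// Halve is a hypothetical exercise function; the name and logic
// are illustrative, not from any real exercise.
func Halve(n int) (int, error) {
	if n%2 != 0 {
		return 0, fmt.Errorf("%d is odd", n)
	}
	return n / 2, nil
}

func main() {
	got, err := Halve(3)
	// This line compiles only if err has static type error -- the
	// compile-time check discussed above. If a solution returned a
	// string or int here instead, the test file would not build.
	var _ error = err
	if err == nil {
		fmt.Println("expected an error, got", got)
	} else {
		fmt.Println("got an error, as expected")
	}
}
```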

@kytrinyx
Member

kytrinyx commented Apr 7, 2016

Ah, I like the var err error approach. That would fix the problem that I was worried about.

I wonder if we should add a mode of operation to blazon that posts one issue for each problem in a single track...

Yes, that is brilliant :)

I have to say that even though I've been messing up a bit with blazon (figuring out the process) it has made it so much easier to start dealing with some of these issues. Today and yesterday I went through all of the issues in x-common and made blazon issues for the ones that I thought were clear/good candidates, and I simply would not have done this if we didn't have blazon.

@petertseng
Member Author

A little intimidating to think that this will create 72 new issues, since we have that many exercises. I hope that GitHub won't mind that many requests being sent in a short time.

I also hope that it will turn out to be a good way to manage the work to be done for this. Will be an interesting experiment.

@kytrinyx
Member

kytrinyx commented Apr 8, 2016

I've been rate-limited by GitHub on occasion, but only when I don't use a personal access token.

Should we create a label for these issues? I think that would make it possible to do a search on the repository for all the open issues without that label.

@petertseng
Member Author

I think that would make it possible to do a search on the repository for all the open issues without that label.

Confirmed: I can search -label:enhancement and it excludes the one issue with that label.

Should we create a label for these issues?

Seems good. Good ideas seem not to be coming to me right now; do you have any?

per-exercise?

@kytrinyx
Member

kytrinyx commented Apr 8, 2016

per-exercise sounds good to me! If we come up with something better later we can edit the label, but nothing comes to mind immediately.

@petertseng
Member Author

Ah, right, we're always able to change our minds (so long as when we rename the label, issues with the label get the new label name). OK that's good, I have created https://github.com/exercism/xgo/labels/per-exercise then.

Let's take the time to draft up an issue text?

(N.B. I'm assuming that -track mode prepends the exercise slug to the subject line of each ticket it submits.)


Title:
(exercise slug): Improve error-handling behavior

Body:

As part of #275, we'd like to more closely follow idiomatic Go behavior in the face of errors.

If the exercise indicated in the title of this issue does not have error handling, then this ticket can be closed immediately.

If it does, then follow the steps below:

Ensure err return value is of type error

It's idiomatic for the error return value to be of type error.

Most tests simply check whether err == nil or not. If we don't verify the type of err, it could be any type that has nils.

To lean on the type system, add the following line after assigning err:

var _ error = err

See the hamming tests for an example.

Ignore the normal return value when an error is expected

The contract in Go is "if there's an error, assume nothing about the normal return value". Thus, we should only check the non-error return value when there is no error.

In some exercises we found that we were checking the normal return value, even when there was an error.

See the binary tests for an example of the behavior that we want.

Increment the test version

These changes break backwards compatibility. Bump the test version to let reviewers know that the exercise has changed.

This requires changing two constants:

  • targetTestVersion in the test file (*_test.go)
  • testVersion in the reference solution (example.go)

If the exercise does not have a test version, then add one, setting the value to 1.

See the version test in leap for an example.


There are potentially three sub-tasks per exercise, but they're all related and I didn't want to create 3*72 tickets.
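The version-check pattern from the third step can be sketched as follows. In the real layout the two constants live in separate files of the same package (the test file and example.go); they sit side by side here only so the sketch is self-contained, and the value 2 is illustrative:

```go
package main

import "fmt"

// targetTestVersion lives in the *_test.go file.
const targetTestVersion = 2

// testVersion lives in the reference solution (example.go).
const testVersion = 2

func main() {
	// The test fails when a solution was written against an older
	// version of the test suite.
	if testVersion != targetTestVersion {
		fmt.Printf("found testVersion = %v, want %v\n", testVersion, targetTestVersion)
		return
	}
	fmt.Println("test version matches")
}
```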

@petertseng
Member Author

(Above post was edited to add guidance on what to do if an exercise doesn't currently have a test version)

You'll notice that I wrote the issue text in a style that targets contributors who may not yet be familiar with the track. This is because I expect to need some help on these.

@kytrinyx
Member

kytrinyx commented Apr 9, 2016

You'll notice that I wrote the issue text in a style that targets contributors who may not yet be familiar with the track. This is because I expect to need some help on these.

Yeah, I think that wherever possible, we should aim to do this. I'd love to make it easy for people to jump in and grab small, isolated issues.

@kytrinyx
Member

kytrinyx commented Apr 9, 2016

Here are some suggested edits. I mostly tried to make the most important point come first, and to separate out the instructions so that they're a bit more scannable. In the links, I tried linking to the part of the file that's most relevant, so readers don't have to hunt for the most important information.


Ensure err return value is of type error

It's idiomatic for the error return value to be of type error.

Most tests simply check whether err == nil or not. If we don't verify the type of err, it could be any type that has nils.

To lean on the type system, add the following line after assigning err:

var _ error = err

See [link to hamming] for example.

Ignore the normal return value when an error is expected

The contract in Go is "if there's an error, assume nothing about the normal return value". Thus, we should only check the non-error return value when there is no error.

In some exercises we found that we were checking the normal return value, even when there was an error.

See the binary tests for an example.

Increment the test version

These changes break backwards compatibility. Bump the test version to let reviewers know that the exercise has changed.

This requires changing two constants:

  • targetTestVersion in the test file (*_test.go)
  • testVersion in the reference solution (example.go)

If the exercise does not have a test version, then add one, setting the value to 1.

See the version test in leap for an example.


@petertseng
Member Author

I like these because they're shorter but still get the point across (remember the quote about perfection being achieved when there is nothing left to take away).

I noticed that in general you tended to put a single motivating sentence first, then follow with what might need to be fixed. In contrast I tended to put both of these two things together in one paragraph, with the motivating sentence as the last sentence in it. In this case I think making the single sentence separate makes it clearer why we are doing this, so I took it.

(Edit: And now I see that I missed that you explained that yourself because I looked at the email which contained the first version of your comment before you had edited that in. Heh)

I of course made the link to hamming an actual link now that the line is merged.

The one thing I would like to improve: "see the binary tests" should make clear that binary demonstrates what we want, rather than what we don't want. (This is important to me because the "for an example" sentence comes right after "In some exercises...", which might invite the wrong interpretation.)

@petertseng
Member Author

Here's a thought... I may be able to get a more precise list of "what exercises have error handling?" by doing git grep -l error | cut -d/ -f1 | uniq. The idea is that if an exercise has error handling, it's pretty certain that at least its example solution contains the word error (since the example solution needs an error return value).

I think it's possible this list has false positives, but it at least has no false negatives, which is the important part. It's 31 exercises, so we get to save on 41 tickets. So I might try to use this list instead of the -track flag... or at least figure out if such a thing could fit into blazon.

@kytrinyx
Member

kytrinyx commented Apr 9, 2016

(Edit: And now I see that I missed that you explained that yourself because I looked at the email which contained the first version of your comment before you had edited that in. Heh)

Gaah. I really need to learn to edit first, post second!

Yeah, I've been trying to get better at clarifying why something matters, and separating out the why from the how.

I may be able to get a more precise list

Yes, this is an excellent idea. So, since it's just one track, and we know which track it is, it might be easier to just use hub and a bash script. Give me a few minutes to write this up.

@kytrinyx
Member

kytrinyx commented Apr 9, 2016

for dir in $(git grep -l error | cut -d/ -f1 | uniq); do
        MSG=$(cat ISSUE.md | sed -e "s/PLACEHOLDER/$dir/")
        echo "$MSG"
        hub issue create -m "$MSG"
done

If you run this from xgo/exercises/, and the text is pasted into ISSUE.md (with PLACEHOLDER where the problem slug should go), then this should work. It uses https://github.com/github/hub

@petertseng
Member Author

Let's not forget the -l flag to add the per-problem label
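The amended loop could look like the sketch below. The label name "per-exercise" and the ISSUE.md contents are assumptions; for demonstration it uses a fixed slug list and echoes the hub command instead of invoking it:

```shell
#!/bin/sh
# Sample template file; the real ISSUE.md holds the full issue body.
printf 'PLACEHOLDER: Improve error-handling behavior\n' > ISSUE.md

# Same templating loop as above, with hub's -l flag so each issue is
# labeled at creation time. Swap the fixed list for the git grep
# pipeline, and drop the echo, to run it for real.
for dir in binary hamming; do
        MSG=$(sed -e "s/PLACEHOLDER/$dir/" ISSUE.md)
        echo hub issue create -m "$MSG" -l per-exercise
done
```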

@petertseng
Member Author

OK, that takes care of that. I'm going to close this since the discussion has concluded. It will be useful to look at the above list of issues to see how far along we are, but we can reach this page via any of the individual issues as well.

@kytrinyx
Member

Let's not forget the -l flag to add the per-problem label

I actually woke up in the middle of the night thinking exactly that. First thing I did this morning was to come add that comment, and saw that you already had :)

@petertseng
Member Author

I actually woke up in the middle of the night thinking exactly that

Someone may get this mental image of you suddenly sitting bolt upright in bed, wide-eyed, shouting, "THE ISSUES MAY HAVE BEEN CREATED WITHOUT THE LABELS AND LIFE WILL BE TERRIBLE"

Yes it was an exaggeration and I don't think it happened quite that way. And yes I know it probably wouldn't have been that bad since we can just script applying the labels after the fact.

I'm not just posting this comment to share that mental image, I'm also now musing that we may need to do such a thing - all these issues could benefit from having the "good first patch" or "first timers only" label (or whatever we decide) in addition to the per-exercise label.

Going to have to do something with https://developer.github.com/v3/issues/labels/#add-labels-to-an-issue

@kytrinyx
Member

Yeah, and we could have searched for the exact title, and then done a "select all" and "apply label", which is why I waited until morning :)

I think we should use "good first patch" rather than "first-timers only", since "first-timers only" suggests that you should only do it if you have never contributed before. The other one suggests that it makes a good first patch, but could just as well be your fourth or fifth or sixtieth.

@petertseng
Member Author

I agree about "good first patch" as it seems less restrictive, and we don't have so many contributors that we need to restrict some to first-timers only.

@petertseng
Member Author

FYI I labeled them all "good first patch" (thanks for pointing me to the fact that I can do it in web UI, I might have gone the long way and tried to do it via API otherwise...!)

@petertseng
Member Author

Just coming in with some cleanup - I suspect these can be closed:

$ comm <(git grep -l error | cut -d/ -f1 | uniq) <(git grep -El 'func [A-Z].*error' | cut -d/ -f1 | uniq) -23
bank-account
circular-buffer
gigasecond
house
minesweeper
nucleotide-count
paasio
rna-transcription

Will check in a bit.
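A toy demonstration of the comm idiom above: comm -23 prints lines unique to the first input (both inputs must be sorted). The filenames and slugs here are invented; "bank-account" mentions error somewhere but has no exported error-returning function, so it survives the subtraction:

```shell
#!/bin/sh
# First list: exercises whose files mention "error" anywhere.
printf 'bank-account\nbinary\nhamming\n' > all-error-mentions.txt
# Second list: exercises with an exported function returning an error.
printf 'binary\nhamming\n' > error-returning.txt
# -2 suppresses lines unique to file2, -3 suppresses common lines,
# leaving only lines unique to file1.
comm -23 all-error-mentions.txt error-returning.txt   # prints: bank-account
```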

@petertseng
Member Author

Oh, I forgot about the receiver, as in func (t *Thing) Hi() error.

$ comm <(git grep -l error | cut -d/ -f1 | uniq) <(git grep -El 'func (\([^(]*\) )?[A-Z].*error' | cut -d/ -f1 | uniq) -23 
bank-account
gigasecond
house
rna-transcription

@petertseng
Member Author

Checked the issues and closed the ones that could be closed.
