discussion: Should we check return values when there is an error? #275
This is a good question.
Yes, I completely agree. I think that we should make sure that the second return value is an error type, though. I've seen people return something that is not an error where I would have expected it to be one. And then, of course, check that it is not nil (but I don't think we should make any assumptions about the error beyond that).
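To illustrate the pattern being discussed, here is a minimal sketch of a test that checks only that the second return value is a non-nil error, without assuming anything else about it. The function name `Distance` and its signature are assumptions for illustration, not necessarily the track's real exercise code.

```go
package main

import "fmt"

// Distance is a hypothetical exercise-style function: it returns a
// value and an error, and callers should only rely on the error being
// non-nil in the failure case.
func Distance(a, b string) (int, error) {
	if len(a) != len(b) {
		return 0, fmt.Errorf("strands must be of equal length")
	}
	count := 0
	for i := range a {
		if a[i] != b[i] {
			count++
		}
	}
	return count, nil
}

func main() {
	// The check asserts only that err is non-nil; it makes no further
	// assumptions about the error's type or message.
	if _, err := Distance("AAA", "A"); err == nil {
		fmt.Println("FAIL: expected a non-nil error for unequal lengths")
	} else {
		fmt.Println("ok: got a non-nil error")
	}
}
```

The point is that the test constrains the error's nil-ness but not its contents, matching the "no assumptions beyond non-nil" position above.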
Okay. Next thing to do is to write up the list of exercises in which to make this change. I'll likely close this issue and open separate issues for each exercise so each can be closed individually.
That sounds good.
Oh! One thing I thought about when deciding how to do something you mentioned:
Is the work done by the type system sufficient, or is there more to be done here? I believe that if the type is
That depends--if we've defined the function or the type somewhere, then yeah, the type system is enough. If the only thing we've specified is that we expect two return values, then we could easily end up with strings or ints there.
Ah, you're right, since the tests just check the return values. This might warrant an entire-track sweep. I wonder if we should add a mode of operation to blazon that posts one issue for each problem in a single track...
Ah, I like the var err error approach. That would fix the problem that I was worried about.
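A small sketch of why the `var err error` approach helps: with a plain short variable declaration, a solution whose second return value is not an error still compiles, whereas pre-declaring the variable as `error` would reject it at compile time. The function `Parse` and its signature are hypothetical, purely for illustration.

```go
package main

import "fmt"

// Parse is a hypothetical solution whose second return value is a
// string rather than an error -- the kind of mismatch discussed above.
func Parse(s string) (int, string) { return 0, "not-an-error: " + s }

func main() {
	// With a short variable declaration, a test compiles even though
	// the second value is a string, not an error:
	_, second := Parse("x")
	fmt.Println(second)

	// By contrast, declaring the variable with an explicit error type
	// first makes the compiler reject such a solution:
	//
	//     var err error
	//     _, err = Parse("x") // compile error: cannot use string as error
}
```

So the test's `var err error` line does the type-checking work that a bare two-value expectation cannot.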
Yes, that is brilliant :) I have to say that even though I've been messing up a bit with blazon (figuring out the process), it has made it so much easier to start dealing with some of these issues. Today and yesterday I went through all of the issues in x-common and made blazon issues for the ones that I thought were clear/good candidates, and I simply would not have done this if we didn't have blazon.
A little intimidating to think that this will create 72 new issues, since we have that many exercises. I hope that GitHub won't mind that many requests being sent in a short time. I also hope that it will turn out to be a good way to manage the work to be done for this. It will be an interesting experiment.
I've been rate-limited by GitHub on occasion, but only when I don't use a personal access token. Should we create a label for these issues? I think that would make it possible to do a search on the repository for all the open issues without that label.
Confirmed that I can search `-label:enhancement` and it will exclude the one issue with that label.
Seems good. Good ideas seem not to be coming to me right now; do you have any? per-exercise?
Ah, right, we're always able to change our minds (so long as when we rename the label, issues with the label get the new label name). OK, that's good; I have created https://github.com/exercism/xgo/labels/per-exercise then. Let's take the time to draft up an issue text? (n.b. it's assumed by me that

Title:

Body: As part of #275, we'd like to more closely follow idiomatic Go behavior in the face of errors. If the exercise indicated in the title of this issue does not have error handling, then this ticket can be closed immediately. If it does, then follow the below steps: Ensure
(Above post was edited to add guidance on what to do if an exercise doesn't currently have a test version.) You'll notice that I wrote the issue text in a style that targets contributors who may not yet be familiar with the track. This is because I expect to need some help on these.
Yeah, I think that wherever possible, we should aim to do this. I'd love to make it easy for people to jump in and grab small, isolated issues.
Here are some suggested edits; I mostly tried to make the most important point the first thing, and then separate out the instructions so that they're a bit more scannable. In the links, I tried linking to the part of the file that was most relevant, so they don't have to hunt for the most important information. Ensure
I like these because they're shorter but still get the point across (remember the quote about perfection being achieved when there is nothing left to take away). I noticed that in general you tended to put a single motivating sentence first, then follow with what might need to be fixed. In contrast, I tended to put both of these things together in one paragraph, with the motivating sentence as the last sentence in it. In this case I think making the single sentence separate makes it clearer why we are doing this, so I took it. (Edit: And now I see that I missed that you explained that yourself, because I looked at the email which contained the first version of your comment before you had edited that in. Heh.) I of course made the link to hamming an actual link now that the line is merged. The one thing I would like to improve: I would like "see the binary tests" to make clear that binary demonstrates what we want, rather than what we don't want (this is important to me because the "for an example" sentence comes right after "In some exercises...", which might cause the wrong interpretation).
Here's a thought... I may be able to get a more precise list of "what exercises have error handling?" by doing a search of the tests. I think it's possible this list has false positives, but it at least has no false negatives, which is the important part. It's 31 exercises, so we get to save on 41 tickets. So I might try to use this list instead of the full list of 72.
Gaah. I really need to learn to edit first, post second! Yeah, I've been trying to get better at clarifying why something matters, and separating out the why from the how.
Yes, this is an excellent idea. So, since it's just one track, and we know which track it is, it might be easier to just use
If you run this from
Let's not forget the
OK, that takes care of that. I'm going to close this since the discussion has concluded. It will be useful to look at the above list of issues to see how far along we are, but we can reach this page via any of the individual issues as well.
I actually woke up in the middle of the night thinking exactly that. First thing I did this morning was to come add that comment, and saw that you already had :) |
Someone may get this mental image of you suddenly sitting bolt upright in bed, wide-eyed, shouting, "THE ISSUES MAY HAVE BEEN CREATED WITHOUT THE LABELS AND LIFE WILL BE TERRIBLE". Yes, it was an exaggeration, and I don't think it happened quite that way. And yes, I know it probably wouldn't have been that bad, since we can just script applying the labels after the fact. I'm not just posting this comment to share that mental image; I'm also now musing that we may need to do such a thing: all these issues could benefit from having the "good first patch" or "first timers only" label (or whatever we decide) in addition to the per-exercise label. Going to have to do something with https://developer.github.com/v3/issues/labels/#add-labels-to-an-issue
Yeah, and we could have searched for the exact title, and then done a "select all" and "apply label", which is why I waited until morning :) I think we should use "good first patch" rather than "first-timers only", since "first-timers only" suggests that you should only do it if you have never contributed before. The other one suggests that it makes a good first patch, but could just as well be your fourth or fifth or sixtieth.
I agree about "good first patch" as it seems less restrictive, and we don't have so many contributors that we need to restrict some to first-timers only. |
FYI I labeled them all "good first patch" (thanks for pointing me to the fact that I can do it in the web UI; I might have gone the long way and tried to do it via the API otherwise...!)
Just coming in with some cleanup - I suspect these can be closed:
Will check in a bit.
Oh, I forgot about the receiver, as in
Checked the issues and closed the ones that could be closed.
While making #274 for binary I realized that I had accidentally checked the expected output value in the case where an error is expected (preserved at https://github.com/petertseng/xgo/commit/6ac976c6b2ac73e6dd03ccece31541e96ff03595, as opposed to what's currently on #274). That means that if there is an error, you always had to return `0, err` and not something else like `9001, err`, since the test will still expect that you return 0 even if there is an error. But should that matter, and should it be checked? I think the usual contract is "if there is a non-nil error, assume nothing about the normal output value", including not assuming it's the zero value for the type. The student shouldn't necessarily have to return any particular value in an error case.
This is probably pretty low priority, since I assume any test case that checks the output value in an error case is checking that it's the zero value and that students will probably independently choose to return the zero value anyway. But it's worth a discussion.
If we agree we shouldn't check the other output in error cases, I'll look through our tests and see which tests should be changed.
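To make the contract concrete, here is a sketch of a function that deliberately returns a junk value (`9001`) alongside its error, and a caller that ignores the other return value whenever the error is non-nil. `ParseBinary` and its behavior are assumptions for illustration, not the track's actual binary exercise tests.

```go
package main

import "fmt"

// ParseBinary is a hypothetical exercise-style function. On error it
// deliberately returns a non-zero first value, to demonstrate that
// tests following the "assume nothing on error" contract must not
// depend on that value.
func ParseBinary(s string) (int, error) {
	n := 0
	for _, c := range s {
		if c != '0' && c != '1' {
			// Junk value returned with the error: callers should ignore it.
			return 9001, fmt.Errorf("invalid binary digit %q", c)
		}
		n = n*2 + int(c-'0')
	}
	return n, nil
}

func main() {
	// Idiomatic check: when err is non-nil, discard the other value.
	if _, err := ParseBinary("10b1"); err != nil {
		fmt.Println("error case: value intentionally ignored")
	}
	if n, err := ParseBinary("101"); err == nil {
		fmt.Println(n) // 5
	}
}
```

A test written this way still passes whether the solution returns `0, err` or `9001, err`, which is exactly the freedom being proposed above.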