
Rational Numbers and Division by Zero #1681

Open
brocla opened this issue Oct 7, 2020 · 12 comments
Comments

@brocla

brocla commented Oct 7, 2020

Hi. First time here.

Should the Rational Numbers test suite check that an exception is raised for division by zero?

Normally, I would say that it is a detail that doesn't add to the concept, but Rational Numbers is a very mathy exercise.

(I saw this on the Python track. Not sure if the tests are the same on the other tracks)

@SaschaMann
Contributor

SaschaMann commented Oct 7, 2020

Hi and welcome! 👋

The behaviour of division by zero in rational numbers is inconsistent across languages' native implementations. Python's fractions module raises an error; Julia's rational number type does not: 1/0 can be interpreted as +Inf and -1/0 as -Inf. I'd argue it's up to the language to decide (but it would be a good test case to add once #1674 is resolved!)
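For example, here's a quick check of the Python behaviour against the standard library (the Julia behaviour is shown only in a comment, since this snippet is Python):

```python
from fractions import Fraction

# Python's fractions module raises on a zero denominator.
try:
    Fraction(1, 0)
except ZeroDivisionError as exc:
    print("Python raises ZeroDivisionError:", exc)

# In Julia, by contrast, constructing a rational with a zero
# denominator is allowed and behaves like an infinity:
#   julia> 1 // 0
#   1//0
```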

@ErikSchierboom
Member

After reading #1856, should this be closed?

@SaschaMann
Contributor

SaschaMann commented Jan 19, 2022

I think #1856 still allows for test cases to cover this as two mutually exclusive cases, one where 1/0 equals some kind of infinity value and one where it throws an error. That way maintainers are pointed towards deciding which behaviour they want on their track.
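A hypothetical sketch of how such a pair of mutually exclusive cases might look in canonical-data.json (the UUIDs are placeholders and the exact field values are invented for illustration; only the overall shape follows the problem-specifications conventions):

```json
{
  "comment": [
    "The following two cases are mutually exclusive.",
    "Implement exactly one of them, depending on how your track's",
    "rational number type handles division by zero."
  ],
  "cases": [
    {
      "uuid": "00000000-0000-0000-0000-000000000001",
      "description": "Divide by zero yields infinity",
      "property": "div",
      "input": { "r1": [1, 1], "r2": [0, 1] },
      "expected": "infinity"
    },
    {
      "uuid": "00000000-0000-0000-0000-000000000002",
      "description": "Divide by zero raises an error",
      "property": "div",
      "input": { "r1": [1, 1], "r2": [0, 1] },
      "expected": { "error": "division by zero" }
    }
  ]
}
```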

Note that #1736 also touches on this.

@ErikSchierboom
Member

I think #1856 still allows for test cases to cover this as two mutually exclusive cases, one where 1/0 equals some kind of infinity value and one where it throws an error.

Are you suggesting to add two test cases, one of which should only ever be implemented?

@SaschaMann
Contributor

Yes

@ErikSchierboom
Member

I'm not sure I really like that idea to be honest.

@SaschaMann
Contributor

Why not?

@ErikSchierboom
Member

We've never done that before and I find it unintuitive for implementers.

@SaschaMann
Contributor

I don't think it's that much different from a scenario. You go through the cases one by one, as you're already meant to do, and decide which one to implement. Here they'd both be right below each other, with a comment pointing out the exclusivity. I don't see how that is less intuitive than deciding whether to implement any other test case, or figuring out which reimplemented case to use.

@ErikSchierboom
Member

Well, let's just ask others what they think. CC @exercism/reviewers

@petertseng
Member

petertseng commented Jan 25, 2022

If I understand correctly, the core desire here is: For this input, the implementing track may make its own decision as to what they want the output to be.

Remember back in the days when I tried to do this for all-your-base by codifying the choice in the expected object in #1002? Okay, that didn't receive much support. Instead, what actually did get support was making a single choice in #1001 and making the following things clear (it does require that the person implementing the exercise for the track read the comments of the JSON file, though!):

  • What decision is there to be made?
  • What default choice has the JSON file taken?
  • How do you identify situations where this decision is applicable?
  • What is the alternative choice that a track could make? By implication, if a track wishes to take this alternative, they do so by identifying, via the previous bullet point, where the default choice was made, and then having the test generator or the person manually writing the tests make the alternative choice.

Not going to require anyone to do it this way since this stuff is all in the past and maybe we have some new understandings of ways to do things that mean a different way is better.

I understand that it's possible some may object to doing it this way, as making a default choice would seem to privilege it over the alternative, the objection being that tracks may just blithely go with the default instead of making their own reasoned decision. If you want to avoid that, maybe you could try doing something like #1002 then...

"expected": {
  "track_specific_decision": [
    "infinity",
    {"error": "division by zero"}
  ]
}
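To make the mechanics concrete, here is a minimal sketch of how a track's test generator might resolve such a #1002-style expected object. The `track_specific_decision` key and the `TRACK_CHOICE` index are hypothetical names invented for this illustration:

```python
# Hypothetical sketch: resolving a track-specific decision from a
# canonical-data case whose "expected" value offers multiple options.
case = {
    "input": {"num": 1, "den": 0},
    "expected": {
        "track_specific_decision": [
            "infinity",
            {"error": "division by zero"},
        ]
    },
}

# Each track picks one option; here we pretend this track raises an error.
TRACK_CHOICE = 1  # index into the list of options

expected = case["expected"]
if isinstance(expected, dict) and "track_specific_decision" in expected:
    expected = expected["track_specific_decision"][TRACK_CHOICE]

print(expected)  # {'error': 'division by zero'}
```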

@SaschaMann
Contributor

SaschaMann commented Jan 25, 2022

Thanks for summarising and linking existing discussion :)

Not going to require anyone to do it this way since this stuff is all in the past and maybe we have some new understandings of ways to do things that mean a different way is better.

I think the key difference is that nowadays, tracks must¹ decide whether to include each case individually anyway. Therefore there is no default in the same sense as it existed back then.

Footnotes

  1. "Test cases must all be considered optional, insomuch that a track should determine per test case whether to implement it or not."
