integer type style guidelines #24
Comments
I also disagree with the claim that you should use signed integers. The stated reasons to do so from that C++ Style Guide do not apply to Rust. The first reason is that it's easy to write a bad decrementing loop, but in Rust we don't use that type of loop.
Since Rust avoids some of the problems with unsigned integers, the risks are reduced. Still, underflow can happen when subtracting unsigned integers, and typing the values unsigned makes that underflow harder to detect. If a subtraction quietly wraps around, will you catch it with an assertion?
Underflow can happen when subtracting unsigned integers, yes, but it can also happen when subtracting (or adding!) signed integers. If you're worried about overflow/underflow, we have a set of "checked" math operations in `std::num`. And of course the main reason this is an issue in C is writing decrementing loops with unsigned counters.
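Since this reply leans on checked math, here is a minimal sketch of what that looks like. It uses today's inherent `checked_sub` method rather than the `std::num` traits the comment refers to:

```rust
fn main() {
    let len: u32 = 3;
    let pos: u32 = 5;

    // Plain `len - pos` would underflow: a panic in debug builds,
    // a silent wrap-around to a huge value in release builds.
    // Checked subtraction makes the failure case explicit instead.
    match len.checked_sub(pos) {
        Some(remaining) => println!("{} remaining", remaining),
        None => println!("underflow: pos ({}) exceeds len ({})", pos, len),
    }
}
```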
If the style guide recommends checked math operations for cases like unsigned subtraction, and programmers remember to use them, that would be a fine solution. One advantage of signed integers is that you can check an aggregate computation once, at the end. Also, it's more obvious what happened when an array bounds check complains about a negative value rather than a huge one.
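To illustrate the "check the aggregate once" point, here is a small sketch with a hypothetical `remaining_capacity` helper; the signed intermediate lets one assertion cover the whole computation:

```rust
fn remaining_capacity(base: i64, consumed: i64) -> i64 {
    // With signed intermediates, a single assertion on the aggregate result
    // is enough; a mistake shows up as an obviously wrong negative value
    // rather than a huge wrapped-around one.
    let remaining = base - consumed;
    assert!(remaining >= 0, "capacity went negative: {}", remaining);
    remaining
}

fn main() {
    println!("{}", remaining_capacity(100, 60)); // 40
    // remaining_capacity(100, 140) would fail the assertion with -40,
    // which is easier to diagnose than an out-of-bounds index near 2^64.
}
```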
The fundamental problem with signed integers is that they add potential runtime failures in a whole slew of places where unsigned integers would need no check at all. Basically, for every use of an unsigned integer today, if you convert it to take a signed integer, you likely need to add an assertion that the value is non-negative.
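As a sketch of that cost, consider a hypothetical `nth_byte` function whose index parameter is signed; every signature like it needs an explicit non-negativity check (and a cast) before it can index:

```rust
// Hypothetical API taking a signed index: the extra assertion and cast are
// exactly the kind of boilerplate this comment is worried about.
fn nth_byte(data: &[u8], index: i64) -> u8 {
    assert!(index >= 0, "index must be non-negative, got {}", index);
    data[index as usize]
}

fn main() {
    let data = [10u8, 20, 30];
    println!("{}", nth_byte(&data, 1)); // 20
    // nth_byte(&data, -1) would trip the assertion instead of wrapping.
}
```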
Converting a large unsigned value to a too-narrow signed integer is one example of the overflow/underflow that can happen in any arithmetic or conversion operation on integers that are too narrow. In theory we avoid that by picking wide enough integer types. Better yet, we'd be able to turn on checked math to detect these cases without every programmer manually adding comprehensive assertions. Then unsigned types provide both the documentation and the assertions, so when a program decrements too far or subtracts offsets in the wrong order, the author finds out while it's easy to debug and before there's collateral damage. I vote for this.

Lacking checked math, we'd better pick integer types that can hold the full range of intermediate values. When unexpected negative values are more likely than unexpectedly huge values, a sign bit is more useful than another magnitude bit. That's my thought.
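Here is a sketch of "pick integer types that can hold the range of intermediate values," using an `i64` intermediate plus a checked narrowing conversion; `u32::try_from` is modern Rust, not something from the RFCs under discussion:

```rust
use std::convert::TryFrom;

fn main() {
    let start: u32 = 10;
    let offset: u32 = 25;

    // Do the intermediate arithmetic in a wider signed type, so subtracting
    // offsets in the wrong order yields a visible negative value...
    let delta = i64::from(start) - i64::from(offset); // -15, not a wrap-around

    // ...and let the conversion back to the narrow unsigned type fail loudly
    // instead of silently truncating.
    match u32::try_from(delta) {
        Ok(d) => println!("delta = {}", d),
        Err(_) => println!("delta {} does not fit in u32", delta),
    }
}
```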
I see a lot of compelling arguments.
What we decide here is going to have an impact on the interface to some of the libraries, such as …

Nominating, suggest 1.0 P-backcompat-libs. (Oops, thought this was in the ….)
What is an argument for using …?
RFC PR #161 proposes some style guidelines, but the details depend on the acceptance and implementation of that RFC and/or RFC PR #146: Scoped attributes for checked arithmetic.
If RFC PR #161 is not accepted, I'd suggest these style guidelines:
- Use the `int` and `uint` types only for array indexing and similar purposes, that is, when you need number ranges that scale up with memory capacity. Despite the familiar names, these are not the "default," "native," or fastest integer types. They might not be the same size as C `int`. They could be 16 bits in small embedded devices.
- Otherwise, default to `i32`.
- A style guideline on using unsigned types for numbers that should never be negative:

That's because C lacks overflow detection. To quote Gábor Lehel:

Depending on the RFC PR #161 choices, the types `int` and `uint` might be renamed to, say, `index` and `uindex`, or specified to always be at least 32 bits. In the latter case, it's less bad to use `int` and `uint` for arithmetic.
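For reference, in the Rust that eventually shipped, `int` and `uint` became `isize` and `usize`; here is a small sketch of the suggested split under that renaming:

```rust
fn main() {
    let scores = [3_i32, -1, 7, 2];

    // usize (the old `uint`) for indexing and lengths, which scale with memory.
    let last: usize = scores.len() - 1;

    // i32 for ordinary arithmetic: a fixed, documented range that does not
    // depend on the target's pointer width.
    let total: i32 = scores.iter().sum();

    println!("last index = {}, scores[last] = {}, total = {}", last, scores[last], total);
}
```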