Scientific notation for integer literals #10154
Comments
I'm taking a stab at this. My plan is to implement support for u8, u16, u32, u64, u128, i8, i16, i32, i64, i128. |
You should be careful about 128-bit integers. We currently support those literals only in the 64-bit value range. See #8373. So essentially, for now they should use the same parsing logic as their 64-bit equivalents, just cast to the 128-bit type. |
@straight-shoota Thank you so much for the heads up on 128-bit integers. I'll review #8373 and try to implement it as you suggested here. |
I don't think this should be done. When someone reads an e in a number, in every other language it means a float. This is very confusing, and the use cases should be sparse. This would also be the only case where a literal that means a float without a suffix could still take an integer suffix. |
Just my two cents: |
@konovod Both expressions are very different regarding precision. Using integer literals with scientific notation is the whole point of this proposal, because converting float literals is imprecise. |
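To make the precision point concrete, here is a quick check in Python, whose `float` is the same IEEE 754 double as Crystal's `Float64` (the specific constant is just an illustration, not taken from this thread):

```python
# A double has a 53-bit mantissa, so not every integer above 2**53 is
# representable; converting a float literal back to an integer can
# therefore silently change the value.
exact = 10**23          # the integer we actually meant
via_float = int(1e23)   # what the float literal round-trips to
print(exact)            # 100000000000000000000000
print(via_float)        # 99999999999999991611392
print(via_float == exact)  # False
```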
You can always write the full integer, right? Any other language that has this feature? |
source: https://rosettacode.org/wiki/Literals/Integer (I have never heard of these languages before) |
Can you link to the part that shows that Jlang supports this? |
https://rosettacode.org/wiki/Literals/Integer#J
|
I'm also very curious why we are doing this, when I personally never needed it, but have needed byte literals for chars about a thousand times. |
Is J-Lang dynamic? Do you have type declarations there? |
There are no type declarations, but it internally still uses ints (and falls back to floats on overflow) |
I thought so, but couldn't actually find an example with loss.

```crystal
p 92e17.to_i64, Int64::MAX
p 18e18.to_u64, UInt64::MAX
```

Outputs
|
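For what it's worth, those two literals can also be checked in Python (likewise IEEE 754 doubles): both happen to be exact multiples of the double's spacing in their range, which is why no loss shows up for them.

```python
# 92e17 and 18e18 are exact multiples of the double's ulp in their
# ranges, so they round-trip through a float without loss.
print(int(92e17) == 92 * 10**17)  # True
print(int(18e18) == 18 * 10**18)  # True
```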
9020650833e9.to_i64 == 9020650832999999488

Python code to find these:

```python
import random

while True:
    m = random.randint(1, 10000000000)
    for e in range(4, 18):
        expr = f'{m}e{e}'
        x = eval(expr)
        if x < 2**63 and not str(int(x)).endswith('000'):
            print(expr, int(x))
```
|
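A deterministic variant of that search (my own sketch, comparing against the exact product instead of the trailing-zeros heuristic) confirms the example above:

```python
# Hypothetical deterministic version of the random search: scan a small
# mantissa range and flag float evaluations that do not equal the exact
# integer product.
hits = []
for m in range(9020650830, 9020650840):
    for e in range(4, 18):
        x = float(f"{m}e{e}")
        if x < 2**63 and int(x) != m * 10**e:
            hits.append((f"{m}e{e}", int(x)))
print(("9020650833e9", 9020650832999999488) in hits)  # True
```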
Would you not write that particular integer by hand? |
Yes, you definitely would. All I'm saying is |
Currently scientific notation is only supported for float literals:

```crystal
1e0 == 1.0
```

For some use cases with large numbers, it would be great if you could use scientific notation for integer literals. That's more concise than a literal with many digits or casting a float literal to int.

The default scientific notation literals should probably continue to be floats. A type suffix can already be used to designate a different type, but it only works for float types:

```crystal
1e0f32, 1e0f64
```

For integers it would just be

```crystal
1e0i32
```

(etc.). Obviously, only positive exponents would be valid for integer types.

Inspiration from https://forum.crystal-lang.org/t/constants-and-compiler/2814/3
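For illustration, the exact semantics such a literal could have (integer mantissa times a power of ten, never going through a float) can be sketched in Python. `parse_int_sci` is a hypothetical helper for this sketch, not part of any compiler:

```python
def parse_int_sci(literal: str) -> int:
    """Evaluate an integer scientific-notation literal like '92e17'
    exactly, without a float round-trip. Hypothetical helper for
    illustration; assumes non-negative mantissa and exponent."""
    mantissa, _, exponent = literal.lower().partition("e")
    return int(mantissa) * 10 ** int(exponent)

print(parse_int_sci("92e17"))  # 9200000000000000000
```

Because the multiplication is done in exact integer arithmetic, overflow of the target type can be diagnosed at compile time rather than silently rounded.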