Int128 support #8373
My concern regarding 32 bits is not the i386 arch but 32-bit ARM. Although we currently do not ship 32-bit builds, if we cut the only 32-bit distro it might be hard to get it back. If we demote 32 bits to a second-class citizen, we will stop shipping the compiler in 32 bits and stop actively fixing things for it; yet we can keep the existing codebase and accept PRs to make it work based on contributions. Some might think that Int128 could be included only on 64-bit machines, but we don't want pure-Crystal apps and shards whose compilation depends on the architecture. That is why either: a) 32 bits supports Int128, or b) 32-bit support is demoted to second-class. So, voting time?
Last option means no Crystal on Raspberry Pi with Raspbian? 😞
@vlazar IIRC Raspbian comes in a 64-bit flavor already (for RPi >= 4), but that would be correct for older devices :(
@Sija You are probably right, and I have a Pi 3 :(
I'm in favour of supporting Int128 on 32-bit architectures, but not at all costs. If it requires too much effort and keeps us away from working on other things, I suppose dropping 32-bit is reasonable. However, I'm also fine with a delayed resolution, meaning we won't implement Int128 for 32-bit right away but still keep 32-bit as a primary architecture for now (because it's costly to drop it and bring it back later). We'd still have to decide later whether we want to keep 32-bit support and invest in implementing Int128, or drop the architecture. We should also consider whether there are other features besides Int128 that might make 32-bit more costly to support; that would impact the decision on Int128.
I vote 😕 for discarding Int128 on 32-bit and ensuring Crystal works without it. Which might be the status quo. And yes, I see that this was excluded intentionally, I just disagree.
@straight-shoota That means either embedding or porting compiler-rt. Both have their tradeoffs. I prefer porting, at least as an initial default implementation, since it will likely be more portable. But going in this direction raised some questions; we need to get on the same page or accept the initial direction.
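(For context, a minimal sketch of what the "embedding" route could mean: calling one of compiler-rt's 128-bit builtins directly instead of reimplementing it in Crystal. `__multi3` is the real compiler-rt/libgcc symbol for 128-bit multiplication; binding and calling it from Crystal like this is an illustrative assumption, not something proposed in the thread.)

```crystal
# Illustrative only: bind compiler-rt's 128-bit multiply builtin.
# Requires compiler-rt or libgcc to be linked; normally LLVM emits
# this call itself when lowering i128 multiplication.
lib LibCompilerRT
  fun __multi3(a : Int128, b : Int128) : Int128
end

a = 1_000_000_000_000_i128
puts LibCompilerRT.__multi3(a, a) # => 1000000000000000000000000
```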
That is not an option @oprypin, because that is actually the current status, where Int128 is not a first-class citizen: there are no literals, and shards and apps that use the std-lib's public API compile depending on the arch. We are trying to have a clear message on this feature: either it works on the main supported platforms or it is removed. If we remove 32-bit and complete the support for Int128, cross-compiling from 64-bit to 32-bit should still be possible as long as compiler-rt is linked, I think. So it might not be that terrible, but that lineage of targets, at least initially, would not be managed by us in this scenario.
I don't understand why these decisions mix Int128 and 32-bit support. Can we decide whether we want Int128 or not, and separately whether we want to support 32-bit platforms? My vote is to remove Int128. I still don't know about 32-bit platforms.
There is a bit of tension between the two, together with the overflow semantics. The overflow of Int64 on 32-bit platforms has the same issue, but since it's on a smaller scale nobody complained that much. The criteria could be the same: porting. Dropping 32 bits would allow us to easily support 128 bits in the compiler itself, which might become a bottleneck for 32 bits later, but as long as compiler-rt is linked maybe we are fine.
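(To make the overflow point concrete, here is standard Crystal behavior for checked vs. wrapping arithmetic; on 32-bit targets the checks for types wider than the native word must lower to operations provided by the backend or a runtime library such as compiler-rt.)

```crystal
# `+` raises on overflow; `&+` wraps silently.
x = Int64::MAX

begin
  x + 1
rescue ex : OverflowError
  puts "overflow: #{ex.message}"
end

puts x &+ 1 # => -9223372036854775808 (wraps to Int64::MIN)
```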
Is support for Int128 in the parser still planned before v1.0? I've noticed there hasn't been much traction here for a while, but having support for Int128 would really help me reduce my reliance on BigInt in a project I'm working on.
No.
This project has had a lot done on it and has had a few working PRs. It is too hard to review, and breaking it into pieces has led to people not really being satisfied with merging it. Because the work is so old it probably has to be mostly redone, but I don't think this will get merged without a plan and support from someone on the core team.
The way I understand it, we're ultimately missing a decision on how to move on with 128-bit integers and 32-bit platforms. The options, as far as I can see (ordered by amount of 128-bit integer support), range from a) full support on all platforms, through intermediate levels of platform-dependent support, down to d) removing 128-bit integers entirely. I think d) is pretty much out of the question; I just included it for completeness. There's a broad consensus that Crystal should have support for 128-bit integers. All the other options seem viable to me. a) would certainly be the best solution, but it also requires a lot of work. b) is debatable because we'd have to accept supporting different language features on different platforms. Not sure if this is a good idea, but what about adding 128-bit integers behind a compile-time flag? That would keep the premise that Crystal code compiles independent of the platform by default, but you could use 128-bit integers on supported platforms if you opt in.
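(A minimal sketch of what that opt-in could look like. The `flag?` macro method and user-defined `-D` flags are real Crystal features; the flag name `int128` and the gating policy are assumptions for illustration.)

```crystal
# Hypothetical: build with `crystal build -Dint128` to opt in to 128-bit types.
{% if flag?(:int128) %}
  alias WideInt = Int128
{% else %}
  alias WideInt = Int64
{% end %}

def mix(a : WideInt, b : WideInt) : WideInt
  a &* 31 &+ b
end

puts mix(WideInt.new(2), WideInt.new(3)) # => 65
```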
Why? I added Int128 support on a whim because one day I was bored and wanted to see how easy or hard it was. It worked right away... and then it turned out that it's really hard to achieve fully. If I had woken up in a different mood that day, there would be no Int128 support and nobody would be talking about it, or at least about how to move it forward (maybe about how to initially implement it, but there wouldn't be anyone to do it). Go is extremely popular and it doesn't have Int128. Surely Int128 is not really needed? Of course it's useful, but I'm not sure how much. I wouldn't discard option d, even though everyone else discards it immediately. Again, I regret my past self's decision and wish I hadn't implemented it in the first place.
Fair enough. It would nevertheless be one of the least appreciated options ;)
I mean, I would not be too upset. I think it is a really cool feature, and I got involved because I found a use for it. If we find a path forward I would be interested in contributing and using it. You should wake up and do more of these.
Personally I would love to just have support for arbitrarily sized integers like Zig does, but since that's probably out of the question I'd settle for having Int128 completely supported. If someone could give a basic breakdown of what would be required to get it completely supported, I'd even be willing to put the work in myself. I guess the big question is how to come to a consensus on which method is the right one. Obviously option a would be great, but it wouldn't be terrible to have limited or no support on 32-bit platforms. That could always change later, and in the meantime most everyone is probably developing and shipping on 64-bit platforms as it is. I guess the other option could be to make BigInt more of a first-class citizen by supporting it in the parser somehow and making sure that you can do anything with BigInts that can be done with the Int primitives, but that seems like a bigger overall change.
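(For reference, the BigInt workaround this alludes to is already in the stdlib's `big` module; it works on all targets but is heap-allocated and has no literal syntax:)

```crystal
require "big"

# Arbitrary precision today: values beyond native literals must be
# built from strings or computed at runtime.
huge = BigInt.new("340282366920938463463374607431768211456") # 2**128
puts huge + 1
puts "18446744073709551616".to_big_i * 2 # 2**64, doubled
```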
Why do you want to keep supporting 32-bit platforms? They are dead.
@delef that is a good point.
32-bit is definitely not dead. Dying, yes, but not dead.
@watzon where is it used besides old Raspberries?
WebAssembly is 32-bit.
@delef embedded, microcomputers like the Raspberry Pi, WASM as @lbguilherme mentioned, and plenty of older hardware. Even if 32-bit platforms aren't generally being produced anymore, that doesn't mean they don't still exist and aren't in use. It's going to be a while yet before 32-bit is truly dead.
There is an important aspect of Int128 support to discuss: what happens with large literals that don't have a suffix specified. Should literals implicitly be allowed to become Int128/UInt128?

Test code for literals, will be referenced below:

```crystal
macro t(expr)
  %x = {{`python -c "print(#{expr.id})"`}}
  puts "| #{%x.to_s.rjust(40)} | #{{{expr}}.ljust(10)} | #{typeof(%x).to_s.rjust(7)} |"
end
```

**What we have now**

```crystal
t "2**31 - 1" # Int32
t "2**31"     # Int64
t "2**32 - 1" # Int64
t "2**32"     # Int64
t "2**63 - 1" # Int64
t "2**63"     # UInt64
t "2**64 - 1" # UInt64
#t "2**64"    # error
```

**Alternative 1 🎉** (What #11571 implements)

```crystal
t "2**31 - 1"  # Int32
t "2**31"      # Int64
t "2**32 - 1"  # Int64
t "2**32"      # Int64
t "2**63 - 1"  # Int64
t "2**63"      # UInt64
t "2**64 - 1"  # UInt64
t "2**64"      # Int128
t "2**127 - 1" # Int128
t "2**127"     # UInt128
t "2**128 - 1" # UInt128
#t "2**128"    # error
```

It is fully backwards-compatible and follows the pattern of inserting a UInt where possible.

**Alternative 2 😕** (purely hypothetical, backwards-incompatible part shown in bold)

That was kind of an idealistic alternative, because it commits the biggest sin in terms of backwards compatibility: the change in behavior isn't an error and is very implicit.

**Alternative 3 ❤️** (purely hypothetical, backwards-incompatible part shown in bold)

I think this is the state we should eventually be in.

**Alternative 4 🚀** (purely hypothetical, compatible compromise option)

Vote, if you'd like.
For the immediate first implementation of 128-bit integer literals, only Alternatives 1 or 4 are possible; 2 and 3 would be breaking changes. We can still consider them as an eventual goal (for 2.0 or later). I'm in favour of 4, and eventually probably 3. I think it makes sense to add type suffixes to all literals that are not the default type. For smaller types this is already necessary, because untyped literals get the default type unless they don't fit in Int32.
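(For illustration, the suffix convention being referred to, using today's typing rules from the tables above:)

```crystal
a = 1                   # Int32: the default type for untyped literals
b = 1_u8                # smaller types always need a suffix
c = 9223372036854775808 # 2**63: currently UInt64 without a suffix
d = 170141183460469231731687303715884105727_i128 # Int128::MAX, suffix required
```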
When I use a positive number literal that doesn't fit in a UInt64, what should the error message suggest? As UInt64 literals without a suffix should produce a warning in 1.4, using them is probably discouraged, but UInt64 is still the biggest supported integer without a suffix.
@BlobCodes let's say…
So, #11571 is set to be merged soon, and it implements Alternative 4. Literals that used to produce…
I think we need to add the warning in 1.3, since the path has already been chosen. What reason is there to wait for 1.4?
Tried implementing the warning, but the lexer doesn't have access to the program, and…

```crystal
0x8000_0000_0000_0000 # Warning: 9223372036854775808 doesn't fit in an Int64, try using the suffix u64 or i128
```
I think we should add the raw string to literals. Might even make sense to expose that in the macro API.
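(A hypothetical sketch of that macro-API idea. `NumberLiteral#raw` does not exist in Crystal's macro API today; it is purely an assumed name for "the literal's exact source spelling".)

```crystal
# Hypothetical: `lit.raw` would return the source spelling (e.g.
# "0x8000_0000_0000_0000"), letting tooling produce precise messages.
macro describe(lit)
  {% puts "literal #{lit.raw} has value #{lit}" %}
end
```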
Int128 support is good now, so I'm closing this. If there's something missing, we can open a specific issue.
This issue has been mentioned on Crystal Forum. There might be relevant details there:
While using Int128 in a project, I found it was not behaving as expected. Digging into this, I found bug report #7915. This is something I thought was listed as a TODO on the roadmap, but I don't see it there anymore.
I took over this project from someone else who abandoned it, and quickly found that 32-bit support was a massive pain. I don't think it would be too hard to add this for 64-bit machines only. I am currently working on a PR to add compiler-rt support for 32-bit machines to support Int128. I think adding compiler-rt could be done, but I want to make sure that, as a community, we will support 32-bit machines.
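(A rough sketch of what porting a compiler-rt builtin to Crystal can look like on a 32-bit target, where a 128-bit value is modeled as two 64-bit halves. The type name, helper name, and representation are illustrative assumptions, not the actual PR's code; `__ashlti3` is the real compiler-rt symbol this mirrors.)

```crystal
# Illustrative Crystal port of a compiler-rt-style builtin: 128-bit
# shift left over two UInt64 halves. Assumes 0 <= n < 128.
record U128, lo : UInt64, hi : UInt64

def shl128(x : U128, n : Int32) : U128
  return x if n == 0
  if n >= 64
    U128.new(0_u64, x.lo << (n - 64))
  else
    U128.new(x.lo << n, (x.hi << n) | (x.lo >> (64 - n)))
  end
end

puts shl128(U128.new(1_u64, 0_u64), 70).hi # => 64 (bit 70 lands at hi bit 6)
```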
Here are the PRs where people have started the Int128 work.
#5564
#7093
#8313
Is Crystal going to support 32-bit machines?
I want Int128, but is this a feature that should be supported in Crystal?