Unfortunately, this is a relatively sophisticated problem that has to balance several facts:
Strongly typed languages use extremely complex rules of integer type coercion, and in some cases these rules differ between languages. For example, what is the result of adding a signed 8-bit integer and an unsigned 16-bit integer? What can that result be assigned to, and thus converted to explicitly/implicitly?
Some language features are available only for certain integer types. For example, in Java one can't use 64-bit integers as array indexes. Both points are illustrated in the Java sketch below.
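To make these two facts concrete, here is a minimal, self-contained Java sketch (Java is used purely for illustration; C and C++ follow their own "usual arithmetic conversions", which on common platforms also widen both operands of `int8_t + uint16_t` to a plain signed `int`):

```java
public class IntCoercionDemo {
    public static void main(String[] args) {
        byte s8 = -1;       // signed 8-bit integer
        char u16 = 65535;   // Java's char is effectively an unsigned 16-bit integer

        // Binary numeric promotion: both operands are widened to int,
        // so the sum is an int (65534 here), not a byte or a char.
        int sum = s8 + u16;
        System.out.println(sum);    // 65534
        // byte bad = s8 + u16;     // would not compile: possible lossy conversion from int to byte

        // Array indexes must be (promotable to) int: a long index is rejected.
        int[] arr = new int[10];
        long idx = 3L;
        // arr[idx] = 42;           // would not compile: possible lossy conversion from long to int
        arr[(int) idx] = 42;        // an explicit narrowing cast is required
        System.out.println(arr[3]); // 42
    }
}
```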
KS's design decision was to ignore that complex set of rules altogether and just use a single "one size fits most" integer type per target language wherever any integer calculations are performed. It's what's called CalcIntType in the code, and given that we want all types to be determined at precompile time (i.e. we don't want type derivation to behave differently for every target language), it maps to what's considered the "most common" and "most usable" integer type for each language.
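As a rough sketch of what this boils down to for code generation (the concrete mappings below are illustrative placeholders only, not the compiler's actual table), every integer expression resolves to a single per-language calc type fixed before any target code is emitted:

```java
import java.util.Map;

// Hypothetical illustration of the "one size fits most" idea: one calc
// integer type per target language, chosen once at precompile time. The
// names below are placeholders, not ksc's real mapping table.
public class CalcIntTypeSketch {
    static final Map<String, String> CALC_INT_TYPE = Map.of(
        "java",       "int",      // placeholder
        "cpp_stl",    "int32_t",  // placeholder
        "csharp",     "int",      // placeholder
        "python",     "int",      // Python ints are arbitrary-precision anyway
        "javascript", "number"    // placeholder
    );

    public static void main(String[] args) {
        // Whatever the operand types of an integer expression are, the
        // generated code would use this one type for the result:
        System.out.println(CALC_INT_TYPE.get("java")); // int
    }
}
```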
We could probably alleviate this problem at least somewhat by using the "largest possible" integer type instead of the "one size fits most" one. That would raise some other problems, though...
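For instance (a hedged illustration of the trade-off, not actual compiler behavior): even the largest signed primitive in Java, `long`, cannot represent the full unsigned 64-bit range, and, as shown above, a 64-bit calc type would also force a narrowing cast on every array index:

```java
public class LargestTypeTradeoff {
    public static void main(String[] args) {
        // Even a 64-bit signed long cannot hold the whole u8 (unsigned
        // 64-bit) range: values above Long.MAX_VALUE wrap into negatives
        // and need special-case handling wherever they are printed,
        // compared or divided.
        long u64 = -1L;   // bit pattern 0xFFFFFFFFFFFFFFFF
        System.out.println(u64);                                // -1
        System.out.println(Long.toUnsignedString(u64));         // 18446744073709551615
        System.out.println(Long.compareUnsigned(u64, 1L) > 0);  // true, unlike u64 > 1L
    }
}
```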
From #1197, original by @crackedmind: