Unpopular opinion: i64, int64_t, Int64 and similar types should be named according to their actual meaning, Ring64.

Even better, all programming languages should have a Ring[N] type that provides unit, zero, addition and multiplication over a domain of N-bit strings, with the compiler applying proper optimizations when available (and requested).
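
(For illustration, a minimal sketch of what such a type could look like in Rust. The name Ring64 and its API are hypothetical; the modulo-2^64 semantics come from Rust's standard wrapping operations.)

```rust
// Hypothetical Ring64: addition and multiplication modulo 2^64,
// built on explicit wrapping operations (a sketch, not a real library type).
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
struct Ring64(u64);

impl Ring64 {
    const ZERO: Ring64 = Ring64(0); // additive identity
    const ONE: Ring64 = Ring64(1);  // multiplicative identity (unit)

    fn add(self, other: Ring64) -> Ring64 {
        Ring64(self.0.wrapping_add(other.0)) // always defined, wraps mod 2^64
    }

    fn mul(self, other: Ring64) -> Ring64 {
        Ring64(self.0.wrapping_mul(other.0)) // always defined, wraps mod 2^64
    }
}

fn main() {
    let x = Ring64(u64::MAX);
    assert_eq!(x.add(Ring64::ONE), Ring64::ZERO); // wraps around, as a ring should
    assert_eq!(x.mul(Ring64::ZERO), Ring64::ZERO);
}
```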

@Shamar except in C the result of signed overflow is undefined. So it's not a ring.

@newt

It IS a ring on most implementations, but there were good flexibility (and portability) reasons behind all behaviours left undefined in C.

Yet this is a good objection... for standard C.

@Shamar no, it's not a ring. And you shouldn't treat it like a ring. Wanna know why? Here's why.

They are both right, btw.

@newt

Never trust non-GPL code... 🤣

You are technically right, but I think they SHOULD be rings (and named accordingly), so that programmers would learn from the beginning to live with the clear semantics of rings, always aware of the risk of overflow.

@Shamar I'm not technically right. I'm just right. And no, both signed and unsigned integer overflow should throw an exception. Rust has done this right since one of the recent versions.

If you want rings, make them as separate types with different semantics.

Also also, if you deliberately write code assuming signed overflow is well defined, I'll personally find you and kick you in the nuts :comfyevil:
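
(To make the two points above concrete: as far as I know, with overflow checks enabled — the default in debug builds — Rust panics on i64 overflow, and the "ring as a separate type" already exists as std::num::Wrapping. A small sketch:)

```rust
use std::num::Wrapping;

fn main() {
    let x: i64 = i64::MAX;

    // With overflow checks enabled (the default in debug builds),
    // `x + 1` panics at runtime instead of silently wrapping:
    // let _boom = x + 1;

    // Explicit, always-defined alternatives:
    assert_eq!(x.checked_add(1), None);       // overflow reported as None
    assert_eq!(x.wrapping_add(1), i64::MIN);  // modular ("ring") result

    // The ring as a separate type, not called "int":
    assert_eq!(Wrapping(x) + Wrapping(1), Wrapping(i64::MIN));
}
```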

@newt

Uhm... no. You are not "right".

You are technically right because you narrowed the discussion to standard C. Within that one specific language, you are right that int64_t is not a ring.
(And I do the worst possible things with computers... things you can't even imagine... so start looking if you dare to know... 😉).

But I'd argue that the Rust choice is... arguable.

I mean, it's a choice, legitimate for its authors. BUT calling the same type Ring64 AND giving it ring semantics would have been an equally legitimate design choice.

The point is: which is better as a base type in a language, a bounded integer that throws an exception in some conditions, or a ring that consistently works as a ring and lets people build on top of its clear and predictable semantics?

I'd say the second.

It's a simpler solution, faster to learn and predictable.

(It's worth noting that I'm imagining a hypothetical language that is as simple and consistent as possible.)

@Shamar you know that you can have both, right? This is not a choice. If you want your rings, go have your rings. Just don't call them ints.

@newt

Uhm... I'm arguing that calling a bounded type "int" is a design error.

Take Wirth's Oberon: it uses INTEGER. At compile time you can specify the actual size.

Yet, it's NOT really an integer because it's bounded.

So if it's not an integer, why call it that?

@Shamar it is an integer. Ask any programmer. It's not an integer in the strict mathematical sense, but then again, nobody is arguing that it is.

Monads in Haskell aren't Monads in the categorical sense of the word, but so far the only people who are pissed off by this are a bunch of nitpicking nerds.

@newt

I'm a programmer. 😉

It's not an integer.
It's NOT a theoretical argument; indeed, overflows happen.

There are three possible solutions to this: force people to rationalize that "it's not an integer, but... who cares?", make it an actual integer (whatever that costs computationally... but you know... halting problem), or give it a different name.

The first increases cognitive load for no reason: it's accidental complexity or, if you prefer, a global technical debt.

Another example: division should not be defined on types that include zero. You shouldn't be able to construct a type that both contains a zero and has division defined on it (see the sketch after this post).

Now in a world with JavaScript and C++, we know programmers can rationalize basically anything.

But still... I think it's worth imagining better worlds.
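
(For the division point above, a minimal Rust sketch: the safe_div name is made up, but NonZeroI64 and checked_div are standard, and together they keep a zero from ever becoming a divisor.)

```rust
use std::num::NonZeroI64;

// Hypothetical helper: division is only offered when the divisor's type
// rules out zero; checked_div still guards the one remaining overflow
// case (i64::MIN / -1).
fn safe_div(a: i64, d: NonZeroI64) -> Option<i64> {
    a.checked_div(d.get())
}

fn main() {
    let two = NonZeroI64::new(2).unwrap();   // constructing a NonZero proves "not zero"
    assert_eq!(safe_div(10, two), Some(5));
    assert_eq!(NonZeroI64::new(0), None);    // zero simply cannot become a divisor
}
```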

@Shamar the problem is, in the vast majority of cases integer overflow is an undesired behaviour. I can only name a handful of cases where integer overflow is OK.

@newt

Exactly.
So why should we adopt this self-deception, calling something that is not an integer "integer", instead of building something that actually works as an integer?

Premature optimization?

Or maybe we think that programmers cannot properly handle rings (as they cannot handle overflow errors)?

I'd argue that such a belief is a little "paternalistic" and, well... wrong.

So if we know that people cannot properly handle such corner cases... why not make handling them mandatory from the start?
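
(One way to read that in code: if the base operation returns an Option, the compiler itself makes handling the corner case mandatory. A small Rust sketch, with add_quantities as a made-up example function built on checked arithmetic:)

```rust
// If addition returns Option, the overflow case cannot be ignored:
// the caller must decide what to do before using the result.
fn add_quantities(a: i64, b: i64) -> i64 {
    match a.checked_add(b) {
        Some(sum) => sum,
        None => i64::MAX, // the corner case is handled explicitly: here we saturate,
                          // but reporting an error or widening would do just as well
    }
}

fn main() {
    assert_eq!(add_quantities(1, 2), 3);
    assert_eq!(add_quantities(i64::MAX, 1), i64::MAX); // overflow handled, not ignored
}
```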
