@hobson @hackernews@die-partei.social Most people expect to deal with the same numbers they learnt about at school and for algebra to work the same way. That's what signed int represents: the whole numbers you learnt about at school, with the precondition that you stay away from large numbers. For the same reason JavaScript relies on IEEE floating point, Python and Common Lisp go for arbitrary precision, etc.

Teach people finite fields in school and suddenly unsigned will make a lot of sense to everyone, and it will be the obvious safe default. But as long as you are teaching the highly arcane "real" numbers as if they were the most natural thing, most people will not have a clue what's wrong with (a+b)/2 or what the point of that midpoint implementation is (which is explained here, and almost everywhere else, rather poorly for the same reason), and they will write less wrong code more often if they stick with int, or better double, or better yet a bignum.
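A minimal sketch of the (a+b)/2 pitfall, assuming 32-bit int and two non-negative values with a <= b (the function names are mine, not from the thread; C++20's std::midpoint handles the fully general case):

```cpp
#include <climits>
#include <cstdio>

// Naive midpoint: a + b overflows (undefined behaviour for signed int)
// when the sum exceeds INT_MAX, even though the true midpoint fits in int.
int midpoint_naive(int a, int b) {
    return (a + b) / 2;
}

// A common overflow-free formulation: add half the distance to the lower
// bound instead of summing the endpoints. Safe for 0 <= a <= b.
int midpoint_safe(int a, int b) {
    return a + (b - a) / 2;
}

int main() {
    int a = INT_MAX - 1, b = INT_MAX;
    // midpoint_naive(a, b) would overflow; midpoint_safe stays in range.
    std::printf("%d\n", midpoint_safe(a, b)); // prints 2147483646
}
```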

Aside from not making silly mistakes when you barely put a thought into the arithmetic and algebra you are dealing with, because you are conditioned to assume it's easy-peasy elementary-school stuff you shouldn't think too hard about, the strongest argument in C++ for using ints in contexts where you do not need to represent negative quantities is performance. Since an int conceptually represents a number that is not allowed to overflow (which you are expected to guarantee through logical invariants), the compiler has more freedom to optimise it as an abstract quantity, for example to use a machine register wider than the representable range requires, which will not wrap that range on overflow. In this sense the language is lacking unsigned types with undefined overflow and, for completeness, signed types with defined overflow.
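A small sketch of where that freedom shows up (my own example, not from the post): with a signed index the compiler may assume the counter never wraps, so it can widen it for 64-bit addressing or vectorise without guarding against wraparound; with a 32-bit unsigned index, wraparound at 2^32 is defined behaviour it must preserve.

```cpp
#include <cstdint>

// Signed index: overflow is undefined, so the compiler may treat `i` as a
// monotonically increasing abstract quantity, e.g. keep it in a 64-bit
// register for address computation or vectorise the loop freely.
void scale_signed(float* a, int n, float s) {
    for (int i = 0; i < n; ++i)
        a[i] *= s;
}

// Unsigned 32-bit index: wraparound modulo 2^32 is well defined, so the
// compiler has to preserve that behaviour, which can block the same widening.
void scale_unsigned(float* a, std::uint32_t n, float s) {
    for (std::uint32_t i = 0; i < n; ++i)
        a[i] *= s;
}
```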
