I once, many years ago, wrote something titled "Type Integer Considered Harmful". (This was way back during the 16-to-32-bit transition.) My position was that the user should declare integer types with ranges (as in Pascal and Ada), and that it was the compiler's job to ensure that intermediate values could not overflow unless the user-declared ranges would also be violated. Overflowing a user range would be an error. The goal was to get the same answer on all platforms, regardless of the underlying architecture.
The main implication is that expressions with more than one operator tend to need larger intermediate temporary values. (For the following examples, assume all the variables are the same integer type.) For "a = b * c", the expression "b * c" is limited by the size of "a", so you don't need a larger intermediate. But "a = (b * c)/d" requires a temporary big enough to handle "b * c", which may be bigger than "a". Compilers could impose some limit on how big an intermediate they supported.
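A minimal sketch of that wider-intermediate rule, assuming 16-bit user variables; the function name and the choice of i32 for the intermediate are my illustration, not output from any real compiler:

```rust
// "a = (b * c) / d" with 16-bit user variables: the product b * c can
// exceed i16 even when the final result fits, so evaluate it in a wider
// type and only report an error if the user-visible result overflows.
fn ranged_expr(b: i16, c: i16, d: i16) -> Option<i16> {
    let product = (b as i32) * (c as i32); // wider intermediate: cannot overflow
    let result = product / (d as i32);
    i16::try_from(result).ok() // None only if the final value is out of range
}

fn main() {
    // 300 * 200 = 60_000 overflows i16, but 60_000 / 100 = 600 fits.
    assert_eq!(ranged_expr(300, 200, 100), Some(600));
    // 300 * 300 / 1 = 90_000 does not fit in i16, so it is an error.
    assert_eq!(ranged_expr(300, 300, 1), None);
}
```

With this rule, both examples give the same answer on every platform: either the numerically correct result or an explicit out-of-range error, never a silently wrapped value.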
This hides the underlying machine architecture and makes arithmetic behave consistently. Either you get the right answer numerically, or you get an overflow exception.
Because C and C++ lacked integer ranges, this approach never got much traction. Word-size differences also became less of an issue once the 24-bit, 36-bit, 48-bit, and 60-bit machines died off in favor of the 32/64-bit standard, so it never became necessary. It's still a good way to think about integer arithmetic.
Especially in higher-level languages, I've wished language designers would move toward variable-size/bignum integers instead of fixed-size integers (Python does this, for example). It eliminates overflow, and with it the need to analyze each int to see whether it will overflow the type you're sticking it into.
In addition, I wouldn't mind having a "RangedInt<min, max>" type. If the bounds are tight enough, the compiler could just use the next-bigger machine integer type (and do bounds-checking, please!). I think an integral type that was always modulo the max would be useful in many applications as well (i.e., unsigned, and overflow is well-defined to wrap, but you explicitly opt in to this behavior). You can imagine:
int: signed, bounded only by memory
ranged_int<min, max>: integer type capable of holding anything in [min, max].
Over/underflow is an error (exception? panic? [1])
modulo_int<min, max>: unsigned, overflow wraps.
(Mathematicians probably have a better name here… "ring"?)
"usize" or "size_t": capable of holding any memory address, so useful for indexes.
native::uint8, native::uint16, etc: whatever your hardware gives you, if you really need it.
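The ranged_int entry above can be sketched with Rust const generics. The names (RangedInt, new, get) and the API are hypothetical, not from any existing library; a real design would also overload the arithmetic operators and pick its storage type from the bounds:

```rust
// Hypothetical ranged integer: a value guaranteed to lie in [MIN, MAX].
// Construction outside the range is an error rather than a wrap.
#[derive(Debug, Clone, Copy, PartialEq)]
struct RangedInt<const MIN: i64, const MAX: i64>(i64);

impl<const MIN: i64, const MAX: i64> RangedInt<MIN, MAX> {
    fn new(v: i64) -> Option<Self> {
        if v >= MIN && v <= MAX { Some(Self(v)) } else { None }
    }
    fn get(self) -> i64 {
        self.0
    }
}

fn main() {
    let day = RangedInt::<1, 31>::new(17).unwrap();
    assert_eq!(day.get(), 17);
    // Out of range: rejected at the boundary, not silently wrapped.
    assert!(RangedInt::<1, 31>::new(32).is_none());
}
```

Because the bounds are part of the type, the compiler (or a library) can see exactly which machine integer is big enough to hold any value in [min, max].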
The default type a new coder would reach for (int) won't overflow on them, although there are questions about what some_array[int_index] does, especially if the value overflows the index type.
If you want wrapping, use the "mod" or "%" operator. The compiler should recognize idioms such as "n = (n+1)%65536;" and generate fast code for them.
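The idiom is cheap to compile well when the modulus is a power of two: the division reduces to a bitmask. A small sketch (the function name is mine):

```rust
// The "(n + 1) % 65536" idiom. Since 65536 is a power of two, an
// optimizing compiler can lower the remainder to `(n + 1) & 0xFFFF`,
// i.e. ordinary 16-bit wraparound, with no division instruction.
fn next_mod(n: u32) -> u32 {
    (n + 1) % 65536
}

fn main() {
    assert_eq!(next_mod(0), 1);
    assert_eq!(next_mod(65535), 0); // wraps back to zero, explicitly
}
```

The point is that explicit modulo in the source costs nothing over implicit wraparound, while making the wrapping intent visible.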
I'd like to have your version of Int, but with a twist: let it be a compiled language, and let me have a compiler option to disable all the checks for release builds. Having all the bounds checks in place will hurt your performance way too much for performance-sensitive code - and chances are, if you care about ints vs. floats instead of using a language with a general 'number' type, you do care about performance.
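One existing mechanism close to this (my example, not something the comment proposes by name) is Rust's debug_assert!, which is compiled in for debug builds and compiled out when debug assertions are disabled, as they are by default in release builds:

```rust
// "Checks in debug, free in release": the range test below disappears
// entirely when the build disables debug assertions (the default for
// `cargo build --release`), leaving just a plain machine integer move.
fn store_ranged(v: i64, min: i64, max: i64) -> i64 {
    debug_assert!(v >= min && v <= max, "{v} outside [{min}, {max}]");
    v
}

fn main() {
    assert_eq!(store_ranged(42, 0, 100), 42);
}
```

Rust's built-in integer overflow checks work the same way: they panic in debug builds and are off by default in release builds, controllable with the overflow-checks profile setting.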