
I think, in theory, it would work regardless of the starting address, as long as you don't try to access the invalid address (which you wouldn't if indexing starts at 1: you would only ever access the valid addresses).


In theory, it’s not guaranteed to work at all. https://www.open-std.org/jtc1/sc22/wg14/www/docs/n2310.pdf#p... (emphasis added):

“In other words, if the expression P points to the i-th element of an array object, the expressions (P)+N (equivalently, N+(P)) and (P)-N (where N has the value n) point to, respectively, the i + n-th and i − n-th elements of the array object, *provided they exist*.”

[…]

“If both the pointer operand and the result point to elements of the same array object, or one past the last element of the array object, the evaluation shall not produce an overflow; *otherwise, the behavior is undefined*.”

In this case, the minus-one-th element doesn’t exist, so the expression

  int * b = a - 1;
triggers undefined behavior.

I think some compilers exploit this in practice to produce faster code (which, often, will not do what the programmer expects). Start reading at https://stackoverflow.com/questions/56360316/c-standard-rega... if you're sure they don't; I expect it will change your opinion.


That is a description of the commonly agreed-upon definition of the C abstract machine and language semantics. You could simply define the language another way with regard to this behaviour.

Not that I would want to do it; I think zero-based addressing is not very taxing, given the convenience of staying closer to how we think of memory addressing.




