Hacker News

Why should I as the programmer have to do different things just because the CPU has to do different things? If the logic of what I want to do is the same in multiple cases, then I only want to write it once and let the compiler figure out what to do each time I call it. (It's the whole reason I write and call functions in the first place, instead of tediously manually inlining the corresponding code at each instance!)


But in this case, it isn't. Two's complement signed fixed-size integers are completely different from their unsigned brethren, and confusing them is an endless source of bugs.


> But in this case, it isn't. Two's complement signed fixed-size integers are completely different from their unsigned brethren, and confusing them is an endless source of bugs.

They both support a meaningful left-bitshift operation, which is what the author wanted to abstract over.


But they do not support a common right-bitshift operation, nor sign extension, which is what both the compiler and the poster above you are trying to make clear.
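A minimal sketch of that difference, in Rust (where the shift behavior is chosen by the operand's type): the same bit pattern right-shifts to different results depending on signedness.

```rust
fn main() {
    let s: i8 = -8;          // 0b1111_1000 in two's complement
    let u: u8 = 0b1111_1000; // the same bit pattern, unsigned

    // Arithmetic shift: the sign bit is copied in from the left.
    assert_eq!(s >> 1, -4);            // 0b1111_1100
    // Logical shift: zeros are shifted in from the left.
    assert_eq!(u >> 1, 0b0111_1100);   // 124
}
```

So a generic "right shift" would silently mean two different machine operations, which is exactly the confusion being pointed out.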


An interface providing a generic left-shift doesn't have to provide right-shift at the same time. And after the shift, what the compiler is complaining about is not the arithmetic but only the type construction.


How would the compiler know what numbers to expect at runtime?


Either by specializing the generic code to the specific types at compile time, or by using dynamic dispatch. (Note that this is the same tradeoff you get if you write the non-generic code by hand, so generics don't create a tradeoff where there wasn't one already.)
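A hedged illustration of the two mechanisms in Rust (the function names here are made up for the example): a generic function is monomorphized into a separate copy per concrete type at compile time, while a trait object resolves the call through a vtable at runtime.

```rust
use std::fmt::Display;

// Specialized at compile time: the compiler emits one copy
// of this function per concrete type it is called with.
fn show_static<T: Display>(x: T) -> String {
    format!("{x}")
}

// Dynamic dispatch: a single copy, with the Display method
// looked up through a vtable at runtime.
fn show_dynamic(x: &dyn Display) -> String {
    format!("{x}")
}

fn main() {
    assert_eq!(show_static(42u8), "42");
    assert_eq!(show_static(-42i32), "-42");
    assert_eq!(show_dynamic(&42u8), "42");
}
```

Either way the generated code ends up equivalent to what you'd write by hand for each case, which is the "no new tradeoff" point.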


Because you're writing code for a computer and sometimes it's expected that you actually care about what the computer is doing with your code.


Again, would you say the same thing about registers versus stack allocation (or heap allocation)?

"The CPU handles it differently" isn't by itself a legitimate argument against abstraction.


This code is converting a bunch of bytes into an integer by bit shifting. So it's specific to the binary representation of ints in your language, and how types of various sizes and signedness interact.

Note that this doesn't depend on your CPU, but rather on how integers and their byte representations are specified in the language.

I think it's quite reasonable to expect the programmer to know what that means if they are writing bit-fiddling code like this.
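For concreteness, here's a sketch of the kind of byte-to-integer conversion being discussed (the helper name is made up): assembling a little-endian u32 from four bytes by shifting, which depends only on the language's specified integer representation, not the CPU.

```rust
// Hypothetical helper: build a little-endian u32 by bit shifting.
fn u32_from_le(bytes: [u8; 4]) -> u32 {
    (bytes[0] as u32)
        | (bytes[1] as u32) << 8
        | (bytes[2] as u32) << 16
        | (bytes[3] as u32) << 24
}

fn main() {
    assert_eq!(u32_from_le([0x78, 0x56, 0x34, 0x12]), 0x1234_5678);
    // The standard library spells this conversion u32::from_le_bytes.
    assert_eq!(u32::from_le_bytes([0x78, 0x56, 0x34, 0x12]), 0x1234_5678);
}
```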


None of that argues against abstraction. It's perfectly reasonable to want to write code that uses left bitshift and works on unsigned and signed 8, 16, 32, and 64 bit integers.
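The abstraction being asked for can be sketched in a few lines of Rust, since left shift has the same semantics for signed and unsigned types (the function name here is illustrative):

```rust
use std::ops::Shl;

// One generic left-shift that works for any integer type
// implementing Shl with a u32 shift amount.
fn shl<T: Shl<u32, Output = T>>(x: T, n: u32) -> T {
    x << n
}

fn main() {
    assert_eq!(shl(1u8, 3), 8);          // unsigned, 8-bit
    assert_eq!(shl(-1i64, 4), -16);      // signed, 64-bit
    assert_eq!(shl(0x0Fu32, 4), 0xF0);   // unsigned, 32-bit
}
```

The compiler specializes one definition per concrete type, which is exactly the "write it once" goal from the top of the thread.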



