That so many people in this thread argue about the higher-level language (missing the point) shows how few people have any real grasp of the underlying machine code.
Which is sad, because all code is still executed as machine instructions even when the developer does not see it or does not want to care.
We're 20 years past the point where you can expect everyone in tech to trace every instruction down to machine instructions. Higher level languages abstract away the need for it, and for the most part, we can rely on the authors of those languages to make many of the decisions that impact performance. The rest of us learn about these details on posts like this. It's not a sad fact, in fact it's probably one of the most useful things that's happened in tech. It frees people to think about other problems.
I barely touch assembly in my day to day work, but I do understand on a fairly deep level how a computer works, which I feel is extremely important and very frequently influences how I write high-level code.
Certainly one can bang out code their entire career without ever having a clue how machine code works, but I really wouldn't advise it. At worst it leads to total ignorance, and at best you accumulate a disconnected set of "best practices" as inscrutable lore handed down from on high.
When I did web dev it was more important to know how the web browser works than how machine code works.
It's not useful to know how the browser allocates memory or garbage-collects it; it's more useful to know that it has WebSockets. Don't assume that the things you know are relevant to other people.
You go as deep as you need to. My day job involves writing tons of Javascript and there is zero need for me to dive into the underlying processes until something goes wrong. So far the lowest I’ve ever needed to go was the node source.
It only stays relevant if the JavaScript is too slow. In the case of removing two elements from an array of length 8, it's not too slow to remove them one by one, since, again, it runs in microseconds.
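To make the point concrete, here's a minimal sketch (the array contents and removed values are made up for illustration) of removing two elements from a length-8 array the simple way; at this scale, any of the obvious approaches finishes in microseconds:

```javascript
// Remove two known values from a small array.
// At length 8, a straightforward filter is more than fast enough.
const arr = [3, 1, 4, 1, 5, 9, 2, 6];
const toRemove = new Set([9, 2]);

const result = arr.filter(x => !toRemove.has(x));
console.log(result); // [ 3, 1, 4, 1, 5, 6 ]
```

Worrying about how this compiles down to machine instructions only starts to pay off when the array has millions of elements or the operation sits in a hot loop.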
If you're not reasoning about how your code translates to logic gates, you're not a competent programmer. /s
Abstractions are good, and while we should be careful not to completely trust a brand new abstraction, high level languages are established enough that we don't need to worry about it.
Not sure if you were around 20 years ago, but you sure couldn't expect everyone in tech to understand machine instructions as well as the layers above, all the way up to corporate politics.
If this were the primary reason, it seems odd that some of the earliest popular languages used 1-based indexing in an era with much slower CPUs (<= 1 MIPS) and less available memory to store instruction code. If the authors of FORTRAN were comfortable with 1-indexing on the IBM 704 (40 KIPS) in 1957, it couldn't have been prohibitive. At the same time, as computer use cases became more interactive there may have been a greater focus on performance, but it is likely that the semantics of the high-level language were at least as important.
“machine code” isn't really a special zero point nowadays.
The code the programmer wrote is compiled to something such as LLVM IR, LLVM IR is further compiled by LLVM to assembly, this is further compiled by an assembler into machine code, and then the CPU further compiles this to its internal code as it executes it. “machine code” really is no more special in this chain of events than, say, LLVM IR.
I think this is the core point. Even though you are mechanically feeding machine code into the CPU, modern CPUs deeply transform your program before (or even while) running it, with potentially deep performance implications. With modern CPUs, the only choice we truly have is which compiler we trust to optimize our program.
It's slightly different - machine code is special because it's the visible API between our programming work and the CPU silicon. Microcode is a private implementation detail which you usually can't affect as a software developer. Machine code, though, is something you directly create when you write software.
Except if the CPU is microcoded, those instructions might not mean what you think they do, and you need something like Intel's VTune to actually understand what the CPU is doing with them.
It has, and if you manage to know everything on those pages, for every single processor on the market, while taking into consideration motherboard architectures, and OS workloads you're our hero.
We are way beyond MS-DOS days and Michael Abrash's books.
It was just the most basic example. Indices are connected with math all the time - consider implementing circular queues with 1-based arrays. Any arguments about what's _possible_ miss the point. Arguments about what _feels_ natural purport to know the human mind, which I find generally suspect. Mostly I think of computers as tools, and I'm interested in getting the most functionality out as smoothly as possible, not as slaves whose job it is to make my life worry-free.
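The circular-queue point can be sketched concretely. This is a minimal illustration (the helper names `next0`/`next1` and the buffer size are made up), showing that with 0-based indexing the wrap-around is one modulo, while 1-based indexing needs an extra shift in and out of 0-based space:

```javascript
// Index of the next slot in a circular buffer of size n.
const n = 8;

// 0-based indices 0..n-1: wrap-around is a single modulo.
const next0 = i => (i + 1) % n;

// 1-based indices 1..n: shift down, wrap, shift back up.
const next1 = i => ((i - 1 + 1) % n) + 1; // simplifies to (i % n) + 1

console.log(next0(7)); // 0  (wraps from the last slot)
console.log(next1(8)); // 1  (wraps from the last slot)
```

The 1-based version happens to simplify nicely for "advance by one", but more general index arithmetic (advance by k, distance between indices) keeps sprouting those -1/+1 adjustments.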