
> Modern hardware could be of course amazing at being fast, but nobody put in the effort into software designed for that goal.

A lot of effort has been thrown that way; it's just that the industry's definition of "go fast" is "lots of throughput", not the things you are looking for: goodput, low latency, low jitter.

We got faster mainframes instead of faster minicomputers - computers and network systems optimized for doing batch jobs.

We can submit a whole bunch of blocks and the graphics processing unit can display accelerated, smooth video for us. Or we can push a whole neural network to a tensor processing unit and have it do inference in very few operations, once the model is loaded. But both of those operations, while having smooth output, have horrible startup latency.
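The throughput/latency tension here can be sketched with a toy model (all the numbers below are made up for illustration): each submission to a device pays a fixed overhead (think kernel launch or model load), so batching amortizes that overhead and boosts throughput, but the first result arrives no sooner than the whole batch:

```python
import time

SUBMIT_OVERHEAD = 0.001  # hypothetical fixed cost per submission (e.g. a kernel launch)
PER_ITEM_COST = 0.0001   # hypothetical per-item compute cost

def process(batch):
    """Simulate one device call: fixed launch overhead plus per-item work."""
    time.sleep(SUBMIT_OVERHEAD + PER_ITEM_COST * len(batch))
    return [x * 2 for x in batch]

items = list(range(100))

# One item per submission: the first result comes back almost immediately,
# but the fixed overhead is paid 100 times, so total time is dominated by it.
start = time.perf_counter()
for x in items:
    process([x])
unbatched = time.perf_counter() - start

# One big batch: the overhead is paid once, so throughput is far better,
# but no result at all is available until the entire batch finishes.
start = time.perf_counter()
process(items)
batched = time.perf_counter() - start

print(f"unbatched total: {unbatched:.3f}s, batched total: {batched:.3f}s")
```

The batch run finishes much sooner in total, which is exactly why batch-oriented designs win on throughput benchmarks while feeling laggy at startup.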

I think it's very naive to call what the devices are today a "single computer" when, for a long while now, they've really been several interconnected computer components joined by lots of buffering.


