A hard-disk seek plus reading 1 MB sequentially is something like 30 times slower than reading from SSD, and 120 times slower than reading from RAM. Disk seeks are what really kill you: a single seek is 20 times slower than a roundtrip within the same datacenter.
Sending 1 MB of data over a 1 Gbps network after a disk seek and a sequential read is about 3.6 times slower than doing the same from SSD. For big files, with a server that has to handle many concurrent requests, you'll end up doing several disk seeks for the same file. And that's the ideal case: data on disk can get fragmented, so reading the same file does not guarantee a sequential read.
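The ratios above can be checked with a back-of-the-envelope calculation, using the widely circulated "latency numbers every programmer should know" (approximate figures; real hardware varies):

```python
# Approximate, commonly cited latency figures (assumptions, not measurements).
DISK_SEEK_MS     = 10.0   # one hard-disk seek
DISK_READ_1MB_MS = 20.0   # read 1 MB sequentially from disk
SSD_READ_1MB_MS  = 1.0    # read 1 MB sequentially from SSD
RAM_READ_1MB_MS  = 0.25   # read 1 MB sequentially from RAM
NET_1MB_MS       = 10.0   # send 1 MB over a 1 Gbps network
DC_ROUNDTRIP_MS  = 0.5    # roundtrip within the same datacenter

disk_ms = DISK_SEEK_MS + DISK_READ_1MB_MS            # 30 ms for seek + 1 MB read

print(disk_ms / SSD_READ_1MB_MS)                     # 30.0  -> ~30x slower than SSD
print(disk_ms / RAM_READ_1MB_MS)                     # 120.0 -> ~120x slower than RAM
print(DISK_SEEK_MS / DC_ROUNDTRIP_MS)                # 20.0  -> seek vs datacenter roundtrip
print((disk_ms + NET_1MB_MS) / (SSD_READ_1MB_MS + NET_1MB_MS))  # ~3.6x when serving over the network
```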
This is why Varnish does caching even when serving static files: if you want high performance with low latency, Varnish works better than something like Nginx.
As an example, right now I'm working on integrating with an OpenRTB ad-exchange platform. My server has to respond within a 100ms total roundtrip (the hard upper bound is 200ms, but they complain if you consistently take over 100ms). Suffice it to say that everything served has to already be in memory; doing anything else means that window cannot be met (a single disk seek alone can cost something like 10ms).
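The "everything already in memory" approach can be sketched as follows (a minimal illustration with hypothetical names, not the actual bidder code): load all assets once at startup, then serve requests from a plain in-memory lookup, which costs microseconds instead of the ~10ms of a disk seek.

```python
import time

# In-memory store, populated once at startup (hypothetical example).
CACHE = {}

def preload(assets):
    """Read everything into RAM before accepting traffic."""
    for name, payload in assets.items():
        CACHE[name] = payload

def serve(name):
    """Hot path: a pure in-memory lookup, no disk I/O."""
    return CACHE.get(name)

preload({"creative-123": b"<ad markup>"})

start = time.perf_counter()
response = serve("creative-123")
elapsed_ms = (time.perf_counter() - start) * 1000
print(response, f"{elapsed_ms:.3f} ms")  # lookup is a tiny fraction of the 100 ms budget
```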