same reaction, and the author not using ddrescue just makes this a tale about not following any sort of documentation when installing their distribution. There's really nothing in there that anyone should take away, besides making sure you haven't hacked up fstab and removed tmpfs.
Having a "real" /tmp is not an uncommon preference amongst sysadmins who've been at this for a while.
Too many things that can wedge a system leave the only evidence of why/how they did so in /tmp, so still having it around post-reboot can be a huge aid to root-cause analysis.
It's a non-trivial trade-off, but calling a deliberate choice of that trade-off "hacking up fstab" doesn't strike me as a remotely fair description of it.
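For what it's worth, on systemd-based distros a disk-backed /tmp doesn't even require touching fstab; a minimal sketch (unit and file names are the standard systemd ones, the tmpfiles line is illustrative):

```shell
# Stop systemd from mounting a tmpfs on /tmp; with tmp.mount masked,
# /tmp is just a directory on the root filesystem.
systemctl mask tmp.mount

# systemd-tmpfiles will still age out old files in /tmp by default.
# To keep them across reboots for post-mortem digging, override that in
# /etc/tmpfiles.d/tmp.conf (shadows the vendor tmp.conf):
#   d /tmp 1777 root root -
```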
Generally, it's things that write temp files to /tmp while processing data and manage not to log enough, because they wedge the system before it occurs to the code that anything has gone wrong enough to log.
Yes, absolutely, "bad software, no cookie," but the usual culprit is some sort of vendor binary where the poor sod running the system has no control over that.
BSD systems generally clean out an on-disk /tmp during the normal boot process, yes. There are ways around this, but when I've been responsible for babysitting craptastic vendorware it's always been on Linux or Solaris.
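To make that concrete: on FreeBSD the boot-time /tmp cleanup is an rc.conf knob, while OpenBSD's /etc/rc clears /tmp as part of its normal boot sequence. A sketch of the FreeBSD side:

```shell
# /etc/rc.conf -- ask the boot scripts to clear out /tmp on startup.
# Leave this unset (the default is NO) if you want /tmp preserved for
# post-reboot root-cause analysis.
clear_tmp_enable="YES"
```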
Personally I've (after quite some grumbling about it) accepted /tmp being on tmpfs and just live with it. My current source of crankiness is people who don't configure their systems to write to syslog: if the box gets wedged by an I/O storm, systemd will shoot systemd-journald in the head, and journald sometimes deletes all of your previous logs as it starts up.
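The mitigation being grumbled about here is journald's built-in syslog forwarding, so a traditional syslog daemon keeps its own copy of the logs regardless of what happens to the journal. A minimal sketch:

```shell
# /etc/systemd/journald.conf -- hand every message to a classic syslog
# daemon (rsyslog, syslog-ng, ...) in addition to the journal, so logs
# survive even if journald's own files get thrown away.
[Journal]
ForwardToSyslog=yes
# Keep the on-disk journal as well, rather than RAM-only:
Storage=persistent
```

Apply with `systemctl restart systemd-journald` (and make sure an actual syslog daemon is installed and running to receive the forwarded messages).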
One example that springs to mind was a vendor antivirus system that unpacked email attachments into /tmp - generally when it died on its arse the only way to figure out why was to dig into /tmp and look at what it had left, then try and infer backwards to the culprit email from there.
Yes, the problem isn't disk usage. The problem is that if journald's writes get too slow, a watchdog timeout causes systemd to assume it has crashed and shoot it in the head, which leaves the journal partly written; on restart, the new journald process throws away the old journal as corrupt.
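The watchdog in question is the standard systemd service watchdog (systemd-journald.service ships with a WatchdogSec setting), so one band-aid is a drop-in override relaxing it. A sketch, assuming the stock unit; the value is illustrative:

```shell
# /etc/systemd/system/systemd-journald.service.d/watchdog.conf
# Give journald much longer to respond before systemd declares it hung
# and kills it mid-write during an I/O storm.
[Service]
WatchdogSec=10min
```

Followed by `systemctl daemon-reload` and a journald restart. This only widens the window, of course; it doesn't fix the partly-written-journal-gets-discarded behaviour itself.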
(this may have been fixed in the last couple of years, but it leaves me somewhat untrusting of journald when it comes to actually being able to read my logs)