Well that is pretty provocative :-) Bryan might be surprised to learn that for the first 15 years of their existence, NetApp filers were unikernels in production. And they outperformed NFS servers hosted on general-purpose OSes quite handily throughout that entire time :-).
The trick, though, is that they did only one thing (network attached storage) and they did it very well. That same technique works well for a variety of network protocols (DNS, SMTP, etc.). But you can do it badly too. We had an orientation session at NetApp for new employees which helped them understand the difference between a computer and an appliance: the latter has a computer inside of it but isn't programmable.
When NetApp started publishing about how their filers worked, I saw it as a great demonstration of the power of computer science.
Their patents kept direct competitors away, but the core ideas of their write-anywhere filesystem had a big impact on an industry that went ahead anyway and implemented those ideas in kernels, userspace, etc.
Then came the SSD, and they had no brilliant idea for how to make more of that...
Had they had a more mainstream platform, could they have caught up by "moving code" instead of "moving data"? Because then you could run established software like Hadoop. If your core OS is Linux, Windows, or something else mainstream, you can get all kinds of software, but if you have to port it to something designed in 1993, you probably won't bother.
At risk of speaking for Bryan, I think the difference between a NetApp and a unikernel-in-a-hypervisor is the sharedness of it. Without taking a position on Bryan's article (it's always entertaining to read his thoughts), I think his point is that the advantages of a unikernel are largely washed away in a shared environment, and the disadvantages are emphasized.
While Bryan is somewhat bombastic (more fun to read), there's a lot of smart in this article, I think.
Some of the original "unikernel in a shared environment" research and development was done on IBM's VM system in the 70's. Their motivation was that many customers ran a dedicated application on what was then a mainframe to process their data (analogous to a unikernel today), and those customers wanted to consolidate their hardware. So they got a bigger mainframe (like an IBM System/370 at the time) and ran each of these dedicated applications in its own logical partition (LPAR). It brought three huge benefits to the table: error containment, hardware consolidation, and backward compatibility. IBM had experienced the same effect that we're seeing today: their new mainframes of the 70's were so much more powerful than the ones from the 50's and 60's, and, worse for IBM, the old machines running their dedicated apps were still perfectly adequate. So they developed a way to make computing more cost effective for their customers and at the same time opened the market further for their mainframes.
Today, a typical dual-core 8GB x86 machine can, as a dedicated machine, run a lot of things. At the same time, the evolution of open systems has brought "continuous configuration integration" into the mainstream: all major OSes, from OS X to Windows to Linux, have weekly, sometimes daily, reconfiguration events.
And while the number of changes in aggregate is high, the number of changes to any particular subsystem is low. Unikernels answer the need for a snapshot of the world that is stable enough to allow for better configuration management. Look at the example of the FreeBSD system taken down after 20 years. Some services can just run and run and run.
Image isolation is a real thing, and you can only be as secure as your underlying software and hardware allow, but it can also be a big boost to operational efficiency if it simplifies your security auditing and maintenance.
So my take on Bryan's article was that he came at the argument from one direction, which is fine, but to be more thorough it would help to look at it from several directions. What was worse was that he made some assertions (like unikernels never being in production) before defining precisely what he means by a unikernel, which leaves him wide open to counterexamples like NetApp and IBM's VM system.
The nice thing about computers these days is that many of the problems we experience have already been encountered in different forms and solved in different ways, and we can learn from that. The unikernel discussion is not complete without looking through the history of machines which are dedicated appliances (from routers, to filers, to switches, to security camera archivers).
Like most things, I don't think unikernels are a panacea but they also aren't the end of the world and have been applied in the past with great success.