
You have no idea how modern virtualisation works. Go read about hardware assisted virtualisation on x86/x86-64.
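On Linux you can check whether a CPU actually exposes these extensions (VT-x on Intel, AMD-V on AMD) by looking for the corresponding flags in /proc/cpuinfo — a quick illustrative sketch:

```shell
# Count logical CPUs advertising hardware-assisted virtualisation:
# "vmx" = Intel VT-x, "svm" = AMD-V. A count of 0 means no HW support
# (or virtualisation disabled in the firmware).
count=$(grep -c -E 'vmx|svm' /proc/cpuinfo || true)
echo "cores with hardware virtualisation flags: $count"
```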

Most server operators don't care about performance. They have performance coming out of their ears. They care about redundancy and maintenance, or, to put it another way, cost centres.

Your post is on the wrong side of history. Virtualisation is being rolled out on a massive scale right now. Essentially you can abstract your entire logical infrastructure away from your physical infrastructure.

You have a physical server die? The HV has already moved the image to a new node and started it. Before you even receive the e-mail notification the new server is already booting.

So now a hardware failure goes from being a massive panic, to being a small annoyance. You pull the dead hardware from the rack, and plug a new generic node in and that now becomes available for the HV to use.
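The failover flow described above can be sketched roughly as follows. This is a toy model, not any real hypervisor's API — all class and method names here are hypothetical, and it assumes the key property that makes this cheap in practice: images live on shared storage, so "moving" a VM means booting its image elsewhere, not copying it.

```python
# Toy sketch of hypervisor HA failover (hypothetical names, illustration only).
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    healthy: bool = True
    vms: list = field(default_factory=list)  # VM images running on this node

@dataclass
class Cluster:
    nodes: list

    def handle_failure(self, dead: Node) -> str:
        """Restart the dead node's VMs on the first healthy spare node."""
        spare = next(n for n in self.nodes if n.healthy and n is not dead)
        spare.vms.extend(dead.vms)  # images are on shared storage: no copy needed
        dead.vms = []
        return f"VMs restarted on {spare.name}"  # the admin email comes later

cluster = Cluster([Node("node1", vms=["web01"]), Node("node2")])
cluster.nodes[0].healthy = False
msg = cluster.handle_failure(cluster.nodes[0])
```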

You want to back up a server? Take a copy of the ENTIRE image in one go. You want to deploy a template? Well, that's trivial with images. You want to do change management with the servers? Just put the images in Git. Boom, done.
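A minimal shell sketch of those three operations. It uses a scratch directory and a dummy file in place of a real image store — every path here is made up for illustration:

```shell
set -e
VMDIR=$(mktemp -d)                    # stand-in for the real image store
dd if=/dev/zero of="$VMDIR/web01.img" bs=1k count=64 2>/dev/null  # dummy image

# Back up a server: copy the entire image in one go
cp "$VMDIR/web01.img" "$VMDIR/web01-backup.img"

# Deploy from a template: a new server is just a copy of a golden image
cp "$VMDIR/web01.img" "$VMDIR/web02.img"

# Change management: version the image in git
cd "$VMDIR"
git init -q .
git add web01.img
git -c user.email=ops@example.com -c user.name=ops commit -qm "baseline image"
```

In practice you would more likely version the image *build recipe* rather than the raw binary, since git handles multi-gigabyte binaries poorly, but the principle is the same.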

What you're suggesting is essentially taking the cheapest bits of server management (i.e. the physical hardware) and acting like they're the most expensive bits (i.e. people, time, and flexibility).



You know nothing about me.)

Automated server management was being done long before virtualization stacks emerged, and it is about utilizing monitoring and network boot.

What virtualization stack does Google use on its servers? None.


Google doesn't need virtualisation because Google is already deploying a large number of identical nodes which can be easily pulled and replaced (i.e. Google's infrastructure is identical to virtualisation, but without the need for it).

Most businesses don't have hundreds of identical servers and services. They have a few dozen very specific or niche ones which need high up-time. This is one area where virtualisation can play a great role.

Another example is a hosting provider that wants to distribute resources without any human intervention (e.g. virtual private servers, shared hosting, etc.).

All in all, you're now starting to see data centres turn into "dumb" hardware farms, with the logical design and deployment being handled upstream. This even extends to things like networking (routers, switches, etc. — all centrally controlled).


So, unless we are selling CPU hours, like owners of a mainframe or a hosting provider, we don't need any virtualization?

Is it possible that the inefficiency and added unnecessary complexity make virtualized servers unusable for the most common server tasks, due to I/O interference and cache/memory access complications?)


I never said anything about selling CPU hours. It has nothing to do with that. In fact my example about organisations directly contradicts that.

It has to do with organisation, and with being able to abstract logical servers away from physical hardware.

There is very little inefficiency (see the hardware assisted virtualisation point above) and very little overt complexity (go play with any modern hypervisor solution).

As I said above you don't understand how virtualisation works. Your points about "I/O interference" and "cache/memory access complications" just make you sound ignorant.


Well, I'm really ignorant when it comes to meaningless sentences like "being able to abstract logical servers away from physical hardware". You're probably right.

On the other hand, I've been involved in a few projects, including optimization of big centralized databases, so I think I know a bit about flows of data, access patterns, and where the bottlenecks are (hint: around serializing and scheduling low-level I/O operations).

Try to look beneath the surface structure that plain words are.)


You talk about edge cases. And you have yet to provide any data to back up your claims about the penalty of running in a virtual environment.


Google might not run the vast majority of their services on VMs, but they do use virtualization (Xen AFAIK).

They even developed a cluster management tool for Xen/KVM: http://code.google.com/p/ganeti/


>Go read about hardware assisted virtualisation on x86/x86-64.

To expand on this excellent and valid point: very cheap virtualisation has been available on a lot of other very solid and very productive architectures for decades; x86 was just catching up late. There are a whole lot of reasons speaking FOR virtualisation and only a few very specific applications where it might be a bad idea. I have no idea what the OP up there is on about; their objections make no sense.



