Physicalization vs. Virtualization

January 25, 2013

Physicalization, a rather ill-chosen name coined in 2009 for the practice of packing multiple physical machines into a single rack unit, is starting to become a viable alternative in some server scenarios. Although Moore’s Law suggests that increasing integration exponentially reduces costs, some jobs that demand a lot of I/O bandwidth can be served less expensively by many less-integrated processors. Physicalization can therefore actually reduce hardware costs: server processors sometimes cost more per core than energy-efficient laptop processors, and that difference can make up for the added cost of board-level integration.
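
To make the cost-per-core argument concrete, here is a hypothetical back-of-the-envelope comparison. Every price and core count below is invented purely for illustration, not taken from any vendor:

```python
# Hypothetical back-of-the-envelope comparison: none of these figures are real
# prices. They only illustrate how many small, less-integrated nodes can beat a
# big multi-core server on cost per core once the per-core price gap is larger
# than the extra per-board cost of all those small nodes.

server_cpu_price, server_cores = 800.0, 8    # hypothetical multi-core server CPU
laptop_cpu_price, laptop_cores = 100.0, 2    # hypothetical energy-efficient laptop CPU
extra_board_cost_per_node = 60.0             # hypothetical per-node NIC/board overhead

server_cost_per_core = server_cpu_price / server_cores

nodes_needed = server_cores // laptop_cores  # small nodes matching the core count
physicalized_cost_per_core = (
    nodes_needed * (laptop_cpu_price + extra_board_cost_per_node) / server_cores
)

print(f"server:       {server_cost_per_core:.0f} per core")        # 100 per core
print(f"physicalized: {physicalized_cost_per_core:.0f} per core")  #  80 per core
```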

Many I/O-bound applications and services stand to benefit from such physicalized environments, simply because each operating system instance runs on a processor with its own network interface card, host bus, and I/O subsystem, unlike a multi-core server, where a single I/O subsystem is shared among all the cores. The approach works especially well for I/O-intensive workloads because each physical server has its own I/O and does not have to contend with its neighbors, as happens in virtualized environments.
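
As a rough illustration of how one might observe that contention, here is a minimal, hypothetical probe (the file name, sizes, and overall setup are assumptions, not anything described in the article). Run one copy per OS instance: on a physicalized node each instance has its own disk and NIC, so per-instance throughput should hold steady; on a virtualized host, concurrent copies share one I/O subsystem and per-instance throughput drops.

```python
# io_probe.py — rough sketch for comparing per-instance sequential write
# throughput between dedicated-I/O nodes and a shared-I/O virtualized host.
import os
import time

CHUNK = b"\0" * (1 << 20)   # 1 MiB write buffer
TOTAL_MB = 512              # amount of data written per run
PATH = "io_probe.tmp"       # scratch file (hypothetical name)

def sequential_write_mb_per_s() -> float:
    """Write TOTAL_MB of data, force it to the device, and return MB/s."""
    start = time.perf_counter()
    with open(PATH, "wb") as f:
        for _ in range(TOTAL_MB):
            f.write(CHUNK)
        f.flush()
        os.fsync(f.fileno())  # flush to the device, not just the page cache
    elapsed = time.perf_counter() - start
    os.remove(PATH)
    return TOTAL_MB / elapsed

if __name__ == "__main__":
    print(f"sequential write: {sequential_write_mb_per_s():.1f} MB/s")
```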

Rebalancing the available performance toward I/O, making it better suited to workloads such as hosting and database applications, is one advantage of physicalization, but there are several others. Consumer components lower costs; fewer high-density servers exhaust the power and cooling budget in a small fraction of the available rack space; and datacenters can pack more independent servers into every rack unit, which makes it more affordable to offer dedicated hardware to web hosting clients and customers with similar needs.

Physicalization

Physicalization works out especially well for large Internet companies such as Facebook and Google because of the huge scale at which they run simple workloads. Once they realized that, the next logical step was to have hardware tailor-made to their specific needs.

Frank Frankovsky, a top hardware engineer at Facebook, explained it this way:

“There is an impedance mismatch between the speed at which software moves and the speed at which the configuration of hardware can move. Traditionally, we have been designing servers that have been pretty monolithic. Everything is bound to a PCB, wrapped in a set of sheet metal, that doesn’t really allow us good flexibility in the way you match the software to the set of hardware that is going to be applied to it. You shouldn’t have to change the whole system just to do a processor, memory, or I/O upgrade.”

As Vice President of Hardware Design and Supply Chain Operations at Facebook, Frank Frankovsky is at the heart of the Open Compute Project Foundation, serving as its Chairman and President. The Foundation’s newest design is a modularized server that lets you add or remove the processor at any time without affecting the rest of the hardware. This is revolutionary in the sense that, until now, if you wanted a new processor you practically needed a whole new server.

“By modularizing the design, you can rip and replace the bits that need to be upgraded, but you can leave the stuff that’s still good. Plus, you can better match your hardware to the software that it’s going to run,” Frankovsky explains.

Not only that, Frankovsky and his team are sharing their innovations with the whole world. For free. Their mission statement reads as follows:

The Open Compute Project Foundation is a rapidly growing community of engineers around the world whose mission is to design and enable the delivery of the most efficient server, storage and data center hardware designs for scalable computing. We believe that openly sharing ideas, specifications and other intellectual property is the key to maximizing innovation and reducing operational complexity in the scalable computing space. The Open Compute Project Foundation provides a structure in which individuals and organizations can share their intellectual property with Open Compute Projects.

Sounds too good to be true? It certainly does, but even if there is a hidden commercial interest we cannot yet fathom, the Open Compute Project seems to be headed in the right direction and guided by the right principles. A rarity in today’s computing world.