Physicalization Vs Virtualization

Physicalization, a rather ill-chosen name coined in 2009, refers to the practice of packing multiple small physical machines into a single rack unit, and it is starting to become a viable alternative in some server scenarios. Although Moore’s Law suggests that increasing integration exponentially reduces costs, some jobs that demand lots of I/O bandwidth can be served more cheaply by many less integrated processors. In this way physicalization can actually reduce hardware costs: server processors sometimes cost more per core than energy-efficient laptop processors, and that difference may make up for the added cost of board-level integration.
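The cost-per-core trade-off can be made concrete with a quick back-of-the-envelope calculation. All the prices and core counts below are illustrative assumptions, not real figures for any particular product:

```python
# Hypothetical cost-per-core comparison; every number here is an
# assumption chosen only to illustrate the arithmetic.
server_cpu_price = 2000.0   # assumed price of one server-class processor
server_cores = 8            # cores in that server processor
laptop_cpu_price = 300.0    # assumed price of one energy-efficient laptop processor
laptop_cores = 2            # cores in that laptop processor
board_overhead = 150.0      # assumed extra board-level integration cost per small node

server_cost_per_core = server_cpu_price / server_cores
physicalized_cost_per_core = (laptop_cpu_price + board_overhead) / laptop_cores

print(f"server:       ${server_cost_per_core:.2f} per core")
print(f"physicalized: ${physicalized_cost_per_core:.2f} per core")
```

With these assumed numbers the many-small-nodes approach comes out cheaper per core even after paying the integration overhead; with different prices the conclusion can easily flip, which is why physicalization only wins in some scenarios.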

Many I/O-bound applications and services stand to benefit from such physicalized environments, simply because each operating system instance runs on a processor with its own network interface card, host bus and I/O subsystem, unlike in a multi-core server, where a single I/O subsystem is shared among all the cores. This is why the approach works so well for I/O-intensive workloads: each physical server has dedicated I/O and does not have to contend with its neighbours, as happens in virtualized environments.
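The contention argument above can be sketched as a toy bandwidth model. The NIC bandwidth and instance count below are assumptions for illustration only:

```python
# Toy model of per-instance I/O bandwidth. In the virtualized case one
# NIC is shared by all guest instances; in the physicalized case each
# node has its own NIC. Numbers are assumed, not measured.
nic_bandwidth_gbps = 10.0   # assumed bandwidth of one network interface
instances = 8               # OS instances (guests, or physical nodes)

virtualized_per_instance = nic_bandwidth_gbps / instances   # shared I/O path
physicalized_per_instance = nic_bandwidth_gbps              # dedicated I/O path

print(virtualized_per_instance, physicalized_per_instance)  # 1.25 10.0
```

The model ignores hypervisor overhead and real traffic patterns, but it captures the basic point: under the shared path, worst-case per-instance bandwidth shrinks as instances are added, while dedicated I/O stays constant.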

Rebalancing the available performance toward I/O, making it more suitable for loads such as hosting and database applications, is one advantage of physicalization, but there are several others: lower costs through the use of consumer components; fewer high-density servers exhausting the power and cooling budget in a small fraction of the available rack space; and the ability for datacenters to pack more independent servers into every rack unit, which makes it more affordable to offer dedicated hardware to clients with web hosting and similar needs.


Physicalization works especially well for large Internet companies such as Facebook and Google because of the huge scale at which they run simple workloads. Once they realized this, the next logical step was to have the hardware tailor-built to their specific needs.

Frank Frankovsky, a top hardware engineer at Facebook, explained it this way:

“There is an impedance mismatch between the speed at which software moves and the speed at which the configuration of hardware can move. Traditionally, we have been designing servers that have been pretty monolithic. Everything is bound to a PCB, wrapped in a set of sheet metal, that doesn’t really allow us good flexibility in the way you match the software to the set of hardware that is going to be applied to it. You shouldn’t have to change the whole system just to do a processor, memory, or I/O upgrade.”

As Vice President of Hardware Design and Supply Chain Operations at Facebook, Frank Frankovsky is at the heart of Facebook’s Open Compute Project Foundation, serving as its Chairman and President. The Foundation’s newest design is a modularized server that lets you add or remove the processor at any time without affecting the rest of the hardware. This is revolutionary in the sense that, until now, if you wanted a new processor, you practically needed a complete new server.

“By modularizing the design, you can rip and replace the bits that need to be upgraded, but you can leave the stuff that’s still good. Plus, you can better match your hardware to the software that it’s going to run,” Frankovsky explains.

Not only that, Frankovsky and his team are sharing their innovations with the whole world. For free. Their mission statement reads as follows:

The Open Compute Project Foundation is a rapidly growing community of engineers around the world whose mission is to design and enable the delivery of the most efficient server, storage and data center hardware designs for scalable computing. We believe that openly sharing ideas, specifications and other intellectual property is the key to maximizing innovation and reducing operational complexity in the scalable computing space. The Open Compute Project Foundation provides a structure in which individuals and organizations can share their intellectual property with Open Compute Projects.

Sounds too good to be true? It certainly does, but even if there is a hidden commercial interest we cannot yet fathom, the Open Compute Project seems to be headed in the right direction and guided by the right principles. A rarity in today’s computing world.
