2011 was an interesting year for the container data center space. There were several new (modular) entrants, a maturing industry consensus on where these products fit into the grand scheme of data centers, and an evolution by the existing manufacturers toward the next generation of their initial offerings.
I will list the vendors that I know about, have met with, and have personally verified that their offering is in fact the real deal. This is a list, not a ranking, by the way. If you want my personal professional opinion on this, track me down off the record:
Cirrascale (f/k/a Verari)
Elliptical Mobile Solutions (EMS)
I am sure I will get calls from the vendors who read this post and want to make 'corrections' to my positioning and commentary, but I am vendor NEUTRAL. I have met with, seen, and in some cases sold one of these, so my experiences and comments are based on seeing, smelling, and experiencing the products and the people who work for the companies that make them. Everyone has their own secret sauce, I get it. I also get that I am one of the few people on the planet with any longevity covering and paying attention to this space who has never worked for a vendor, so my comments are solely mine, as are my opinions.
The overall maturity in the space has progressed from containers to a true modular design, the biggest difference being that modular units are the needed hybrid between the container and the traditional stick-built data center. Essentially they are a data center built in a factory, vs. a data center retrofitted on site.
The bigger issue in adoption of this class of data center solution, I believe, is that companies have no single organization to turn to for evaluating whether their needs are better served by a container, a modular design, or a traditional data center. Evaluating the solutions requires IT, facilities, real estate, electrical systems, cooling, site prep/construction, permitting, and finance skills. Since many large customers have people with these skillsets, it is arguably easier to tap into the internal knowledge base; however, politics can derail things quickly. That, and an IBM container salesperson does not get paid to tout the HP EcoPOD's features and benefits, so it is vendor lock-in from the get-go, and that scares the sh*t out of most CTOs/CIOs. They don't run just ___________ gear.
Let's get real for a moment - if the vendors wanted to eat their own dog food (or wear their own outfits), they would. HP would never build another data center, nor would IBM, I/O, or anyone else offering the physical and operational components under one roof. They would be consolidating in droves out of their facilities into containers, and that would be a clear and consistent path forward. It would also be the smartest thing financially they could do.
The only issue is that none of their larger customers run just that vendor's gear to support their business, so it's a religious sell to get someone to convert, or you're taking an oval peg and jamming it into a round hole.
I have spent a fair amount of time looking hard at modular solutions - very different from a container - since a modular unit is close to a traditional data center: it accepts 19- or 24-inch racks of ANY manufacturer's hardware that you would put in a stick-built facility. The biggest difference is that the data center is built in a factory, so in most cases there is nothing to tour ahead of time. For some people this is a deal breaker; for others it doesn't matter, since the absolute and measurable consistency of product, cost, and financial aspects trumps the inability to walk through something.
As for the financial aspects of all of these NSB (non-stick-built) solutions, a modular data center is TOUGH to beat. In one recent study I saw, the cost of a retrofit was ~$22M per megawatt. That is 4x the capital cost of the same power footprint in a modular solution. 400%, and that is not a typo. On the OpEx (power/cooling/operations) side, the modular designs consistently deliver a PUE of 1.2. Knowing what I know about air flow and pimping the data center floor with different efficiency-gaining tricks of the trade, I would think you could get even lower. Contrast that with the 2.0-3.0 PUE of traditional and older facilities and you can cut your power costs in half.
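To make the PUE claim concrete: PUE is total facility power divided by IT power, so the annual electricity bill scales linearly with it. A minimal sketch, using an illustrative IT load and power rate (the specific numbers here are my assumptions, not from any study):

```python
HOURS_PER_YEAR = 8760

def annual_power_cost(it_load_kw, pue, rate_per_kwh):
    """Annual electricity cost in dollars.

    Total facility draw = IT load x PUE, since PUE is defined as
    total facility power / IT equipment power.
    """
    return it_load_kw * pue * HOURS_PER_YEAR * rate_per_kwh

# Hypothetical 1 MW IT load at an assumed $0.05/kWh
legacy = annual_power_cost(1000, 2.4, 0.05)   # older facility, PUE 2.4
modular = annual_power_cost(1000, 1.2, 0.05)  # modular, PUE 1.2

# Halving PUE halves the power bill - the IT load never changed.
print(f"legacy: ${legacy:,.0f}/yr, modular: ${modular:,.0f}/yr")
```

The point of the sketch is simply that the savings are structural: cut PUE from 2.4 to 1.2 and the bill halves regardless of the rate you plug in.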
One of the most compelling examples I have thought about was based on a 20 megawatt proposed project. That's 20,000 kW - plus a greenfield build cost of ~$2,000/square foot. That's likely a 180,000 square foot building (conservatively). So using back-of-the-napkin math, that's roughly a $360 million project that has to be COMMITTED to up front. Yes, it would be built and financed in phases, but who wants a $360M obligation on their financial statement today? Especially when 20MW of a modular solution can be delivered for roughly $160M. No - not a typo. That is $200M less off the top and out of the gate. Operationally it's $14.2M/year in electricity (in North Carolina - a marketed cheap-power state) for a 2.0 PUE facility vs. $8.5M/year for a 1.2 PUE modular. That's roughly $6M/year less. So dollars in over a 10-year period is ~$500M vs. ~$245M. I am no mathematician, but that's a big difference.
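The back-of-the-napkin math above can be reproduced end to end. The power rate below (~$0.0405/kWh) is my own back-solve from the $14.2M figure in the post, not a quoted North Carolina tariff:

```python
HOURS_PER_YEAR = 8760

def ten_year_cost_musd(capex_musd, it_mw, pue, rate_per_kwh):
    """Capex plus 10 years of electricity, in millions of dollars."""
    annual_kwh = it_mw * 1000 * pue * HOURS_PER_YEAR
    annual_power_musd = annual_kwh * rate_per_kwh / 1e6
    return capex_musd + 10 * annual_power_musd

RATE = 0.0405  # assumed $/kWh, back-solved from the post's $14.2M/yr

# Stick-built: $360M capex, 20 MW IT load at 2.0 PUE
stick_built = ten_year_cost_musd(360, 20, 2.0, RATE)
# Modular: $160M capex, same 20 MW IT load at 1.2 PUE
modular = ten_year_cost_musd(160, 20, 1.2, RATE)

print(f"stick-built: ${stick_built:,.0f}M, modular: ${modular:,.0f}M")
# Lands at roughly $500M vs. $245M over ten years
```

Plugging in different rates or PUEs is the useful part: even at half the assumed power price, the $200M capex gap alone keeps modular well ahead.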
If you would like to discuss any of this in detail - ping me. I have sold a container to NASA, consulted on RFI and RFP documents for intelligence and military applications, and am truly vendor neutral. I am running out of reasons to believe that modular solutions are anything BUT the way to deliver data centers. Technically, financially, and environmentally. I love this evolution.
Others are taking note, as referenced by a study Digital Realty Trust commissioned to take the temperature of data center growth patterns. The interesting nugget was that 41% of companies surveyed were looking at containers and/or modular solutions. Smart move.