Lawyer Bait

The views expressed herein solely represent the author's personal views and opinions, not those of any other person or organization.

Monday, February 2, 2009

Stirring the Pod (Container) a Little...

I just read a piece put out by the folks over at Tier 1 Research, part of The 451 Group. They point to a gap in the understanding of a growing space - containers and their application and importance in the business of data centers.

Their biggest issues are with scalability and maintainability (see the full piece below). I don't get it. A snippet says:

"What happens when a server reaches catastrophic failure? Do the containers themselves get treated as a computing pool? If so, at what point does the container itself get replaced due to hardware failures? There are just so many unknowns right now with containers that T1R is a bit skeptical."

Interesting questions. But what happens when a server reaches catastrophic failure in a traditional data center? Why is a container any different?

Do containers themselves get treated as a computing pool? Duh. If you have a container occupying ~500 square feet that is the equivalent of 4,000 square feet of traditional floor space based on CPU density alone, I would think a pool of CPU cycles is a safe assumption. The quick math below makes the point.
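To make the density point concrete, here is a back-of-envelope sketch in Python. The square footages come from this post, and the server count uses T1R's own 2,000-3,000-per-container range from the report below; everything else is a rough illustrative assumption, not a vendor spec.

# Back-of-envelope math behind the "pool of CPU cycles" point.
# Square footages are from the post; the server count uses T1R's
# own 2,000-3,000-per-container range. All of it is rough.

container_sq_ft = 500        # footprint of a single container (~500 sq ft)
equivalent_dc_sq_ft = 4000   # traditional floor space it replaces, by CPU density

density_multiple = equivalent_dc_sq_ft / container_sq_ft
print(f"A container packs ~{density_multiple:.0f}x the compute per square foot")

servers_per_container = 2500  # midpoint of T1R's 2,000-3,000 figure
print(f"~{servers_per_container / container_sq_ft:.1f} servers/sq ft in the container "
      f"vs. ~{servers_per_container / equivalent_dc_sq_ft:.2f} on a traditional floor")

Eight times the compute per square foot is exactly what makes treating the whole box as one pool of cycles the natural operating model.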

Replacing a container is easier than replacing a data center. You can't ship an entire self-contained data center and just plug it in - with a container, you can. And asking at what point a container gets replaced due to hardware failures presumes these will fail more often than a data center would, which seems a bit presumptuous in such a new segment of the business. Why would they fail more? They are servers in a very dense, very efficient, controlled environment. My money is on them failing LESS, not more.

Scalability is a non-issue. Containers scale to over a dozen petabytes of storage per container. You can even stack them to save horizontal floor space - or, in Dell's case, to separate environmentals from compute.

For maintenance - T1R says there are some issues, yet does not point out a single one. Having vetted most of the containers on the market - those from Sun, Microsoft, HP, Dell, Rackable, and Verari - by speaking with the heads of all the product groups, the salespeople who talk to customers, and the facilities/install folks, I will tell you that the analysis that was done was PURELY SPECULATIVE and not based on fact or experiential knowledge. If there is an Achilles' heel for containers, it's that the ones that run water through them will have maintenance issues. Stuff grows in water. Verari uses a refrigerant-based solution, which is a step in a well-thought-out direction. But as for maintenance: if you can open a door, have serviced equipment in a traditional data center, and have worked on a server before, you can handle a container.

'There are just so many unknowns right now'? What?

Tier 1, you are a research firm - do the research. There are few unknowns in my book right now, other than when the containers and the equipment they contain get end-of-lifed. If I (a VP at a data center company) can call up Dell, Microsoft, Verari, Rackable, and Sun, get an audience, delve into the technology with them, talk to end customers, and actually figure out the good, the bad, and the ugly with containers, then surely you can too. And I can assure you that you stand to benefit more than I do from containers: it is a new research area for which people pay you money. I do this for fun, and my research is done on my own time. The benefit to me is credibility and pure exploration of an area so compelling from an efficiency, economics, and common-sense perspective that there are FEW unknowns at this point - assuming you know the data center business. I know enough to be dangerous, and enough to know what is good for my business and my customers.

Further on in the piece, economics and credit markets are discussed. Containers can be almost entirely OpEx: lease one, and the payments are spread over 3-5 years, or over whatever the useful life of the servers is for the application they serve. In other words, you don't need access to capital, nor do you need to spend CapEx right now on these. Lease vs. own is a smart decision right now, since containers are boxes of storage and CPU cycles and little else.
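Here is a minimal sketch of the lease-vs.-buy arithmetic. Every dollar figure and rate in it is an illustrative assumption, not vendor pricing:

# Illustrative lease-vs.-buy comparison for a container. All numbers
# below are assumptions for the sake of the example, not vendor pricing.

purchase_price = 2_500_000   # assumed cost of a fully loaded container
useful_life_years = 4        # inside the 3-5 year window discussed above
lease_premium = 0.08         # assumed annual financing premium baked into the lease

# Buying: one CapEx hit up front, then straight-line depreciation.
annual_depreciation = purchase_price / useful_life_years

# Leasing: zero CapEx; a predictable annual OpEx payment instead.
annual_lease_payment = purchase_price * (1 / useful_life_years + lease_premium)

print(f"Buy:   ${purchase_price:,} CapEx now, ${annual_depreciation:,.0f}/yr depreciation")
print(f"Lease: $0 CapEx, ${annual_lease_payment:,.0f}/yr OpEx")

The lease costs more in total over the term, but in a credit market like this one the point is the shape of the spend, not the sum.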

My 2009 data center trends will be covered in depth in an upcoming post, and I will tell you now that containers are one of them. They won't replace data centers any time soon, but they do present a very green and very compelling option for us all to look into.

mark.macauley@gmail.com


T1R INSIGHT: 2009 DATACENTER GROWTH TRENDS AND MODULARIZATION VS CONTAINERIZATION by Jason Schafer, Aleetalyn Schenesky

T1R has an overall positive outlook for the datacenter space. Still, given the current tight debt market and the subsequent slowing of datacenter builds, questions persist about datacenter trends for 2009 and about creative ways datacenters can increase their supply and/or footprint without expending huge amounts of capex (new datacenter builds can cost upwards of $32.5m for 25,000 square feet). As far as major datacenter trends for '09 go, T1R believes the three major trends will continue to be modularization (not to be confused with containerization), IT outsourcing, and datacenter providers with significant cash flow from operations, such as Equinix, expanding their market share as cash-light competitors are forced to slow expansion. As for creative means of increasing supply and footprint, T1R believes M&A activity will continue in '09, as smaller datacenters keep suffering from the lack of available capital and larger and regional datacenters (many of which are now owned by equity firms) look to increase their supply as utilization creeps up.


Modularization

Although modular datacenters are really in their infancy, T1R believes that the trend toward using them will continue to gain momentum over the course of 2009. Currently, there are a few approaches to the concept, including Turbine Air Systems' modular cooling system, where the system is pre-built and requires little to no on-site 'design and build' work (and time), and APC's pod-based rack systems, where cooling and power sit at the row level and can be installed as datacenter computing needs grow.

The idea of a modular datacenter is to take the infrastructure pieces (cooling, switchgear, etc.) that have traditionally been custom-built for each facility and make them assembly-line, mass-produced components. The obvious benefits are standardization and ease and speed of installation (and presumably fewer issues once a design is proven and tested). A current disadvantage to this approach is maintenance. For example, maintaining many row-based cooling systems would be more complex to perform and track than the traditional method, since today's cooling is more or less room-based rather than row-based. The size of each piece of equipment being serviced decreases, but the number of pieces increases; in most cases total maintenance time goes up, because maintenance time per unit doesn't shrink in proportion to the unit's size. T1R believes that as the modular datacenter design matures, however, the maintenance issue will become less of a factor, especially when weighed against its installation advantages.


Containerization

While T1R believes that modularization has potentially significant advantages, we are currently not bullish on containerization. Containerization is, literally, building an entire datacenter inside a commercial shipping container: the servers, the cooling, the power, etc., in one box. (Dell has changed this slightly with its double-decker solution, which uses a second container to house the cooling and power infrastructure separately from the server container. Some containers contain no power or cooling at all; they are designed to be a bit more efficient through enhanced airflow and depend on an external source for power. T1R believes those economics make even less sense.)

With a containerized approach, the datacenter gets shipped and plugged in, and it's up and running, more or less. In T1R's opinion, the main issues with containers are scalability and maintainability. A container is a very enclosed space and, in a lot of cases, not very 'engineer-friendly.' What happens when a server reaches catastrophic failure? Do the containers themselves get treated as a computing pool? If so, at what point does the container itself get replaced due to hardware failures? There are just so many unknowns right now with containers that T1R is a bit skeptical. In addition, containers hold approximately 2,000-3,000 servers in one package and are sold as one huge chunk, so they tend to favor those that would benefit the most from such a product - namely, server manufacturers. Given the aforementioned unknowns and the scaling and maintenance issues, T1R does not believe container datacenters to be solutions for the majority of datacenter customers.


Expansion via distressed assets?

With the credit crunch going strong, how will datacenters continue to increase supply and footprints? If history is to be our guide, the logical conclusion is 'Hey, let's go buy up all those distressed assets at fire-sale prices from failing industries, à la 2001.' Sounds like a great plan; however, the current situation is significantly different from the high-tech meltdown that allowed companies like Switch and Data, 365 Main, Equinix, Digital Realty Trust and CRG West, among others, to acquire large datacenter assets from telcos relatively cheaply.

In 2001, datacenter builds by telcos were based largely on speculative revenues and business models, so parting with those assets to recoup some of the capex spend - in what the telcos perceived to be a failing industry - was a no-brainer. Those were, however, newly built and, for the most part, high-quality assets.

So what's on the market today? Potentially, there are datacenter assets from financial and insurance institutions. The difference now, as opposed to then, is that these enterprise assets are of much lower quality. Upgrades can certainly be made in areas like power density, but the real problem is size - they're simply not large enough to interest most major datacenter providers. And, let's remember, even for the few higher-quality assets that may be out there, the turnaround time to remove old equipment and upgrade is approximately 12 months, meaning no meaningful supply add until at least 2010. T1R does believe, however, that these limited higher-quality assets will be acquired in 2009.


Expansion via M&A

How are the datacenter providers increasing their supply and footprints, then? Aside from small expansions such as the one by DataChambers mentioned on January 29, T1R believes that for 2009 this will largely be accomplished through M&A activity, a trend that has already begun to emerge over the course of the last year.

In the second half of 2008, several M&A deals of this nature closed: Digital Realty Trust acquired datacenter property in NJ and Manchester, UK; Seaport Capital acquired American Internet Services (a San Diego colocation provider), which in turn acquired Complex Drive (another San Diego colocation provider) a month later; and Managed Data Holdings acquired Stargate (a datacenter outside Chicago) to continue its datacenter roll-up strategy for expansion. In January 2009, the first M&A deal of the year was announced when Center 7 (a portfolio company of Canopy Ventures, an early-stage venture capital firm targeting information technology companies in the western US) acquired Tier 4, a datacenter provider in Utah.

Going forward, T1R expects similar deals, with smaller datacenter providers like Tier 4 investigating acquisition as a way to access capital for expansion. We also think private equity firms, as well as a few of the top datacenter providers, are potential acquirers in 2009. For more details on M&A activity in the datacenter space, please refer to T1R's Internet Infrastructure: Mergers and Acquisitions Second Half Update report.
