Lawyer Bait

The views expressed herein solely represent the author's personal views and opinions, and not those of any other person or organization.

Tuesday, February 3, 2009

Interesting Article on the Electricity and Efficiency of a Data Center

Having just met with the utility mentioned in this article last week, I thought it was an interesting read because it points out a major disconnect between greenness/efficiency and the SLAs that drive the data center business.

I think there is a market for this kind of approach; however, it is the segment of the market that does not require an SLA of any kind or that has live-live replicated environments.

Published: Tuesday, Feb. 03, 2009 | Page 1B
A data center under construction at McClellan Park has won the nation's highest green-building rating for its groundbreaking, energy-saving, low-carbon-emitting "air-side economizers."

Engineers call it "free cooling" technology.

Anyone else would call it opening the windows.

"Outside air is an absolute must now," said Bob Sesse, chief architect at Advanced Data Centers of San Francisco, developer of the McClellan project. "It never made any sense not to open the window."

The windows in this case are 15-foot-high metal louvers that run the 300-foot length of the building, a former military radar repair shop. Inside, a parallel bank of fans will pull outside air through the filter-backed louvers, directing it down aisles of toasty computer servers.

The company figures it can get by without electric chilling as much as 75 percent of the year, thanks in part to Delta breezes that bring nighttime relief to the otherwise sizzling Sacramento region.

Data centers are among the biggest energy hogs on the grid. These well-secured facilities are the "disaster recovery" backups for banks, insurers, credit card companies, government agencies, the health care industry and other businesses that risk severe disruption when computer systems fail.

They house rows of high-powered computer servers, storage devices and network equipment – all humming and spewing heat around the clock, every day of the year.

Large "server farms" rival a jumbo-jet hangar in size and an oil refinery in power use. California has hundreds of them, including RagingWire Enterprise Solutions and Herakles Data in the Sacramento region.

Almost all are designed so that only a small amount of outside air enters, lest dust invade the high-tech equipment's inner circuitry and short out the works. Indoor air chillers run with abandon to make absolutely sure the vital processors don't overheat and crash.

Now, with energy prices rising, bottom lines dropping and computer equipment becoming more powerful – with hotter exhaust to prove it – data center developers realize the perfect computing environment is the enemy of the good.

"It's a return-on-investment question," said Clifton Lemon, vice president of marketing for Rumsey Engineers, an Oakland firm that specializes in green-building design.

"People are beginning to realize they can build data centers with the same performance, reliability and safety and save lots of money on electricity."

Manufacturers of data-center equipment are fast meeting the energy challenge, said KC Mares, a Bay Area energy-efficiency consultant.

"Just in the last several months, we are seeing a new generation of servers giving us dramatically increased performance while consuming no more power than the previous generation," Mares said.

The green factor also looms larger in the marketing of data centers.

"We have recently replaced our air-cooled chiller system with a more efficient water-cooled system and doubled our cooling capacity," Herakles touts on its Web site.

Not to be outdone, RagingWire boasts on its own site of a recent savings of 250,000 kilowatt-hours a month – enough to power about 1,700 homes in the Sacramento area – by improving its chilled water plant and cooling efficiency.

But data centers continue to run computer-room air conditioners during many hours in which the outside air is cool enough to do the job, according to engineers who research high-tech buildings for the federal Energy Department's Lawrence Berkeley National Laboratory.

"We wondered, 'Why are people doing this?' and what we found out is that the data industry had grown up this way back from the days of mainframes," said William Tschudi, principal researcher in the lab's high-tech energy unit.

The tape drives and paper punch cards in the giant mainframe computers of the 1960s and 70s were more sensitive to dirt than today's equipment. But while computing hardware grew more resilient to the elements, manufacturers held firm to their recommended ranges of operations on temperature, humidity and air quality.

"They had this myth built up that you had to have a closed system," Tschudi said.

Yet modern servers actually could handle a much broader range of environmental conditions, with protective coatings on circuit boards and hermetically sealed hard drives.

Microsoft Corp. broke a mental barrier last year when it tested a rack of servers for seven months in a tent outside one of its data centers in the rainy Seattle area. The equipment ran without fail, even when water dripped on the rack.

Similarly, recent experiments by the Berkeley lab engineers found that suspended particles in data centers drawing outside air for cooling were well within manufacturers' recommended ranges.

The "free cooling" system accounts for most of the 30 percent energy savings that Advanced Data Centers expects to gain over conventional data centers. The balance would come from efficiencies in electric fan systems, air chillers, lighting and battery backups. The project's first phase – a 70,000-square-foot center – is scheduled for completion this fall.

Last year, the company built a 45-megawatt substation to support long-term growth of up to 500,000 square feet across four buildings. As an incentive to follow through on its energy-saving plans, the Sacramento Municipal Utility District gave it a break on electricity rates, a discount worth $80,000 a year, said Mike Moreno, a key account manager with the utility.

Another bonus: SMUD, together with Pacific Gas and Electric Co. and California's other major utilities, recently gave Advanced Data a $150,000 "savings by design" award.

On top of that, the U.S. Green Building Council awarded the company its highest rating – platinum – under its Leadership in Energy and Environmental Design program.

If the center operates up to expectations, it will be the most energy-efficient data center known, Tschudi said.

"I think it's achievable."

Why High Density isn't Dense Enough

I have been involved in four discussions in as many days about High Density Hosting. When I ask, "What does that mean to you?", I usually get a number north of 125 watts a square foot, up to 200 watts a square foot.

That is one way to look at it, but even at 200 watts a square foot, if you're loading up a multi-core box with dozens of VMs, you will be headed north of that. Typically, hosting companies balk at the space required to cool 200-plus watts a square foot. But power density is about power, NOT space.

So in one conversation the company said they wanted to deploy ten 30 kW cabinets - a total draw of 300 kW. If I throw in a PUE of 1.5, I am at 450 kW of draw. If I have a 10,000 square foot pod that doesn't have half a megawatt available in either utility or generator power, it can be the best space on earth but you can't power it. No power = no CPU cycles, and no company.
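The arithmetic above can be sketched in a few lines. This is a minimal illustration, not a capacity-planning tool; the ten-cabinet, 30 kW, PUE 1.5 figures come from the conversation described above, and PUE is simply the ratio of total facility power to IT equipment power:

```python
# Back-of-the-envelope data center power math.
# PUE (Power Usage Effectiveness) = total facility power / IT equipment power,
# so the facility draw you must provision is IT load * PUE.

def total_facility_draw_kw(cabinets: int, kw_per_cabinet: float, pue: float) -> float:
    """Total facility power required for a given IT deployment."""
    it_load_kw = cabinets * kw_per_cabinet
    return it_load_kw * pue

# Ten 30 kW cabinets at a PUE of 1.5:
draw = total_facility_draw_kw(cabinets=10, kw_per_cabinet=30, pue=1.5)
print(draw)  # 450.0 kW - roughly half a megawatt of utility and generator capacity
```

The space the cabinets occupy never enters the calculation, which is the point: you can run out of power long before you run out of floor.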

Buy power, not space. Think about when the lights go out - you aren't thinking about what a nice neighborhood you live in or what a good investment it was, or how sweet the location is. You just want to nuke some popcorn and watch the Red Sox beat the Yankees.

Monday, February 2, 2009

Stirring the Pod (Container) a Little...

I just read a piece put out by the folks over at Tier 1 Research, part of the 451 Group. They point to a gap in understanding of a growing space - containers and their application and importance in the business of data centers.

Their biggest issues are with Scalability and Maintainability (See below). I don't get it. A snippet says:

What happens when a server reaches catastrophic failure? Do the containers themselves get treated as a computing pool? If so, at what point does the container itself get replaced due to hardware failures? There are just so many unknowns right now with containers that T1R is a bit skeptical.

Interesting questions - however, what happens when a server reaches catastrophic failure in a traditional data center? Why is it any different?

Do containers themselves get treated as a computing pool? Duh. If you have a container in ~500 square feet that is the equivalent of 4,000 square feet based on CPU density alone, I would think a pool of CPU cycles is a safe assumption.

Replacing containers is easier than replacing data centers - you can't ship an entire traditional data center and just plug it in. And pinning down the point of failure in a relatively new segment of the business presumes these will fail more often than a data center would. Why would they fail more? They are servers in a very dense, very efficient, controlled environment. My money is on them failing LESS, not more.

Scalability is a non-issue. Containers scale to over a dozen petabytes of storage each. You can even stack them to save horizontal floor space - or, in Dell's case, to separate environmentals from compute.

For maintenance, there are some issues, yet they do not point out a single one. Having vetted most of the containers on the market - those from Sun, Microsoft, HP, Dell, Rackable, and Verari - by speaking with the heads of all the product groups, their salespeople who talk to customers, and the facilities/install folks, I will tell you that the analysis that was done was PURELY SPECULATIVE and not based on fact or experiential knowledge. If there is an Achilles' heel to containers, it is that the ones that run water through them will have maintenance issues - stuff grows in water. Verari uses a refrigerant-based solution that is a step in a well-thought-out direction. But as for maintenance, if you can open a door, have serviced equipment in a traditional data center, and have worked on a server before, you can handle a container.

'There are too many unknowns right now'? What?

Tier 1, you are a research firm - do the research. There are few unknowns in my book right now, other than when the containers and the equipment they contain get End-of-Life'd. If I (a VP of a data center company) can call up Dell, Microsoft, Verari, Rackable, and Sun, get an audience, delve into the technology with them, talk to end customers, and actually figure out the good, the bad, and the ugly of containers, then surely you can too. And I can assure you that you also stand to benefit more than I do from containers: it is a new research area for which people pay you money. I do this for fun, and my research is done on my own time. The benefit to me is credibility and pure exploration into an area that is so compelling from an efficiency, economics, and common-sense perspective that there are FEW unknowns at this point - assuming you know the data center business. I know enough to be dangerous, and enough to know what is good for my business and my customers.

Further on in the piece, economics and credit markets are discussed. Containers are almost entirely OpEx when leased, depreciated over three to five years or over whatever the useful life of the servers is for the application they serve. In other words, you don't need access to CapEx, nor do you need to spend CapEx right now on these. Lease vs. own - leasing is a smart decision right now, since containers are boxes of storage and CPU cycles and little else.
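The lease-vs-own trade-off above reduces to simple arithmetic. A minimal sketch - every dollar figure here is a made-up placeholder for illustration, not a quote from any vendor:

```python
# Hypothetical lease-vs-buy comparison for a container deployment.
# All dollar amounts are illustrative placeholders.

def straight_line_annual_depreciation(capex: float, useful_life_years: int) -> float:
    """Annual depreciation expense if the container is bought outright (CapEx)."""
    return capex / useful_life_years

def annual_lease_cost(monthly_payment: float) -> float:
    """Annual operating expense if the container is leased instead (OpEx)."""
    return monthly_payment * 12

# Placeholder example: a $2.5M container over a 4-year useful life,
# versus a $60k/month lease.
print(straight_line_annual_depreciation(2_500_000, 4))  # 625000.0 per year on the books
print(annual_lease_cost(60_000))                        # 720000.0 per year, zero upfront capital
```

The lease usually costs more in total, but in a tight credit market the point is that it requires no upfront capital and no access to debt - which is exactly the argument being made here.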

My 2009 Data Center trends will be covered in depth in an upcoming post, and I will tell you that Containers are one of the trends. They won't replace data centers any time soon, but they will present a very green and very compelling option for us all to look into.


T1R has an overall positive outlook for the datacenter space, although due to the current tight debt market and the subsequent slowing of datacenter builds, there are persisting questions regarding datacenter trends for 2009 as well as thoughts on creative ways datacenters can increase their supply and/or footprint without expending huge amounts of capex (new datacenter builds can cost upwards of $32.5m for 25,000 square feet). As far as major datacenter trends for '09 go, T1R believes the three major trends will continue to be modularization (not to be confused with containerization), IT outsourcing and that datacenter providers with significant cash flow from operations, such as Equinix, will expand their market share as cash-light competitors are forced to slow expansion. As far as creative means for datacenters to increase their supply and footprint, T1R believes that there will be continuing M&A activity in '09 as smaller datacenters will continue to suffer from the lack of available capital and larger and regional datacenters (many of which are now owned by equity firms) look to increase their supply as utilization creeps up.


Although modular datacenters are really in their infancy, T1R believes that the trend toward using modular datacenters will continue to gain momentum over the course of 2009. Currently, there are a few approaches to this concept including Turbine Air Systems' modular cooling system, where the system itself is pre-built and requires little to no on-site 'design and build' work (and time); and APC's pod-based rack systems, where the cooling and power is at the row level and can be installed as datacenter computing needs grow.

The idea of a modular datacenter is that it takes the current infrastructure pieces (cooling, switchgear, etc.) that have been traditionally custom built for each facility and makes it more assembly-line and mass-produced. The obvious benefits to this are standardization and ease/speed of installation (and presumably fewer issues after a design is proven and tested). A current disadvantage to this approach is maintenance. For example, maintaining many row-based cooling systems would be more complex to perform and track than the current traditional method (since currently cooling is more or less room-based and not row-based). The size of the equipment being serviced is decreased but the number of pieces of equipment is increased. In most cases, this means that total maintenance time is increased, as the size of the equipment isn't proportional to maintenance time. T1R believes that as the modular datacenter design matures, however, the maintenance issue will become less of a factor, especially when compared with its installation advantages.


While T1R believes that modularization has potential significant advantages, we are currently not bullish on containerization. Containerization, literally, is building an entire datacenter inside a commercial shipping container. This includes the servers, the cooling, the power, etc. in one container. (Dell has changed this slightly with its double-decker solution where it uses a second container to house the cooling and power infrastructure separately from the server container. Some containers contain no power or cooling at all, but rather are designed to be a bit more efficient due to enhanced airflow and are dependent on an external source for power. T1R believes that these economics make even less sense.)

With a containerized approach, the datacenter gets shipped and plugged in and it's up and running, more or less. In T1R's opinion, the main issues with containers are scalability and maintainability. A container is a very enclosed space and, in a lot of cases, not very 'engineer-friendly.' What happens when a server reaches catastrophic failure? Do the containers themselves get treated as a computing pool? If so, at what point does the container itself get replaced due to hardware failures? There are just so many unknowns right now with containers that T1R is a bit skeptical. In addition, containers contain approximately 2,000-3,000 servers in one package and are sold in a huge chunk, so they tend to favor those that would benefit the most from such a product – namely, server manufacturers. Given the aforementioned unknowns and scaling and maintenance issues, T1R does not believe container datacenters to be solutions for the majority of datacenter customers.

Expansion via distressed assets?

With the credit crunch going strong, how will datacenters continue to increase supply and footprints? If history is to be our guide, the logical conclusion here is 'Hey, let’s go buy up all those distressed assets at fire sale prices from failing industries à la 2001.' Sounds like a great plan; however, the current situation is significantly different from the high-tech meltdown that allowed companies like Switch and Data, 365 Main, Equinix, Digital Realty Trust and CRG West, among others, to acquire large datacenter assets from telcos relatively cheap.

In 2001, datacenter builds by telcos were based largely on speculative revenues and business models, therefore parting with those assets in order to recoup some of the capex spend in view of what the telcos perceived to be a failing industry, was a no brainer. However, those were newly built, and for the most part, high-quality assets.

So what's on the market today? Well, potentially there are datacenter assets from financial and insurance institutions. The difference now, as opposed to then, is that these current enterprise assets are much lower quality. Certainly upgrades can be made in terms of things like power density, but the real problem is size – they're simply not large enough to be worthwhile for most major datacenter providers to be interested. And, let's remember, even for the few higher-quality assets that may be out there, turnaround time to remove old equipment and upgrade is approximately 12 months, resulting in no meaningful supply add until at least 2010. T1R does believe, however, that these limited, higher quality assets will be acquired in 2009.

Expansion via M&A

How are the datacenter providers increasing their supply and footprints, then? Aside from small expansions such as the one by DataChambers mentioned on January 29, T1R believes that for 2009 this will largely be accomplished through M&A activity, a trend that has already begun to emerge over the course of the last year.

In the second half of 2008, several M&A deals of this nature closed, including Digital Realty Trust acquiring datacenter property in NJ and Manchester, UK; Seaport Capital acquiring American Internet Services (a San Diego colocation provider), which in turn acquired Complex Drive (another San Diego colocation provider) a month later and Managed Data Holdings' acquisition of Stargate (a datacenter outside of Chicago) to continue its datacenter rollup strategy for expansion. In January of 2009, the first M&A deal of the year was announced when Center 7 (a portfolio company of Canopy Ventures, an early-stage venture capital firm targeting information technology companies in the western US) acquired Tier 4, a datacenter provider in Utah.

Going forward, T1R expects similar deals with smaller datacenter providers, like Tier 4, to investigate being acquired to enable access to capital for expansion. We also think private equity as well as a few of the top datacenter providers are potential acquirers in 2009. For more details on M&A activity in the datacenter space, please refer to T1R's Internet Infrastructure: Mergers and Acquisitions Second Half Update report.