Lawyer Bait

The views expressed herein solely represent the author’s personal views and opinions and not of anyone else - person or organization.

Thursday, September 30, 2010

The New Mod Squad

Lee Technologies released an interesting whitepaper this morning discussing modularization and introducing not only their approach but also their solutions, at least at a high level. Steve Manos discussed his changing belief system in a blog post, which I thought was spot on. Change is in the air, as Steve points out, and I have always maintained that change is fun to watch and even more fun to be a part of. That does not mean it is painless, however - it seldom is. Change is a response to feedback - physical, mental, spiritual - that causes us to shift what we are doing and have 'always' done. 
Having been in the data center space for 20 years, I have seen and been part of a lot of change. I was an early blogger on containers and that shift in the industry toward more efficient data center operation, and I invested a fair amount of time sitting with the vendors to understand how their offerings were similar and how they were different. The change they were introducing made sense. What they were saying and doing was logical. But like a lot of other things that make sense, are logical, and raise awareness, it takes a while for that awareness to reach the point of changing behavior - not to mention a few skinned knees and other painful lessons. 
One major thing that was missing was an understanding of the entire ecosystem - not just their part of it - so they didn't grasp that as great and efficient as their containers might be, unless you have a facility (or the utilities and systems) to hook them up to, you aren't done yet. You just sold a laptop without a battery or an iPhone without service, not the whole package that makes it all work. The data center industry, and the other parts of its ecosystem (power, network, water, facilities), are starting to change. Companies are deploying new solutions based on logical assessment and the operational experience of what they have done before (good and bad), reinforced by carrot (cost savings) and stick (coal-backed electricity) adding fuel to the building momentum. 
The change is slow, however, and I want to offer an answer to the question 'Why?'
The dot com boom fueled a massive wave of data center development starting in 1993-1994. Money was pouring in, and as more and more data centers were built around the house of cards (logic went out the window - you don't stay in business without actual profits), more money kept being thrown at projects that wouldn't be done for years and then still had to be filled up with customers. The investors, to a large degree, didn't understand the data center business as a whole; they knew it took money, which they had and knew how to work with. They didn't understand what it took to sell and operate data centers. 
Those who jumped in without understanding the nature of the data center business beyond the real estate or money part, and who followed the herd mentality, got pinched. When they got pinched, they flinched hard. Rather than owning their mistake (not understanding what they were investing in), they abandoned their projects and went on to express their opinions on why everyone else was foolish to be in that business. They were the ones left without a chair when the music stopped. 
The financial community's sentiment toward data centers as a business soured. They were not such a hot ticket in 1999-2002 when the bubble popped. Now I am watching the data center business become a hot ticket again. Amazon, Ebay, Microsoft, Facebook, Twitter, and other internet companies, including 'cloud' companies, started the pendulum swinging back the other way to make data centers the 'new black'. I will point out that the companies who started these changes made them for themselves, based on their own requirements only, not the requirements of hundreds of customers with gear in the same data center. When you only have to satisfy one end user vs. hundreds, things are simpler. What has also changed is the influence of the data center ecosystem on data centers themselves. 
The tree hugging dirt worshippers in the 'Green' movement have flexed their PR muscle (Google 'Facebook and Greenpeace'), the US Government has mandated energy conservation and lower carbon footprints for their data centers and builders, architects, and owner operators have flocked to LEED designs for projects. In short, logic backed decisions influenced by awareness of what we can do differently as an industry have begun to change behavior. First in single tenant facilities, and now the behavior change is spilling over to multi tenant facilities. 
The behavior change is manifested in customers' willingness to look at different solutions, so long as being different does not require crossing a giant chasm. It is up to us in the industry to continue some degree of missionary work - transferring what we have learned to our customers and explaining why it's important to them. We have to know the entire ecosystem, not just a piece of it. 
We also need to explain why things like watts per square foot and rents based on square feet don't matter as much today, even though they have been standard selection criteria for decades. We need to explain that the decision on which data center to lease should rest on things like how accessible the facility will be if a dirty bomb goes off and the city is in lockdown, or what the impact to production environments will be if a jet drops on top of the Ashburn cluster.  
We are part of a data center ecosystem - ultimately part of our customers' ecosystems - and a solid, logical framework is what ultimately drives the right decision, not the greatest number of checkboxes checked on an RFP or how easy the facility is to reach from the airport for a tour. Modularization is a culmination of best practices, smarter execution, and practical application of the components of that ecosystem. 
Modular solutions are the physical representation that each component of the system can be integrated better - or in a different way - to satisfy the most requirements for all customers, not just one, and to deliver more carrots from the lessons learned from the sticks.
Container solutions provide flexibility of deployment in the event of a disaster, while modular builds let you eat the IT infrastructure elephant in bites and provide the broadest set of options to satisfy the greatest number of requirements - including the only one that matters: the customer's... 
To be part of the Mod Squad - ping me - mmacauley AT bytegrid DOT net...

Wednesday, September 22, 2010

Are We Ready To Build a Data Center Around a Cabinet?

I decided to blog about this because a friend of mine pinged me about Elliptical Mobile Solutions, who have been receiving a lot of press because they are part of a few finalists' designs for EBay's new data center in Phoenix.

I met Richard Topping and SharylLin Bancroft of Green Data Solutions by literally bumping into them at a data center event in Santa Clara last December. I was at CoreSite helping to develop the Cloud Community, AKA the Cloud Center of Excellence, with NASA as a charter member. I was talking to a couple of colleagues when they bumped into me trying to slide between me and another group enjoying cocktails and data center banter.

Needless to say, I 'got' what they were trying to do, and their solution (the RASER) is compelling. The downside is that it changes the economics for the data center owner operator, because RASERs are high density and self contained. You don't need a chiller plant to run them, just power and a sturdy building with telecom. The issue is that data centers are built with chiller plants, raised floor, air handlers, and air conditioners with humidity systems, and are designed around traditional cabinets. So back to the question - are we ready to build a data center around the cabinet?

I personally believe that the answer is yes, with some major caveats:

- The owner operator must have access to power and a sturdy building WITHOUT tens of millions invested in 'traditional' cooling infrastructure, raised floor, etc.

- The owner operator must be able to spend CAPEX on the cabinets to the tune of $70-100K each and recoup it over time

The challenges facing adoption of the RASER (and others like it) are rooted in these two issues. An owner operator knows there has been steady demand for traditional raised floor space, and because there is a mature market for it, it is a safer bet for the financiers backing data center build outs and retrofits. Owner operators get their buildings and projects financed based on watts/SF, the ability to cool the floor, and the mechanical and electrical systems to ensure uptime. So for an owner operator to walk away from this model is risky - they run the risk of stranding a lot of floor space and equipment by implementing the new RASER cabinets. All this in spite of the efficiency owner operators all claim they are chasing, reflected in the PUE chest beating.

Now consider an owner operator who has a building with ample power, was planning on building out raised floor environments/computer rooms in the traditional sense, but has not yet purchased the infrastructure to operate them - their luck could not be better. Here are the rough numbers:

On a 1,000 cabinet facility they will plan for a standard density of 4.8 kW per cabinet - 4,800 kW, or 4.8 MW. At ~$3K per kW all in, the project costs ~$14.5M. The rent they plan for, all in, is ~$2,000 per cabinet per month; fully rented, the facility throws off $2M/month, or $72M over a 36 month term filled with 36 month tenants. If the PUE is 1.8, then for every dollar spent on electricity for the IT load, 80 cents is added for inefficiencies (overhead). That overhead electricity comes to .8 * 4,800 * 730 (hours in a month) = ~$2.8M per month, which is a pass through to larger customers.
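
To make the arithmetic easy to poke at, here is a rough back-of-the-envelope sketch of the traditional build above in Python. The only thing I am adding is the assumption the figures above already imply - overhead energy priced at roughly $1 per kWh - kept as an explicit parameter so you can plug in your own rate.

# Rough model of the traditional 1,000-cabinet build described above.
# Assumption (implied by the post's figures): overhead energy at ~$1/kWh.
CABINETS = 1_000
KW_PER_CABINET = 4.8                 # standard density
COST_PER_KW = 3_000                  # ~$3K per kW, all in
RENT_PER_CABINET_MONTH = 2_000       # all-in rent
HOURS_PER_MONTH = 730
PUE = 1.8
PRICE_PER_KWH = 1.0                  # assumed rate; adjust to your utility

it_load_kw = CABINETS * KW_PER_CABINET               # 4,800 kW (4.8 MW)
capex = it_load_kw * COST_PER_KW                     # ~$14.4M
rent_month = CABINETS * RENT_PER_CABINET_MONTH       # $2M/month fully rented
rent_term = rent_month * 36                          # $72M over a 36 month term
overhead_month = (PUE - 1.0) * it_load_kw * HOURS_PER_MONTH * PRICE_PER_KWH

print(f"Capex ~${capex/1e6:.1f}M, rent ${rent_month/1e6:.1f}M/mo (${rent_term/1e6:.0f}M/term)")
print(f"Overhead electricity ~${overhead_month/1e6:.1f}M/mo")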

Using a RASER deployment equivalent to those 1,000 cabinets, the numbers look like this:

4,800 kW / 12 kW per cabinet = 400 cabinets @ $70K = $28M. The PUE on a RASER is 1.2, so against the 1.8 above the facility saves 60 cents of overhead for every dollar of IT electricity. On 4,800 kW, the lower PUE saves .6 * 4,800 * 730 (hours in a month) = ~$2.1M/month. Over 36 months the electricity savings are ~$76M.

So ~$14.5M + ~$100.8M in overhead electricity = ~$115M three year cost on a traditional facility, vs. $28M + $25.2M = ~$53M with RASERs.
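
Putting both side by side, here is a sketch of the 36 month comparison under the same assumptions - the ~$1/kWh overhead rate implied above, and counting only capex plus overhead electricity, since the IT load's own power draw is the same in either case:

# 36-month comparison: traditional raised floor vs. RASER (capex + overhead electricity only).
IT_LOAD_KW = 4_800
HOURS_PER_MONTH = 730
MONTHS = 36
PRICE_PER_KWH = 1.0   # assumed, as above

def overhead_cost(pue):
    # Non-IT (overhead) electricity cost over the term for a given PUE.
    return (pue - 1.0) * IT_LOAD_KW * HOURS_PER_MONTH * PRICE_PER_KWH * MONTHS

traditional = 14.4e6 + overhead_cost(1.8)   # ~$14.5M capex + ~$100.8M overhead
raser = 28.0e6 + overhead_cost(1.2)         # 400 cabinets @ $70K + ~$25.2M overhead

print(f"Traditional: ~${traditional/1e6:.0f}M  RASER: ~${raser/1e6:.0f}M  Delta: ~${(traditional - raser)/1e6:.0f}M")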

Another way to look at it is per square foot. 4.8 MW supports ~50K sq ft of traditional space. At $2,500/sq ft to build or $1,800/sq ft to retrofit, that is ~$125M to build or ~$90M to retrofit. With RASERs you need a facility of only ~15K sq ft, and at retrofit pricing you're at ~$27M.
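
And the per square foot view, treating the footprints above as rough assumptions:

# Per-square-foot view using the footprints quoted above.
TRADITIONAL_SQFT = 50_000    # ~50K sq ft to support 4.8 MW traditionally
RASER_SQFT = 15_000          # ~15K sq ft with RASER cabinets
BUILD_PER_SQFT = 2_500
RETROFIT_PER_SQFT = 1_800

print(f"Traditional build ~${TRADITIONAL_SQFT * BUILD_PER_SQFT / 1e6:.0f}M, "
      f"retrofit ~${TRADITIONAL_SQFT * RETROFIT_PER_SQFT / 1e6:.0f}M, "
      f"RASER retrofit ~${RASER_SQFT * RETROFIT_PER_SQFT / 1e6:.0f}M")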

So are we ready to build the data center around the cabinet yet?

Thursday, September 9, 2010

Data Center Modularization - The new New Thing?

Recently I have followed a number of discussions, articles, and presentations about modularization of the data center. First it was virtualization, and now it seems to be modularization. Is this cloud all over again - repainting existing technology approaches as new?

I saw a video at Data Center Knowledge this morning introducing Dell's new approach to 'modularization'. I commented on the video and wanted to expand on my thoughts here.

Dell talks about their solution - as do others - as being able to deploy compute and storage resources quickly. Plug and play, if you will. Dell says they can deploy within 30 days at a 'ready site'. The issue is that you still need a site/building to put the modules in that is ready to go. By ready to go I mean water infrastructure, electricity, fiber/telco, and security in place - and all of it redundant. The other distinction that needs to be publicized is that the Dell solution is NOT a mass production offering. It is a design and deploy offering, not an off the shelf offering.

This to me is not modularization, it is layout-ization. You are controlling the layout of the data center floor and, in other cases, the data center itself. What this 'new' approach does not address is the more difficult task of finding sites that have the infrastructure available - accessible power substations with capacity, the necessary water for cooling, no fewer than 5 telecom carriers accessible, and a physical profile that lowers risk: not in flood plains, not in flight paths, 40 miles from population/city centers, 18 miles away from and not downwind of nuclear power plants, and the list goes on. Once these requirements have been met, you then need to look at permitting in the localities, which is not an easy process. Then come construction schedules, budgets, and finally breaking ground around the weather.

Most of the 'new' approaches are for single tenant facilities. This is an important distinction, because with a single tenant facility there is one set of policies, rules, and decision makers to satisfy. In essence you only need to satisfy the requirements of one customer - the internal one. You can argue that there are multiple customers; however, at the end of the day it is one company's budgets and one company's policies and needs that drive the decisions.

In a multi tenant facility, you have potentially hundreds of customers' requirements, decision processes, and policies to contend with, making the challenges numerous and all over the map. Densities vary, along with which phase of power, AC or DC, which telcos matter, circuit management, cabinet hardware, cage or no cage, private suite or not, and how to cool all of these various deployments. What has happened is that multi tenant operators design facilities that work within their business model and for the potentially greatest number of tenants and the greatest number of requirements. Customers either find what they want in those facilities or keep shopping until they do.

Layout-ization can help with this, because operators can buy or build facilities with the requisite infrastructure, finish them off the way customers want, and create the win-win. However, this assumes that the other pieces are in place (permits, sites, etc.). The other issue is that when a data center operator buys a facility, it is a 20 year asset, and technology changes many, many times in 20 years. Raised floor vs. slab, 50 watts/ft vs. 200 watts/ft, blades vs. pizza boxes, gig E vs. 30 gig, whichever Tier classification is the 'must have' classification of the moment. You get the picture.

So what is the solution?

I think the data center operators who 'get it' will adopt new deployment models for the facilities themselves and for the layouts inside them. Watts per square foot don't matter as much after customers make their short list - what matters is whether or not an operator can deliver to the customer's requirements now and in the future. Tier classifications are being restructured, and existing space gets harder and harder to come by every week. This means that choices for customers will be made by what is available, vs. what is right for the customer.

There is a small window of opportunity here for investors to create the next model with data center operators who saw what happened over the past 20 years, know it needs to be different for the next 20, and are willing to team up and build it.