Lawyer Bait

The views expressed herein solely represent the author’s personal views and opinions and not those of anyone else - person or organization.

Monday, December 20, 2010

Site Selection - A Case Study?

I received a call from a friend earlier today asking if I had seen the data center requirement posted for the CIA. I had heard about it and got to thinking about what they would do to kick off the site selection process. Then I realized that if the requirements were posted, they must have done a lot of homework already. You would think, anyway. So I thought I would brainstorm here on my blog, take you through my thought process, and then see how much of what I think about is in the posted requirements.

 I will qualify this blog post with a disclaimer - the only specs I know are that they want a 200,000 square foot facility built out in 40,000 square foot chunks/phases. I have not read any document or article related to it.

So when I look at the requirement as I understand it - my high level criteria would be:

1. Available inexpensive power, ideally with a green power source that is off grid
2. Available network connections to Government TIC (Trusted Internet Connection) sites
3. Proximity to US military bases to ensure that staff can get to the facility if needed
4. Risk profile: natural disasters, man-made disasters (civil unrest, planes into buildings, etc.), the financial condition of candidate States, geologic topography, and political risk

So for #1, availability of cheap power, with preference for an off-grid green power source, is in the top slot for a reason. A data center's number one expense is power, and data centers are typically operated for 15+ years. Virtualization, while reducing floor space, actually increases density and power draw as more powerful servers are packed in, and the power needs to be 'green' per the mandate from Vivek Kundra, the CIO for the United States.

To my knowledge there are two sites that COULD satisfy this requirement today - but it would take a signed contract to mobilize the funds and people to construct the power systems, and one would get knocked out of the running because of proximity to DC proper. A box of anthrax, or a suitcase dirty bomb with nuclear waste, within 40 miles would make it 'inoperable', at least on the surface. So this isn't a data center requirement, it's a power plant with one customer - a data center.

On to #2, which deals with network connectivity - and not just in the general sense, but specific to a TIC site. There are 100 of them in the US, so that limits things too if it is a dealbreaker - and it should be. Data needs to flow to and from the facility to provide credible intelligence to our Government and to other Governments friendly with the United States. Since we aren't talking DSL pipes, these need to be 100 Gbps pipes or better. Redundant too. This will be expensive since there is not a lot of fiber in the boonies - I know, I live in 'the boonies' (kind of).

Number 3 is important because in the event of some really bad shit going down on a major scale, people need to get in and out of the facility no matter what. The ability to use runways and other infrastructure specific to logistics is crucial. People can fly to a base and get choppered in, Humvee'd in, or some combination of planes and automobiles. Sorry, trains. There is also the 'able to sleep at night' piece of having jets and Blackhawks able to scramble and be airborne in seconds to sanitize any threat if needed.

Number 4 should be a given, and arguably #1. When I think about Ashburn, VA, the amount of data that is captured, processed, and stored at the end of a runway is a breathtaking oversight in my opinion. Knowing I can get mobile network reception on the approach to Dulles means that people bent on harming the United States and its citizens can do major damage sitting in Verizon's parking lot and pressing send. Katrina got everyone's attention with natural disasters on a major scale, but what about wildfires that close roads, burn telephone poles, and melt the insulation around copper lines? Ice storms that make roads impassable and bring down tree branches that cut power and telecommunication lines? Or the earthquake where the seismically engineered building hardly feels anything, but the 60 miles of conduit housing telecom fiber gets severed by a bridge collapse or by the ground shaking the conduit itself apart? Topography needs to be factored in as well, for redundant microwave links and for the sensors collecting all the data that has to be captured, analyzed, and used to make educated decisions.

I added a vector that has not been much of an issue to date but one I think about - the financial condition of a State. I will use California as the example - the State is teetering on bankruptcy if you believe the mainstream media outlets. The issue won't be whether or not the State can afford to keep the power plants operating, but the civil unrest that occurs when people get incredibly pissed off. Mobs like to burn things, flip over cars, and do other things that make no sense to me. Looting happens. If there is no water or electricity, all kinds of crazy things can happen. Guess what? Data centers plan to have water and electricity no matter what, making them a target.

The point in all of this is that before you even start touring facilities, virtualizing, seeing who is out there, and putting together requirements based on square feet and phases, you had better have done your homework, or you - CIA data center - will be the next disaster to recover from.

Tuesday, December 7, 2010

Pause for new thought stream

I was just reading the tweets from the Gartner DC event in Las Vegas and had a random thought to solicit feedback on:

Is the term managed storage really just data management with a new coat of paint?

Discuss...

Monday, December 6, 2010

Site Selection - Part Two of a Multipart Series - Modular = What changes?

First off, thanks to all of you who reached out and sent comments on the first part of the series about how Site Selection is often overlooked. Some of the feedback inspired me to stretch this into a multipart series, so now I will delve into another favorite topic I began writing about in April 2009 - modular and containerized options - and, for the purposes of this entry, how they factor into site selection.

Modular data centers from companies like Lee Technologies in the US and BladeRoom in the UK are coming into their own and being deployed at a pretty good clip. I think data center people and investors alike are starting to get why these options are compelling:

1. Cost. I can deploy a facility in a quarter for half the cost. That means I get what I want - whether it's density, footprint, speed of capacity, or operational efficiency - for a lot less than a traditional data center suite. As an owner operator it means I don't overbuild; I don't pay for a 100,000 square foot facility to be built and then wait until ten 10,000 square foot rooms are sold. Cha-ching all around.

2. Speed. I can get them deployed quickly - usually in less time than it takes my broker to fly in a team to look at several facilities in several markets and tell me 'Stop me when you see something you like'. Thousands of cores, petabytes of space in 3 months. Me likey.

3. Consistency of product. Can anyone point me to a data center company that has the exact same layout, generator brand, UPS equipment, switchgear, or masonry across their facilities? Me either. Imagine the money and time saved by supporting the same make and model of data center and its components. Modular solutions are the Southwest Airlines of the data center business. Southwest only flies Boeing 737s. Why? Because any pilot and flight crew can work on any plane in the fleet. It makes sense for data centers as well.

4. Simplicity. If I can get the same data center solution in multiple places from a single vendor with a single contract, with the same terms and conditions, why wouldn't I make that my first choice? I have experienced firsthand getting a phone call from a customer whose main reason for calling me was that we already had a contract with them, and they needed space fast and didn't have 6 weeks to work with a competitor to hammer out a different contract.

5. Logistics. How do you find an insured and bonded mover? There are millions of dollars invested in the cargo, so you want to choose wisely. You will also need to pay attention to overpass/underpass heights, since your solution will likely arrive by truck. And you will want to have a street address, since many deliveries will be handled by out-of-state/non-local firms, so giving directions to old barns or favorite fishing holes won't likely cut it.

6. Medium voltage electricians. Container/modular solutions by and large require more medium voltage electricians than low voltage, so having local contractors with those skill sets is a factor. (Chicago has thousands; some remote areas may have 2.) You will also want to factor in the unions and whether you are in a right-to-work or union state.

So all of this is great, but how does site selection tie into all of this?

Well modular solutions change the model for data center companies and companies looking to go modular.

For the data center company, it trips them up. Why? If you just paid $100M for a big 100,000 square foot building with no land and 5 MW of power, a container or modular solution could gobble up that power footprint inside your building in a heartbeat and still leave you with 90,000 square feet of expensive building you can't do anything with until you spend millions more to get more power and then wait 18 months to get it. Or turn it into a racquetball court for employees, I suppose. It impacts site selection for owner operators because they no longer have to find big buildings to turn into data centers, or build big buildings to chop up into computer rooms.

It also opens the door to a smarter model - one where they can buy land, do inexpensive improvements, and pour concrete pads while they are working with the client to finish off the design of the facility. The data center is built, tested, and shipped to the site, where it is assembled in a couple of days, commissioned, and ready for gear. The biggest issue is that no vendor has emerged as a modular-centric data center operator. X/O is the closest in supporting containers, but that's all I know of and the solution is far from complete. Today's owner operators will need to amortize their real estate and free up cash to make the switch and augment what they are doing, but that is a seismic shift in thinking, operations, and development that I think a modular-centric company would handle better.

For end customers/tenants the site selection has been an issue for containers and modular solutions as well. Do we have land? Is it zoned? Is there power? Is it reliable? Is it in flight paths? Could a Waste Management dumpster truck get confused and pull our container away? How much is a generator? How many do we need? Can we even put diesel on site in a tank? Who will design the modular solution? What's a good UPS? Who understands power distribution and building codes? Where do we plug it in? Does it have a plug?

These are questions I have fielded or helped answer over the past two years. Many are funny today, but all were legit.

Many companies who build and run their own facilities do have the people to figure it out, and Lee Technologies  and BladeRoom both have designers to help get it right. The issue is still - what makes a good site? Send an email to me at bytegrid@gmail.com and I will send you my site selection guide which lays out all the questions you'll want to ask no matter which way you go.

Special thanks to Steve Manos at Lee Technologies for his contribution in this blog post. Steve can be reached at smanos@leetechnologies.com

Monday, November 22, 2010

Humor for a short week...

http://www.projectcartoon.com/cartoon/2

This is funny because a lot of it is very true. It is the blind leading the naked through an acre of rosebushes...

Tuesday, November 16, 2010

Site Selection and How Overlooked It Is...

As many of you know - or at least those who follow my blog(s) know - I am building a data center company. The most important part of a data center is where you put it, followed closely by how you get to it and how others get to it - both physically and logically. The one thing I have experienced more of this year than in years past is how overlooked or nonsensical the site selection process is for people.

When I look at data center site selection I look at several variables: seismic, flood, environmental, and other natural disaster exposure; how close networks are if they are not already in the building; access to clean, reliable electricity; the talent pool; and the overall market - is it a good place to do business? It is a little different for customers, in that many times they choose a market, search the available space, take what's ready to go in their time frame, and trust that the operator made a lot of these decisions for them when selecting where to put the facility. Commercial real estate brokers are often engaged to help in this process. The brokers are also involved in selling data centers, and buildings that could become data centers, to people like me.

So you can imagine the head scratching (and some giggling) I do when I get calls and emails from people inside and outside the data center business pitching me on their 'prime data center' sites and they have NO IDEA what they are talking about. I will give you some of the more interesting sites I personally have been to and point out things that while obvious to me, were clearly overlooked.

I spent a day in Chicago in the early summer driving around and looking at sites with a good friend of mine in the design/build part of the business. We went by Microsoft's container site, which was pretty cool, and then he brought me to a site with a banner on the front announcing it was going to be the next data center facility for Ascent Corporation, the guys who helped build the Microsoft site. Turning down the access road, I saw several cones and sawhorses in the major road leading to the turnoff, surrounded by a couple of inches of standing water. It looked like there had been a water main break the night before, so I asked my tour guide if it was in fact a water main break and how it affected the Microsoft building. His response: 'No water main break, just 2 days of rain off and on.' Wow - three days after the last rain, water was still leaching out of the ground and creating hazardous conditions on the only way to get to the facility.

As we drove down the puddle-filled secondary road, there was a large rail yard that had to have 10 tracks across it. There were few cars on the tracks, but it looked like it could support hundreds of them if needed. I popped the question, 'What is the story with the railroad tracks?' I was thinking it was an abandoned switching yard from the heyday of the Chicago railroads. The response was, 'Oh, that's an active switching yard that has been used for 50 years to switch up cars headed downtown.' I replied, 'Really? What is in the cars that go through here?' My guide told me there is 'lots of stuff that goes through the yard, as it is a major switching point for the rail system.' Hmmm. Interesting.

We finally reached the building at the end of the street, and it was tired but workable. As I have learned, a lot of stuff is workable if you have a lot of money behind you. Then I looked at the parking lot that abuts the track bed. The crushed stones were coming through the chain link fence, and the closest track was maybe 20 feet from the edge of the parking lot. The parking lot had water across 60% of it and looked like Lightning McQueen had dragged the statue of Stanley around it for a while. My guide said, 'So this is the facility, what do you think? I heard they got a good deal on it.' Well, good deal or not, they paid too much. I responded, 'You really want to know what I think based on what I have seen so far?' He replied, 'Yeah, I do.'

I launch into my assessment: This would be the last place I would put a data center in Chicago. Why?

- The roads get washed out after heavy rains, making the site inaccessible unless by canoe
- My employees couldn't get out to take care of their families
- There is a very active rail yard for a half mile before the property, with no idea of what is in the cars - or what has been in the cars the past 50+ years
- The site could be on a Superfund list from a spill that happened back before OSHA or the EPA was around to investigate such things
- The site could land on a Superfund list if one of the very rusty tanker cars decides to leak toluene or some other banned substance
- There could be an air quality emergency if one of said tankers bursts, emitting chlorine gas instead of milk
- The chain link fence, while adequate to keep deer out of the yard, would not keep out taggers spray painting the building or other folks who had no business being there

He interrupted me and said, 'Yeah, I didn't think it was a great site either.' Nuff said. However, I have heard that Ascent is moving forward with development plans, and I would love to see how their anchor tenant will justify to their board why putting mission critical assets in a site like this lowers risk.

Another one of my favorite places to point out the lack of intelligence gathering behind decision making is Ashburn, Virginia. Yes, it is one of the hottest data center markets in the US: great, reliable power; MAE East is there, so you have hundreds of network connections to tap; and the talent pool is deep. It is a very popular site for a lot of companies who operate data centers and who lease space from the data center operators there. It made sense to me too, until the first time I flew into Dulles instead of National airport.

I was on a JetBlue flight from Boston, and I had a window seat on the right side of the plane. On final approach I looked out the window at the beautiful countryside. As we got closer to touchdown, we flew RIGHT over the Dupont Fabros and Digital Realty Trust data center cluster. I was shocked. Why in the hell would anyone put a data center cluster in the flight path of a busy airport? That couldn't be right. So I got my rental car and drove out there. I parked on the road by Digital Realty Trust's campus and watched the sky. Every 90-120 seconds there was an airplane overhead - 737s, cargo planes, regional jets - all flying over the densest interconnection point in the Eastern United States. Big WTF moment.

Why tell you this? Because it shows, in vivid, in-your-face detail, that site selection - even when validated by companies who run data centers for a living - is important, and what would appear to be a good decision isn't necessarily a good decision. It seems as if people are more concerned about how long it will take to get to a site than how far away from bad things it is. When you base a major decision like a data center site on how easy it is to get a sandwich for lunch versus whether or not you will physically be able to get to or away from your facility, you (and your real estate people) are not doing yourself ANY favors, and are taking on greater risk, not less.

If you are looking for data center space, before you book your next tour do yourself a favor and pay attention to the overlooked stuff - the roads to and from, the surrounding area, what's in the air, the manhole covers - and think like your worst nightmare. If someone ships a box of anthrax to downtown Dallas, how many people need to evacuate, and can the roads handle it? If someone puts a device in a cargo plane bound for Dulles and is parked where I was with the cell phone trigger, is that an acceptable risk?

If you want help or want a site selection guide - email me - bytegrid@gmail.com and I will send you a free copy of my site selection document. I will save you time and money before you even leave your office.

Friday, November 5, 2010

Dell and its Data Center - WTF?

http://www.businessinsider.com/what-does-dell-want-with-a-new-data-center-2010-11

I was reading this article and several others, and the amount of coverage from outside the industry is profound, yet it really doesn't tell a story and speculates on things that don't make sense to me and other industry peers.

The factual part of the Dell deal is that they bought land in a data center market where there are other data centers and access to abundant, cheap power - that part I get. It is also an active seismic area, and long term that is not such a smart move unless you have the facility interconnected to other facilities on other grids, in non-seismic zones. Putting one major facility in an active seismic zone indicates that either their risk models figured the power savings outweigh the expense of losing the facility, the data loss, and the brand damage, or their site selection process overlooked the ground moving a lot. If you have one facility in one place with 100 different networks coming into the building, you stand a good chance of losing all 100 networks AND the building in a major earthquake. Buh-bye, on a big scale.

Does Dell have other data centers? Absolutely. Do those facilities have the capacity to take the entire load of the Washington facility in the event of an earthquake? I would hope so. If not, they may want to have another site in the works that could provide the airbag-like instant absorption of the network connections (if they are even available), compute, and storage load transfers that will occur.

If Dell wants to get into the hosting space, they need to get people who understand that business, from site selection to SLA's, otherwise this facility will be a bigger Dell box to put smaller Dell boxes in, and become one of the least cost effective sites to deliver those services from. Which might make sense if it was being built for a Government tenant... that I could understand.

Wednesday, October 6, 2010

Equinix Guidance...

I thought this was an excellent quick & dirty assessment by Lydia Leong over at Gartner about Equinix guidance being lowered and the ensuing stock tank, which also dragged the sector down with it. It shows me that investors don't understand the data center market other than 'it's a hot sector'. I wonder how many investors stopped to realize that there are sub-sectors to the marketplace, namely retail and wholesale players. This gives us a peek at the sub-sectors and outlines what investors need to pay attention to.
 
Equinix's reduced guidance
Lydia Leong | October 6, 2010 at 11:15 am | Tags: colocation, EQIX | Categories: Infrastructure | URL: http://wp.me/pj3To-ah
A lot of Gartner Invest clients are calling to ask about Equinix's trimming of guidance. I am enormously swamped at the moment, and cannot easily open up time slots to talk to everyone asking. So I'm posting a short blog entry (short and not very detailed because of Gartner's rules about how much I can give away on my blog), and the Invest inquiry coordinators are going to try to set up a 30-minute group conference call for everyone with questions about this.


If you haven't read it, you should read my post on Getting Real on Colocation, from six months ago, when I warned that I did not see this year's retail colocation market being particularly hot. (Wholesale and leasing are hot. Retail colo is not.)

Equinix has differentiators on the retail colo side, but they are differentiators to only part of the market. If you don't care about dense interconnect, Equinix is just a high-quality colo facility. I have plenty of regular enterprise clients that like Equinix for their facility quality, and reliably solid operations and customer service, and who are willing to pay a premium for it -- but of course increasingly, nobody's paying a premium for much of anything (in the US) because the economy sucks and everyone is in serious belt-tightening mode. And the generally flat-to-down pricing environment for retail colo also depresses the absolute premium Equinix can command, since the premium has to be relative to the rest of the market in a given city.

Those of you who have talked to me in the past about Switch and Data know that I have always felt that the SDXC sales force was vastly inferior to the Equinix sales force, both in terms of its management and, at least as manifested in actual working with prospects, possibly in terms of the quality of the salespeople themselves. Time is needed for sales force integration and upgrade, and it seems like the earning calls indicated an issue there. Equinix has had a good track record of acquisition integration to date, so I wouldn't worry too much about this.

The underprediction of churn is more interesting, since Equinix has historically been pretty good about forecasting, and customers who are going to be churning tend to look different from customers who will be staying. Moving out of a data center is a big production, and it drives changes in customer behavior that are observable. My guess is that they expected some mid-sized customers to stay who decided to leave instead -- possibly clients who are moving to a wholesale or lease model, and who are just leaving their interconnection in Equinix. (Things like that are good from a revenue-per-square-foot standpoint, but they're obviously an immediate hit to actual revenues.)

This doesn't represent a view change for me; I've been pessimistic on prospects for retail colocation since last year, even though I still feel that Equinix is the best and most differentiated company in that sector.

Thursday, September 30, 2010

The New Mod Squad

Lee Technologies released an interesting whitepaper this morning discussing modularization and introducing not only their approach but their solutions - at least at a high level. Steve Manos discussed his changing belief system in a blog post, which I thought was spot on. Change is in the air, as Steve points out, and I have always maintained change is fun to watch and even more fun to be a part of. That does not mean it is painless, however - it seldom is. Change is a response to feedback - physical, mental, spiritual - that causes us to shift what we are doing and have 'always' done.
Having been in the data center space for 20 years, I have seen and been part of a lot of change. I was an early blogger on containers and that shift in the industry toward more efficient data center operation, and I invested a fair amount of time sitting with the vendors to understand how what they made was similar and how it was different. The change they were introducing made sense. What they were saying and doing was logical. Like a lot of other things that make sense, are logical, and increase awareness, it takes a while for that awareness to reach the point of changing behavior - not to mention a few skinned knees and other painful lessons.
One major thing that was missing was their understanding of the entire ecosystem - not just their part of it - so they didn't understand that as great and efficient as they might be, unless you have a facility (or utility/systems) to hook them up to, you aren't done yet. You just sold a laptop without a battery, or an iPhone without service - not the whole package that makes it all work. The data center industry, and the other parts of its ecosystem (power, network, water, facilities), are starting to change. Companies are deploying new solutions based on logical assessments and operational experience of what they have done before (good and bad), and it is reinforced by carrot (cost savings) and stick (coal-backed electricity) adding fuel to the building momentum.
The change is slow, however, and I want to offer an answer to the question 'Why?'
The dot com boom fueled a massive boom in data center development starting in 1993-1994. Money was pouring in, and as more and more data centers were built around the house of cards (logic went out the window - you don't stay in business without actual profits), more money kept being thrown at projects that wouldn't be done for years - and then you still had to fill them up with customers. The investors to a large degree didn't understand the data center business as a whole; they knew it took money, which they had and knew how to work with. They didn't understand what it took to sell and operate data centers.
Those who jumped in without understanding the nature of the data center business - aside from the real estate or money part - and followed the herd mentality got pinched. When they got pinched, they flinched hard. Rather than owning their mistake (not understanding what they were investing in), they abandoned their projects and went on to express their opinions on why everyone else was foolish to be in that business. They had a chair when the music stopped.
The financial community's sentiment toward data centers as a business soured. They were not such a hot ticket in 1999-2002 when the bubble popped. Now I am watching the data center business become a hot ticket again. Amazon, eBay, Microsoft, Facebook, Twitter, and other internet companies, including 'cloud' companies, started the pendulum swinging back the other way to make data centers the 'new black'. I will point out that the companies who started the changes made those changes for themselves, based on their requirements only - not the requirements of hundreds of customers with gear in the same data center. When you only have to satisfy one end user vs. hundreds, things are simpler. What has also changed is the influence of the data center ecosystem on data centers themselves.
The tree hugging dirt worshippers in the 'Green' movement have flexed their PR muscle (Google 'Facebook and Greenpeace'), the US Government has mandated energy conservation and lower carbon footprints for its data centers, and builders, architects, and owner operators have flocked to LEED designs for projects. In short, logic-backed decisions influenced by awareness of what we can do differently as an industry have begun to change behavior - first in single tenant facilities, and now the behavior change is spilling over to multi tenant facilities.
The behavior change is manifested in customers' willingness to look at different solutions, so long as there is not a giant chasm to cross in being different. It is up to us in the industry to continue some degree of missionary work in transferring what we have learned to our customers and why it's important to them. We have to know the entire ecosystem, not just a piece of it.
We also need to explain why things like watts per square foot and rents based on square feet don't matter as much today, even though they have been standard selection criteria for decades. We need to explain that the decision about which data center to lease should be based on things like how accessible it will be if a dirty bomb goes off and the city is in lockdown, or what the impact will be to our production environments if a jet drops on top of the Ashburn cluster.
We are part of a data center ecosystem - ultimately part of our customers' ecosystems - and having a solid logical framework is what ultimately drives a right decision, not the greatest number of checkboxes checked on an RFP or how easy the facility is to reach from the airport for a tour. Modularization is a culmination of best practices, smarter execution, and practical application of components of that ecosystem.
Modular solutions are the physical proof that each component of the system can be integrated better - or in a different way - to satisfy the most requirements for all customers, not just one, and deliver more carrots from the experience with sticks.
Container solutions provide flexibility of deployment in the event of a disaster, while modular builds let you eat the IT infrastructure elephant in bites and provide the broadest set of options to satisfy the greatest number of requirements - and the only one that matters: the customer...
To be part of the Mod Squad - ping me - mmacauley AT bytegrid DOT net...

Wednesday, September 22, 2010

Are We Ready To Build a Data Center Around a Cabinet?

I decided to blog about this because a friend of mine pinged me about Elliptical Mobile Solutions, who have been receiving a lot of press because they are part of a few finalists' designs for eBay's new data center in Phoenix.

I met Richard Topping and SharylLin Bancroft of Green Data Solutions by literally bumping into them at a data center event in Santa Clara in December last year. I was at CoreSite helping to develop the Cloud Community AKA Cloud Center of Excellence with NASA as a charter member. I was talking to a couple of colleagues and they bumped into me trying to slide between me and another group enjoying cocktails and data center banter.

Needless to say, I 'got' what they were trying to do, and their solution (the RASER) is compelling. The downside is that it changes the economics for the data center owner operator, because the RASERs are high density and self contained. You don't need a chiller plant to run them, just power and a sturdy building with telecom. The issue is that data centers are built with chiller plants, raised floor, air handlers, and air conditioners with humidity systems, and are designed around traditional cabinets. So back to the question - are we ready to build a data center around the cabinet?

I personally believe that the answer is yes, with some major caveats:

- the owner operator must have access to power and a sturdy building WITHOUT tens of millions invested in 'traditional' cooling infrastructure, raised floor, etc.

- The owner operator must be able to spend CAPEX on the cabinets to the tune of 70-100K each and collect it back over time

The challenges facing the adoption of the RASER (and others like it) are rooted in these two issues. An owner operator knows that there has been demand for traditional raised floor space, and because there is a mature market for it, it makes a safer bet for the financiers backing the data center build outs and retrofits. Owner operators get their buildings and projects financed based on watts/SF, the ability to cool the floor, and the mechanical and electrical systems to ensure uptime. So for an owner operator to walk away from this model is risky - they run the risk of stranding a lot of floor space and equipment by implementing the new RASER cabinets. All this in spite of the efficiency owner operators all claim they are chasing, reflected in PUE chest beating.

Now for an owner operator who has a building with ample power and was planning on building out raised floor environments/computer rooms in the traditional sense, but has not yet purchased the infrastructure to operate them, their luck could not be better. Here are the rough numbers:

On a 1,000 cabinet facility they will plan for a standard density of 4.8 kW/cabinet - 4,800 kW, or 4.8 MW. At ~$3K per kW all in, it costs ~$14.5M for the project. The rent they plan for, all in, is ~$2,000 per cabinet per month. Fully rented, it throws off $2M/month. Filled with 36 month term tenants, that is $72M over the 36 month term. If the PUE is 1.8, that means for every dollar they spend on electricity for the IT load, 80 cents is added for inefficiencies (overhead). That overhead works out to .8 * 4800 * 730 (hours in a month) = ~$2.8M a month, which is a pass through to larger customers.

Using a RASER deployment of 1,000 cabinets equivalent the numbers look like:

4,800 kW / 12 kW per cabinet = 400 cabinets @ $70K = $28M. The PUE on a RASER is 1.2, so the facility saves 60 cents per kWh of overhead. On 4,800 kW, the lower PUE saves .6 * 4800 * 730 (hours in a month) = ~$2.1M/month. Over 36 months the savings are ~$75M.

So $14.5M + $100.8M in electricity = ~$115M three-year cost for the traditional facility, vs. $28M + $25.2M = ~$53M for the RASER deployment.

Another way to look at it is per square foot. 4.8 MW supports ~50K sq ft. At $2,500/sq ft to build or $1,800/sq ft to retrofit, traditional space runs ~$125M to build or ~$90M to retrofit. With RASERs you need a facility of only 15K sq ft, and with a retrofit only, you're at ~$27M.
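If you want to poke at these numbers yourself, here is a minimal Python sketch that reproduces the comparison above. It follows this post's arithmetic, which only prices the PUE overhead and implicitly treats it at about $1 per kWh-equivalent - treat that rate, the 36-month term, and the capex figures as assumptions to swap for your own.

```python
# A minimal sketch of the traditional vs. RASER comparison above.
# Assumptions (not gospel): the post's implicit ~$1/kWh-equivalent rate on
# the PUE overhead, 730 hours/month, and a 36-month term.

HOURS_PER_MONTH = 730
TERM_MONTHS = 36
IT_LOAD_KW = 4800          # 1,000 cabinets x 4.8 kW, or 400 RASERs x 12 kW
RATE_PER_KWH = 1.0         # placeholder rate implied by the post's math

def three_year_cost(capex, pue):
    """Capex plus the PUE-overhead electricity over the term."""
    overhead_kw = (pue - 1.0) * IT_LOAD_KW          # cooling/distribution losses
    electricity = overhead_kw * HOURS_PER_MONTH * RATE_PER_KWH * TERM_MONTHS
    return capex, electricity, capex + electricity

for name, capex, pue in [("traditional", 14.5e6, 1.8), ("RASER", 28.0e6, 1.2)]:
    capex, power, total = three_year_cost(capex, pue)
    print(f"{name:11s}  capex ${capex/1e6:5.1f}M  "
          f"overhead power ${power/1e6:6.1f}M  total ${total/1e6:6.1f}M")
```

Running it gives roughly $115M vs. $53M over three years, matching the figures above; swap in your own utility rate and capex to see how the gap moves.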

So are we ready to build the data center around the cabinet yet?

Thursday, September 9, 2010

Data Center Modularization - The new New Thing?

Recently I have followed a number of discussions, articles, and presentations about modularization of the data center. First was virtualization and next seems to be modularization. Is this cloud all over again? Repainting existing technology approaches as new?

I saw a video that was at Data Center Knowledge this morning introducing us to Dell's new approach to 'modularization'. I commented on the video and wanted to expand on my thoughts here.

Dell talks about their solution - as do others - as being able to deploy compute and storage resources quickly. Plug and play if you will. Dell says they can deploy within 30 days at a 'ready site'. The issue is that you still need to have a site/building to put them in that is ready to go. By ready to go I mean water infrastructure, electricity, fiber/telco, and security in place. And it all has to be redundant. The other distinction that needs to be publicized is that the Dell solution is NOT a mass production offering. It is a design and deploy offering, not an off the shelf offering.

This to me is not modularization, it is layout-ization. You are controlling the layout of the data center floor and, in other cases, the data center itself. What this 'new' approach does not address is the more difficult task of finding sites that have the infrastructure available - accessible power substations with capacity, the necessary water for cooling, no fewer than 5 telecom carriers accessible, and a risk-lowering physical profile: not in flood plains, not in flight paths, 40 miles from population/city centers, 18 miles away from and not downwind of nuclear power plants, and the list goes on. Once these requirements have been met, then you need to look at permitting in localities, which is not an easy process. Then construction schedules, budgets, and finally breaking ground around the weather.

Most of the 'new' approaches are for single tenant facilities. This is an important distinction because with a single tenant facility there is one set of policies, rules, and decision makers to satisfy. In essence you only need to satisfy the requirements of one customer - the internal one. You can argue that there are multiple customers; however, at the end of the day, it is one company's budget and one company's policies and needs that drive decisions.

In a multi tenant facility, you have potentially hundreds of customers' requirements, decision processes, and policies to contend with, making the challenges numerous and all over the map. Densities vary, along with the phase of power, AC or DC, which telcos are important, circuit management, cabinet hardware, cage or no cage, private suite, and the cooling for these various deployments. What has happened is that the multi tenant operators design facilities that work for their business model and for the potentially greatest number of tenants and requirements. Customers either find what they want in the facilities or keep shopping until they do.

Layout-ization can help with this because operators can buy or build facilities with the requisite infrastructure, finish them off the way customers want, and create the win-win. However, this assumes that the other pieces are in place (permits, sites, etc.). The other issue is that when a data center operator buys a facility, it is a 20 year asset, and technology changes many, many times in 20 years. Raised floor vs. slab, 50 watts/ft vs. 200 watts/ft, blades vs. pizza boxes, gig-E vs. 30 gig, whichever Tier classification is the 'must have' Tier classification. You get the picture.

So what is the solution?

I think the data center operators who 'get it' will adopt new deployment models for their facilities and for the layouts inside them. Watts per square foot don't matter as much once the customer's short list is made - only whether or not an operator can deliver to the customer's requirements now and in the future. Tier classifications are being restructured, and existing space gets harder and harder to come by every week. This means that choices for customers will be made by what is available vs. what is right for the customer.

There is a small window of opportunity here for investors to create the next model with data center operators who saw what has happened over the past 20 years and know it needs to be different for the next 20, and are willing to team up and build it.

Monday, August 30, 2010

Note to Cloud Service Providers - The Network is The Cloud

I was catching up on some reading this weekend while nursing a sunburn, and was absolutely blown away by the sheer number of 'Cloud' companies and what they are offering. I have covered Cloud in this blog repeatedly because I don't get it. I don't get what Cloud does that a Managed Services Provider doesn't already do with a reconstructed SLA. When I read in an eWeek piece that we need to 'stop defining what the cloud is and look at what it does', I knew there are a lot of other people who don't get it, who want to get it, and since the industry can't come up with the single *right* answer, we call in Officer Barbrady to manage things - 'Nothing to see here. Move along people...' - and hope to move on without a definition in place.

Would you pay for something without knowing what it is AND does? In the words of Max from the movie The Losers, 'it's like giving a loaded gun to a six year old, Wade; you're not sure how it's going to end, but you're pretty sure it's going to make the papers.' Is it a Zen thing? The cloud is, but will not be...? What is a simple truth? What are any of us sure of?

I am sure of one thing - that networks are the most overlooked component of the thing referred to as Cloud. Why? Simple. Imagine if there were no public roads between the airport and the car rental places, only taxi-approved roads, or that you had to pay $10 to ride the shuttle bus. You would incur a small fortune getting you and your stuff from one place to another, just to pay for the privilege of renting something for as long as you needed it so you could move around.

I could play with that metaphor for a long time, but I won't - it's Monday. What I will say is that Cloud providers need to understand networks so that they stay competitive and provide more value than the next guy. If nothing else, they could offer the same pricing as Amazon's service and capture 38-48% more margin.

That should make the papers...

Monday, August 23, 2010

Everything is Bigger in Texas...

Including outsourcing deals... IBM and the State of Texas are trying to renegotiate their contract in the press instead of with each other. This is like divorcing in public, folks - you think it's ugly now? Just wait.

There is $863M on the line, so some big rocks are being thrown between the two. The finger pointing is expected; the lack of leadership is not. Why isn't someone on either side saying (like a parent), 'I don't care who did what to whom - figure it out and don't come out of your rooms until you do!'?

Here are my thoughts on the matter as an outsider who has been involved in pitching and delivering large deals:

1. Both sides need to get two people - preferably 'Juice Guys' (people who have authority to make and keep agreements) at the table and look forward as to where the State wants to go and what they need to do. I suspect a lot has changed given the fact that there have been so many issues and things the State thought it wanted may not be on the list anymore.

2. IBM needs to automate as much as humanly possible with the project. They are a HUGE company with a lot of PEOPLE. Folks, people, humans - that is the problem. The fewer people they use on the project, the higher the degree of success will be. Technology is used to execute business process. If the team focuses on the process that needs to be in place when all is said and done, they will be infinitely more successful. Software is then deployed to support the right process. It's a tool, not the solution.

3. If there are additional questions, call me - not the people who got you into this mess in the first place, Texas. I love this comment:

While citing its disagreement with DIR's management and characterization of the contract, McLean said that "should DIR decide to move forward with re-procurement of all or a portion of the services, IBM remains willing to assist DIR in that process."

Why wouldn't the fox be ready to go back, invited, into the hen house?

IBM, you f'ed up. Own it, fix it (or not) and make the customer happy. You positioned yourself as the expert, and the State has said you fell short.

Texas - sit down with the stakeholders again and plot a new plan for someone new to execute. Remember to eat the elephant in bites, as migration and consolidation is a process, not a scheduled event.

Any questions? Call me or send me an email - mmacauley@bytegrid.net and I will do what I can to help. Even if it's only keeping the fox out of the hen house...

Tuesday, August 17, 2010

Do Watts per Square Foot Even Matter Anymore?

I think about this question a lot, and it is one I am asking more often of customers and peers in the industry. There are hundreds of facilities that were built 20-30 years ago and a few dozen that have been built in the past 10 (remember the .com bubble?), and I am being conservative. They were built with a solid approach for 20 years ago - build a building using a watts/ft metric to calculate costs, density, and ultimately capacity. Remember that Wang, Digital/DEC, and IBM mainframes were the systems of choice. Then the PC came on the scene and drastically changed the dynamics of computing. Client/server, new network protocols, ARPA, TCP/IP, and HTTP - the evolution was on.

Manufacturers were evolving too - building smaller footprint, higher capacity machines. These machines were 2-3x as powerful as their predecessors while taking up half the space. Companies could consolidate the huge footprint of the monolithic systems inside their data centers and make room for more. Or could they?

Today, to me it seems like deja vu all over again.

The computing world has shifted again with virtualization: those smaller, more powerful servers got smaller still (so small you can't touch or see them), went virtual, and companies are embracing the next evolution with gusto. They are also hitting the wall that IT managers who are now retired or dead hit decades ago - power.

Why is this happening again? In a word - heat.

These smaller machines draw more power and generate a lot more heat for their footprint. Ask a multi tenant data center operator who serves the retail market on a cabinet by cabinet basis, and they will tell you that cabinets full of gear drawing 4.8 kW are going in next to cabinets drawing 10-14 kW. The higher density cabinets generate a lot more heat that needs to be removed from the floor so that all the servers, at all their densities and heat signatures, can be maintained properly.
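As a rough rule of thumb (my back-of-the-napkin math, not anyone's spec), essentially every watt the IT gear draws comes back out of the cabinet as heat, so a quick conversion to cooling tons shows what that mix of densities means for the mechanical plant:

```python
# Rough sense of what higher cabinet density means for cooling.
# Assumes every IT watt becomes heat: 1 kW ~ 3,412 BTU/hr,
# and 1 ton of cooling = 12,000 BTU/hr.

def cooling_tons(cabinet_kw):
    return cabinet_kw * 3412 / 12000

for kw in (4.8, 10, 14):
    print(f"{kw:4.1f} kW cabinet -> ~{cooling_tons(kw):3.1f} tons of cooling")
# A 14 kW cabinet needs roughly three times the cooling of the 4.8 kW
# cabinet sitting in the next row -- the mismatch described above.
```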

What is intrinsic to a 20 year old facility that makes it better able to satisfy growing densities, evolving computing applications and hardware, and whatever comes down the pike? How does the colocation market evolve at the same time? When do we hit the wall - or do we hit the wall at all? Can you simply throw more air conditioners/handlers on the floor? Can we just continue to push utilities to increase generation and distribution? What happens to the cost models?

I am starting to think that we are at a crossroads. Why do I believe this? Because data centers are starting to evolve: containers, modular buildings, more flexible options. Until the hardware manufacturers get together with data center operators and do R&D in a meaningful way - share roadmaps, do testing together - we will continue to build the way we have, and not the way we must.

Want to play together - ping me. I want to figure this out...

Tuesday, July 27, 2010

To Live and Die in LA

http://www.marketwatch.com/story/google-misses-deadline-in-high-profile-la-deal-2010-07-23?dist=countdown

Note to Google and CSC, both companies I enjoy a working relationship with:

The Cloud is an accounting puzzle, not a technology one. Add up the costs of hardware, software, bandwidth, secure routers, switches - the whole nine yards. Divide by the number of employees and there is the cost per employee. Divide this by 730 hours per month and that is the cost per hour per employee.

Security is an issue in any network that needs to connect to another network that connects to another network, because you lose control of policy and enforcement. The way to deploy a cloud solution is to add up the costs, divide by employees, then by hours. There are implementation costs and 'day 2' costs, which are the care and feeding.
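For what it's worth, here is that accounting exercise as a minimal Python sketch. The line items and headcount are hypothetical placeholders, not Google's, CSC's, or LA's numbers - the point is the divide-by-seats, divide-by-730 mechanics.

```python
# A minimal sketch of the cloud-as-accounting exercise described above.
# All figures below are made-up placeholders for illustration.

HOURS_PER_MONTH = 730

def cost_per_employee_hour(monthly_costs, employees):
    """Total the stack, divide by seats, divide by hours."""
    total = sum(monthly_costs.values())
    per_employee = total / employees
    return per_employee, per_employee / HOURS_PER_MONTH

monthly_costs = {                      # hypothetical monthly run rate
    "hardware_amortized": 40_000,
    "software_licenses": 25_000,
    "bandwidth": 10_000,
    "security_and_network_gear": 8_000,
    "day_2_care_and_feeding": 30_000,
}

per_emp, per_emp_hour = cost_per_employee_hour(monthly_costs, employees=3_000)
print(f"${per_emp:,.2f} per employee per month, "
      f"${per_emp_hour:.3f} per employee-hour")
```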

The cloud is a collection of IT stuff (hardware, software, network, people) that we have been deploying for 30 years. Cloud is a billing application. Get the beancounters in to help with the clean up and the software jockeys to write an open source billing app and be done with it.

Monday, July 26, 2010

New Use for Data Centers...

I got the idea from watching the marijuana dispensaries start to pop up across the country and given my line of work in the data center business I had an 'A-Ha' moment.

Grow pot in old data centers

There is adequate power
Raised floors could support water lines for irrigation
The temp and humidity are already controlled, so the infrastructure is there
The facilities are secure
The biggest expense would probably be putting in the right kind of lights

Maybe someone already thought of this and I am half as smart as they are. Either that or I just figured out a way to make use of the Federal Consolidation of Data Centers without having big empty facilities mothballed.

Monday, July 19, 2010

Nebula comes into its own

This was a project I worked on in the background and I am thrilled to see it come into its own and be part of OpenStack.

Congrats to Chris Kemp and behind the scenes -Bobby, Josh, and Kim - you got it done.

Monday, June 28, 2010

My Cloud Math - Flawed or Flawless?

Dell PowerEdge R900 - $7,500
Red Hat Enterprise Linux support - $1,200/year
Draw at capacity - 1.5 kW per unit
4U servers, 9 per cabinet, plus a switch at 1 kW
Draw = 14.5 kW per cabinet; at load * .8 = 11.6 kW
$4,500 for VMware per cabinet

Costs:
$2,175/mo rent per cabinet = $78,300 for a 3 year term
$67,500 hardware, amortized over 3 years
$10,800/year for software = $32,400 for 3 years
3 year total - ~$178,200
Per month: ~$5,000 per cabinet on a 3 year term

So on a per cabinet basis, you need to do ~$6,000/month in revenue to make it work at low margins.

Using Amazon EC2 pricing of $1.20 per hour, you can bill $28.80 per day per instance. At 16 VMs per core on a six core box, that's $2,764.80 per box per day at max utilization (16 * 6 * 28.80). At 9 boxes per rack, that's $24,883 per day if the rack is 100% utilized. If the rack is utilized at 8%, then $24,883 * .08 = $1,990.66 per day.

That's $730,000 per year per rack if you hit 8% utilization.
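For anyone who wants to check my math or plug in their own numbers, here is the same arithmetic as a small Python sketch using the figures above (the VMware line item is left out of the total, as it is in the per-cabinet math above):

```python
# The cloud math above in code, so you can swap in your own numbers.
# Server, rack, and EC2 figures are the ones quoted in this post (2010 era).

SERVERS_PER_RACK = 9
RENT_PER_MONTH = 2175
SERVER_COST = 7500
RHEL_PER_SERVER_YEAR = 1200
TERM_MONTHS = 36

cost_rent = RENT_PER_MONTH * TERM_MONTHS                 # 78,300
cost_hw   = SERVERS_PER_RACK * SERVER_COST               # 67,500
cost_sw   = SERVERS_PER_RACK * RHEL_PER_SERVER_YEAR * 3  # 32,400
total_3yr = cost_rent + cost_hw + cost_sw
print(f"3-year cabinet cost: ${total_3yr:,} "
      f"(~${total_3yr / TERM_MONTHS:,.0f}/month)")

# Revenue side, priced like EC2 at $1.20/hour per instance:
RATE_PER_HOUR = 1.20
VMS_PER_CORE, CORES_PER_BOX = 16, 6
per_box_day  = RATE_PER_HOUR * 24 * VMS_PER_CORE * CORES_PER_BOX  # $2,764.80
per_rack_day = per_box_day * SERVERS_PER_RACK                     # $24,883.20
utilization = 0.08
print(f"Rack revenue at {utilization:.0%} utilization: "
      f"${per_rack_day * utilization:,.2f}/day, "
      f"${per_rack_day * utilization * 365:,.0f}/year")
```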

How does this check out with what you're seeing?

Wednesday, June 16, 2010

Fuel Cells and Green Tech in the Data Center

An industry peer, Ken Jamaca from Silverback tech gave me a heads up on an article whose subject is one of great interest to me - Fuel Cells in the data center.

One reason it is of great interest is that the current electricity grid in the US is barely resilient, and has been for some time. Adding load to it does not help the problem, and given the lack of a US energy policy, new generation is hard to come by and getting new power plants approved is even more difficult. Not great news for folks looking to construct new data centers, as data centers are all about access to power. Period. The other reason is that I am embarking on a new data center venture that will bring a new data center platform to the market.

Wind has generated the most buzz and has been deployed the most in the US as a way to increase generation capacity and give us more electrons to consume in a 'green' way. Wind is great - so long as the wind is blowing - but since it is inconsistent (unreliable) as a power source and it's expensive to store the electricity, it serves as only part of the mix of total power available to customers. Solar has also generated a lot of buzz; however, the land area required for PV panels to be meaningful to a data center is so large that many solar installations on the roofs of large data centers and commercial buildings only provide enough electricity for parking lot lights and ambient lighting in hallways and common areas.

Fuel cells are getting more attention and the technology seems to be advancing as fast as it ever has. Bloom Energy has garnered the majority of the buzz in the market, and UTC (United Technologies) to a lesser extent, and there are some new entrants that have reached out to me to discuss their approaches and solutions.

As I have gotten into the nitty gritty of designing a system that is reliable, off grid, and able to support a multi megawatt installation, some interesting things have popped up that have nothing to do with the technology, but with a 100 year old set of policies, tariffs, and agreements that make it difficult for a company to adopt the technology - especially a data center that wants five nines. Let me explain a little...

One major factor is utility cost per kWh and per kW. Many utilities have 'demand charges' and 'standby' charges that will impact the economics of the approach and drive the design of a fuel cell system. Some also use what are called ratcheted rates or tariffs: should the data center need to transfer from fuel cell to the power grid/utility for backup, the demand drawn during that transfer sets the demand charge per month for the next 12 months.

So say you’re paying a $5,000/mo demand charge every month for 500 kW for a computer room. Then the fuel cell system trips off for even a few seconds, and the full kW demand of the facility becomes the billed demand for the next 12 months. If that 500 kW goes to 2,500 kW at $10.00/kW, the new monthly charge is now $25,000 for the next 12 months. Ouch.
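Here is a minimal sketch of that ratchet using the example rates above; real tariffs and riders vary by utility, so treat this as illustrative only.

```python
# A minimal sketch of a ratcheted demand charge, per the example above.
# Rates and loads are the post's example figures, not any real tariff.

def ratcheted_demand_charge(rate_per_kw, baseline_kw, peak_kw_after_trip):
    """Monthly demand charge before and after a transfer to utility power.

    Under a ratcheted tariff, one brief peak resets the billed demand
    for the following 12 months.
    """
    before = rate_per_kw * baseline_kw
    after = rate_per_kw * peak_kw_after_trip
    return before, after, (after - before) * 12

before, after, penalty = ratcheted_demand_charge(
    rate_per_kw=10.00, baseline_kw=500, peak_kw_after_trip=2500)
print(f"${before:,.0f}/mo before, ${after:,.0f}/mo after the trip "
      f"-- an extra ${penalty:,.0f} over the next 12 months")
```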

Developers need to pay close attention to the tariffs and their structure wherever they build these systems. The utilities clearly do not want onsite power in many cases and write the tariffs deliberately to keep out self generation. Some smaller municipal utilities may welcome exporting excess power to the grid as a peaker during the summer months, to help carry load when every home turns on the air conditioning during a heat wave. In other cases they may not want these solutions in their service area due to the adverse impact on transmission capacity on the local circuits and the upgrade costs to meet the need should the facility go down.

So a word to the States, counties, towns, and municipalities looking to attract data centers, jobs, and tax revenues: Electricity matters more than anything.

The ability for a facility to generate power for itself, especially low to zero carbon footprint power, and still be served by a utility as backup rather than primary power, flips the 100 year old electricity business model on its head. This creates a threat for the utilities, because there are now competitive technologies made by companies that aren't restricted by a 100 year old business model and will serve customers faster.

While the cost is still high for these newer 'greener' systems, the price comes down as adoption proliferates. The confluence of the technology, its increasing reliability, and the ultimate development of an alternative electricity ecosystem will seismically shift the models and markets.

I don't know about you, but after 100 years, I am ready to try something new.

Monday, June 14, 2010

The jury is still out....

I was catching up on some Monday morning reading, saw some coverage of a new low power server from SeaMicro, and was left scratching my head.

I am not a server guy. I am a data center guy, and I try to pay attention to systems and how they are designed and work together. I always find myself asking 'so what?' when I see a new product or service, because at the end of the day I really don't give a sh*t how cool/new/groundbreaking something is - I want to know why it's better than what I have. So I always ask: so what?

From a single tenant data center operator's perspective, I can see that this is interesting. Lower cost to do more computing = good. For the multi tenant data center, I don't see the good. Here's why:

Equinix sells a lot of cabinets. 4.8 kW cabinets. So if you buy these SeaMicro servers that pack a full cabinet's worth of compute into half a cabinet, you just doubled my density in half the space. Not good. Why?

Because a data center operator needs to get power provisioned from a power company. To do this, load letters are filed - a formal request for more power to a site. Let's say the load letter is for 13 megawatts in a 100,000 square foot building. The building consists of 10 computer rooms, each with 1.2 megawatts. That 1.2 megawatts is designed for 4.8 kW/rack, not the 8 kW that racks of SeaMicro SM10000s draw.

So now my electrical load can only support 5,000 feet of computer room space fully loaded, leaving me (or the customer) paying for the other 5,000 feet, stranding it, or waiting for more power assuming it's available. It breaks the model and isn't as attractive as something that fits my business model like a 4.8 kW rack.
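Here is a rough sketch of that stranded-capacity math, using the 1.2 MW room and 4.8 kW design density from above. The 10,000 sq ft room size is my assumption for illustration; at the SM10000's 8 kW per rack you strand roughly 40% of the floor, and if the density truly doubles you strand half - either way the room's economics break.

# Back-of-the-envelope stranded-space math for a fixed power envelope.
# The 1.2 MW room and 4.8 kW design density come from the example above;
# the 10,000 sq ft room size is an assumption for illustration.

def usable_floor(room_power_kw, design_kw_per_rack, actual_kw_per_rack, room_sq_ft):
    """How much of the floor you can populate when racks draw more than the design density."""
    racks_designed = room_power_kw / design_kw_per_rack   # racks the room was laid out for
    racks_supported = room_power_kw / actual_kw_per_rack  # racks the power can actually feed
    fraction = racks_supported / racks_designed
    return fraction, fraction * room_sq_ft

for actual_kw in (8.0, 9.6):  # SM10000 density, and a straight doubling of the design density
    frac, sq_ft = usable_floor(1200, 4.8, actual_kw, 10_000)
    print(f"{actual_kw} kW/rack: {frac:.0%} of the room usable (~{sq_ft:,.0f} sq ft), rest stranded")
# 8.0 kW/rack: 60% of the room usable (~6,000 sq ft), rest stranded
# 9.6 kW/rack: 50% of the room usable (~5,000 sq ft), rest stranded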

These are like containers. Containers break the model of a traditional data center in that, in a ~600 sq. ft. footprint, they can consume 500 kW - half a megawatt. So instead of a 5,000 ft room, you have a very dense 600 sq ft box and a lot of extra space. That is good if you like paintball and want to set up a paintball field inside for when you are provisioning servers, but for a business, it's a tough sell.

And as a result, SeaMicro may never get to the part of the pitch about lower operational costs.

Thursday, May 27, 2010

The thaw seems to be on...

In the past two days I have been fielding a ton of phone calls asking about my new venture and whether I need money. I can only suspect that there is a thaw happening in the capital markets, and investors sitting on cash earning single digit returns are starting to get impatient and are looking for low risk investment sectors.

The sweet spot for investment in the data center markets seems to be the $200M-500M range, and I can't figure out if that range is based on the numerous IPO filings by Telx, CoreSite, Interxion, etc., on the M&A market with deals between CyrusOne and Cincinnati Bell, ViaWest and Oak Hill, etc., or on the fact that only so many people know what it takes to buy into the space and, given where the bar is, not everyone can or wants to jump in (yet).

Expansions and new builds are being announced weekly, which means there is more inventory coming to market. However, it won't be instantaneous inventory, as the projects I have seen announced and others I know about from my Colo Mafia are all 9-24 months away from having space ready to move into.

So What?

This means that if you need megawatts now, the market is tight. It also means that there will be more sites to put into a site selection process, and having people you can turn to with expertise about the operations, the systems, and a company's inner workings will be even more important. The real estate brokers are aware of this fact, and I see many establishing data center practices - staffed with real estate folks, not data center folks. Real estate guys for the most part don't understand the systems of a data center, why density is variable, or why N+1 can mean different things to an ops person vs. a sales person.

Do they understand power distribution to the site and inside the site?
Do they understand telecom density? Peering and why that's an advantage?
Can they tell you why a facility is better than another or do they Google all the data centers in Ashburn VA and tour you through 6 of them when you really need to focus on one or two?
Do they understand what the data center needs to support?
Will they bring you to an NSA rated facility when you really need a fireproof locked filing cabinet?

With more inventory comes more choice. We love choice (usually). I use a simple formula to understand site selection:

number of decision makers/influencers * number of facilities = number of things you will need to sort through and spend time and money on

Lower numbers are better. I have encouraged companies to look at 1-2 facilities instead of 6 and spend the savings on bigger bonuses or an offsite meeting in St. Lucia in February, instead of flying in 5-6 people multiple times for multiple site visits. That time and money is better spent laying out the right facility than choosing it.
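For what it's worth, here is that formula as a throwaway sketch; the head counts and facility counts are just examples.

# The site-selection thrash formula: people involved times facilities toured
# equals the number of things you will sort through and spend time and money on.

def evaluation_burden(decision_makers: int, facilities: int) -> int:
    return decision_makers * facilities

print(evaluation_burden(decision_makers=6, facilities=6))  # 36 - the expensive way
print(evaluation_burden(decision_makers=3, facilities=2))  # 6 - the St. Lucia way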

Tuesday, May 25, 2010

Pretty sweet little piece of technology...

This morning in between calls to launch a new company I had the pleasure of checking out a new technology that is actually useful (at least to me) in the data center.

Viridity

Never heard of it? Me either, but a former colleague who knows I have been in the data center business a while says I should check it out. So I do.

After 10 minutes I wished I had a 50,000 cabinet data center to plug it into. Seriously. So what is so great about it?

First off, it'll auto-discover physical and virtual machines and where they are. Then it tells you the utilization of the physical server. Then (drum roll please) it tells you the POWER DRAW of the machine and the VMs on it. So what?

For the data center operator:

- It lets you see where the power hogs are
- It lets you see where hot spots are or will come from
- It lets you ensure that a customer (in an MTF - Multi-Tenant Facility) isn't in violation of a license agreement
- It will let you know when someone is close to their upper limit and put a call in before breakers trip
- If you want to consolidate a computer room it will let you know EXACTLY what you have running, what's tapped out, what isn't, and what the power needs to be in the new facility
- It lets you locate unused resources quickly and flag power hogs you may not know about

For STF (Single Tenant Facilities) or for customers in MTF's:

- It lets you see what power is actually being used and whether or not your charges are accurate
- It lets you plan deployments and consolidations so you know what is provisioned by facility, rack, or machine.

All in all very similar to Power Assure's product and probably worth a bake off...

Friday, May 14, 2010

Change of Heart Gartner? Confusion?

I picked a bone with Lydia Leong at Gartner over a post on her blog essentially saying there wasn't a space issue in the data center markets. Today I see that Ramesh Kumar at Gartner says the EXACT OPPOSITE - that power, space, and cooling are scarce.

I should also point out that yours truly made the very point that Ramesh did on Lydia's blog over a week ago. What changed?

So it got me wondering - is there just a ton of confusion over at Gartner about the data center market and how they cover it? Here is how it translates to me as a customer: you lost some credibility on this one, guys. Please get it straight - you're supposed to be the best...

Did Gartner do their homework after the first blog post on the state of inventory in data center markets and come to realize the markets really were tight?

Do they really understand what is out there, the differences, the nuances, the real deal in data centers from the systems perspective?

From density perspective? From the virtualization perspective?

I do. I will keep sharing what I know with those who want to understand what is and isn't out there, and arm you with the knowledge to help make the best decisions.

Thursday, May 13, 2010

If I hear any more Spin about The Cloud...

I don't know if anyone remembers the Simpsons episode Marge vs. the Monorail, but it could easily be overdubbed with the cloud, and we would have a humorous look at the noise that cloud has created.

I go back to my post from January 29th 2010 about my acronym for the Cloud:

Completely
Ludicrous
Offerings of
Unsustainable
Diversion

Folks, I hate to pee in people's Wheaties, but we've been here before many times. Mainframes did the same thing. They were big honking machines that got divided up and rented out by the hour/job/unit to users. Supercomputers - same deal, more power.

'Outsourcing' took anything not a core competency and allowed companies to lease back stuff, whether it was people, process, or other 'stuff'.

Managed services/leasing - pay for a computer and software (or several in a system) over time. Some were shared some were private.

Cloud - chop the system(s) up into components that can be paid for in a per unit fashion and users can lease the collective system or any component of it without a contract.

Tie it all together and it means companies no longer have to plan for capex; they can have an 'oh shit' moment, go online, get a petabyte and a couple hundred processors, and take the thought around problem solving out of the equation.

The downside is that IT is once again in a reactive vs. proactive mode of operation. They are willingly giving up any strategic value in the organization. The organizations are chasing the Cloud buzzwords like a junkie chasing a fix. Stop. Just stop.

With control comes responsibility, something that we seem so willing and driven to shirk to someone else vs. sitting down and calmly figuring out that:

A private cloud is nothing more than another way to pay for your systems.
A public cloud is nothing more than a pay as you go contract with a service provider.
There are costs and benefits to each that should be explored thoroughly before deciding what to do.

And if you have an 'Oh Shit', what an opportunity for IT to step up and do something right. No buzzwords, no bullshit.

So as we see the vendor press releases about 'Cloud Strategy', let's remember that they are telling us what we already know - there are many ways to pay for your stuff. We need to figure out which solution is best for us, as we have for the last 25 years, thank you. Then we'll let you know how we want to pay for it. Kind of like our mobile phones...

Tuesday, May 11, 2010

Did EMC Management smoke breakfast this week?

That is a reasonable conclusion after reading what Joe Tucci said at the EMC World event in Boston - 'Journey into the Private Cloud' - as in a cloud of smoke?

I understand that every Fortune 2000 IT company feels compelled to have a 'cloud strategy'. It's the new black. However, when a strategy is presented in a way that communicates alienation of some big technology partners (HP/Cisco) whose gear EMC relies on in some form or fashion to move data on and off EMC arrays, I sit back and wonder WTF were they thinking?

Here is the quote I am using from ZDNet:

Tucci also criticized the data center verticalization strategy that companies such as Hewlett-Packard and Cisco are taking, saying it will lead to a new kind of lock-in that will ultimately lend itself to inefficiency. He said EMC’s private cloud strategy swaps out verticalization with virtualization and allows all of your data center solution providers — even EMC competitors — to plug in.


So here are the things that make me glad I am not a shareholder and glad I am a customer:

1. Leading to a new kind of lock-in is what you want, Joe. That means people like your stuff, they buy your stuff, and they become dependent on your stuff to function. It's a nice, virtually guaranteed revenue stream.
2. As a customer, it's great that I can now try storage offerings from your competitors in a cloud, so when you decide to change the strategy to something more realistic and come back to get me to 'lock in', I now have options. Options that you gave to me - and your competitors. Thanks.

Look, EMC has said time and time again it wants to be an arms dealer. They couldn't care less who wins on the battlefield so long as bullets keep flying and they don't get bloody. The issue with this approach is that it's non-committal. What are you for? What are you doing vs. what are you not doing? What do you believe, vs. how many buzzwords need to be thrown around before people see that you have no clue about what you're doing?

So Joe, if you are listening, or should one of your colleagues stumble across my blog - you don't need a cloud strategy. Cloud is an exercise in accounting. It's mainframe all over again. Take a box, itemize its components, and lease it back to the user on a per unit basis. If you want a solid private cloud strategy, call me, and I will pull together leasing partners who will chop up a VPlex faster than a Benihana hibachi cook and lease it out, protecting your market cap, serving your customers, and still being an arms dealer.

In other words EMC needs a cloud strategy that isn't about cloud. It's about EMC's products being the best damn storage products they can make and enabling different ways for customers to figure that out.

Monday, May 3, 2010

Part Three of The Issue of Scale - So What?

The question 'So What' is probably my favorite question to ask when a great idea comes to mind, or I hear a pitch about something. If I haven't answered the 'so what' question in advance, then I have failed in specifying and conveying what matters. I have also failed my colleagues if I merely whine about an issue without coming up with at least one solution.

On the Issue of Scale, so what? Follow up question - what can be done?

The 'so what' question comes down to supply and demand. The supply of large footprints is dwindling and will only get tighter. That drives prices up in mature markets. Sure, you can get a deal in Oklahoma City, but I would put money on paying more to be in Northern Virginia, even outside the Ashburn/Reston data center nucleus.

I had posted a response to a post by Lydia Leong at Gartner.

She makes the argument:

If I’m going to believe in gigantic growth rates in colocation, I have to believe that one or more of the following things is true:

- IT stuff is growing very quickly, driving space and/or power needs
- Substantially more companies are choosing colo over building or leasing
- Prices are escalating rapidly
- Renewals will be at substantially higher prices than the original contracts

I don’t think, in the general case, that these things are true. (There are places where they can be true, such as with dot-com growth, specific markets where space is tight, and so on.) They’re sufficiently true to drive a colo growth rate that is substantially higher than the general “stuff that goes into data centers” growth rate, but not enough to drive the stratospheric growth rates that other analysts have been talking about.

My counterpoints were that virtualization is driving power densities up, not down. Colocation pricing is based on power density, whether expressed per square foot or as per-kW rent. As density goes up, so does cost, whether the cost is new CRAH/CRAC units, a new building, or a 50 MW transformer.

Second - more companies look to outsource anything that is not a core competency, including data centers, than look to build up in-house capabilities and staff. The construction of data centers gets more buzz than leasing does, but leasing still outpaces building every quarter. The companies that do build are single tenant facilities with the cash to do it. However, building one and running one are two different businesses.

Prices are increasing. 'Ready to go' supply (a.k.a. finished space) is taken in most major metro markets. That said, there will always be a company that will buy some business and take a loss leader anchor tenant; those decisions have many variables, like loan structures, financing vehicles, etc., that drive what a company will charge on a specific deal.

Renewals are being done at higher price points. The cost of capital to grow is very high, data centers are incredibly capital intensive up front, and companies doing renewals are seeing bumps in rates going forward. The counter seems to be to shorten the term, as if to send a message to the provider that 'if you're going to jack my rate then I am only doing a 12 month renewal'. You know what? The vendor likes that even more, because in 12 months there will be less space and higher rates.

So how does one deal with this?

Put ROFOs (rights of first offer) in your license agreements. They give you the ability to preserve space if a facility is filling fast. It may cost you more if a new customer is willing to pay more, but it beats having to run two sets of infrastructure in two separate facilities.

Investors - open up your checkbooks. If you got burned in the last build up, here's your shot at redemption. The window starts to close in 18 months, in that you want a project funded, rolling, and cash flow positive in 18-24 months. If you're looking for a fast return, stay out of this arena.

Operators - there are many deals out there to roll up underperforming assets, make a reasonable play, and expand with rent in place.

End users - take space now if it's available, especially in hot markets. If you think that virtualization will save you, it won't - your power draws will go up, and we have never created less data than the previous day/week/month/year. If you think I am wrong, look at it this way - every added megapixel in a new camera increases file size, things go viral, and what was one picture can become a dozen copies of a 2-megabyte photo. In other words, 2 MB * 10 copies is 20 MB, not 2.

Lastly, there is a reason that there are hot markets and data center clustering - it's network density. More carriers in a building means better performance, fewer hops and lower latency. Go to buildings with high carrier density.

Monday, April 12, 2010

The Issue of Scale - Part 2 - The Basketball through the Python

What a difference a week makes...

QTS announced they will bring up to 1M square feet to market after purchasing a former chip fab plant in Richmond, VA.

Digital Realty Trust announced that they are full, as is DuPont Fabros in Northern VA, and both will be expanding to the tune of ~100K feet each. In total that is roughly 120 MW; I am not sure of the time period over which the inventory will be rolled out.

So given that, coming into the year, about 9 MW were taken per quarter in Northern VA, and at least that much was leased in March alone, things are growing, not abating. That does not include the ~1200 MW the Government will need to start planning for as consolidation starts in 2011.

So this example brings me to the next issue of scale, which is the reality of expectations.

Specifically, expectations that the available space will be there when needed. If the Government or the Systems Integrators think that 1200 MW will just appear, then let's look at reality.

Let's say 12 agencies need 100 MW each. With 10-20 MW available in any given quarter in move-in condition - i.e., we tour it, we like it, we want it, the price is fair, where do I sign, the space is finished - this means the space will be there in ten years. Ten years to do a consolidation that has been slated to take two. For each agency.

At a cost of $1,200/ft, with 10k ft = 1.2 MW and 1,200 MW needed (100 MW per agency)... for 100 MW the capital needed is roughly $1B, for roughly 1M sq. ft. per agency. Times 12 agencies = $12B.

Investors, are you listening? That is $12,000,000,000 needed to fund building data centers that are green, efficient, and presumably modular in design and deployment. For one project from one customer. In a two year time frame. I mentioned that the power had to be green, right? Oh, and did I mention that gear needs to be ordered to replace the old stuff, moved if it's still good and in working condition, and all brought together within specific windows to minimize downtime? People can't NOT get their Social Security checks.

Using an industry estimate of 4 months to complete a 10k ft computer room: 1,000,000 square feet / 10,000 * 4 months / 12 = 33 years if we start now and build one room at a time. No, that does not factor in economies of scale, parallel building projects, etc.
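Spelling that arithmetic out, here is a minimal sketch using the planning figures from this post ($1,200/sq ft, 1.2 MW per 10,000 sq ft, 4 months per room); these are rough assumptions, not quotes from any builder or agency.

# The consolidation math above, spelled out. All figures are the rough planning
# numbers used in this post.

COST_PER_SQ_FT = 1_200      # $ per square foot, all-in build cost
MW_PER_10K_SQ_FT = 1.2      # critical power per 10,000 sq ft computer room
MONTHS_PER_ROOM = 4         # industry estimate to finish one 10k sq ft room
AGENCIES = 12
MW_PER_AGENCY = 100

sq_ft_per_agency = MW_PER_AGENCY / MW_PER_10K_SQ_FT * 10_000   # ~833,000 sq ft
capex_per_agency = sq_ft_per_agency * COST_PER_SQ_FT           # ~$1B
total_capex = capex_per_agency * AGENCIES                      # ~$12B

rooms_per_million_sq_ft = 1_000_000 / 10_000
years_one_room_at_a_time = rooms_per_million_sq_ft * MONTHS_PER_ROOM / 12

print(f"~{sq_ft_per_agency:,.0f} sq ft and ~${capex_per_agency / 1e9:.1f}B per agency")
print(f"~${total_capex / 1e9:.0f}B across {AGENCIES} agencies")
print(f"~{years_one_room_at_a_time:.0f} years to build 1M sq ft one room at a time")
# ~833,333 sq ft and ~$1.0B per agency
# ~$12B across 12 agencies
# ~33 years to build 1M sq ft one room at a time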

If we deploy 1M square feet at once, then maybe getting a shell up takes 2 years, but figuring out how to chunk it up is a different story - and also where the time gets eaten up.

How much of the space needs to be in a SCIF and/or adhere to DCID 6/9? Where is the failover site going to be to make things redundant? Is all the data and equipment at the same level of security? Should it be? How do we connect it all?

I can hear the Cloud pundits say - put it in 'the cloud!'

How many of them have 12MW of cloud deployed let alone 1200? Does the data stay in the US? Can they prove it? Are all the facilities secure? How secure? Why do I have to pay to move data from site to site? How many vendors, license agreements, leases, and operations people, technologies, and methodologies do we need to be aware of?

Some of the apps can move to the cloud, along with the associated data, and some of it already has; however, I am willing to bet that it is less than 1%. So this is a big opportunity, and I sincerely hope that we as an industry can help keep it simple, eat this virtual elephant in bites, and save money for the taxpayers.

Monday, April 5, 2010

The Issue of Scale...

I have been quietly spending the last few months contemplating scale - specifically, the scale of data centers and the impact of virtualization on the delivery of scale. This is the first installment of a multipart series discussing what is and what is not out there in the data center realm.

There have been few announcements about large data center projects worldwide over the past two years, mostly from Single Tenant Facilities (STF) like Apple, Google, Microsoft, and Facebook. I chalk this up to the capital markets being as dry as Melba toast on a summer day in Vegas: there hasn't been any capital to tap into to build new facilities, so the big companies with credit are about the only ones who can do a build.

This leaves the rest of the market with Multi Tenant Facilities (MTF) strapped for space at anything from the single-megawatt up to the hundreds-of-megawatts scale. The MTF markets are already experiencing stress in that the prime Tier III space is being gobbled up rapidly. I have had several discussions with folks outside the data center industry, or at best peripherally involved in it (they shop for new space once every 8-10 years), and they say 'I don't see what everyone is talking about, there seems to be plenty of space out there'. For 10 or 20 cabinets, I'm sure there is. For multiple megawatts from a single provider in a specific market? Ha ha ha ha... That's a good one.

In Northern Virginia, dozens of megawatts move every quarter, and have for the past year. Companies are reluctant to publicize it. Why? They don't want the remaining inventory sucked up or their ROFO/ROFR triggered because someone else needs a big chunk of space. There are two companies with buildings in Northern VA that sat mostly vacant a year ago; one has sold out and the other will be full by fall. Another company on the West Coast has pre-leased the entire first phase of their project before it was completed. Now that market has only a few pockets for a few dozen cabinets, when it seemed like there would be plenty of inventory for 2010-2011.

At the Data Center Dynamics conference in New York a month ago, Jim Kerrigan, one of the speakers, said that "32% of the leased data center space is up for renewal by 2013," which means even MORE stress on the market is on the horizon, with very little building going on and customers expanding out of necessity if nothing else.

Hopefully this helps establish a baseline for people trying to wrap their heads around the two sides of the discussion about whether or not the data center market is tight on inventory. If you need a few cabinets in a few markets, you're fine. If you need 10 MW in 5 markets... not so much...

Monday, March 22, 2010

I am Headed to FOSE

Are you going to the FOSE Conference in Washington DC March 23rd-25th?

I will see you there. I will be attending a few sessions, meeting with friends and colleagues, and if you want to catch up, talk about the State of the Union, let me know - mark.macauley AT gmail. com

Monday, March 1, 2010

I will be at Data Center Dynamics in NYC 3/2-3/3

It's at the Hilton in midtown. Is anyone else going?

Wednesday, February 24, 2010

Data Protection (Movement) in the Cloud

This gives us a visual representation (heat map) of cloud data privacy policies. It is VITAL to understand this if you do any work with the Government. Here's why:

The data cannot leave the United States, and if you use Amazon or other cloud offerings from global players there is no way to ensure that data and workloads stay in the US.

So while your Cloud Sales rep may be local, look under the sheets and see where the data is and could be at any given point.

Thursday, February 11, 2010

Cisco Overlay Transport Virtualization (OTV) question

Does anyone know what the failover capabilities are with OTV?

Specifically, I want to know how long it takes for workload to be moved and connections restored. Example:

Data Center #1 has a power issue; the site switches to battery and gives me 8 minutes. Can I have another site (DC #2) up/failed over within the 8 minutes with no data loss, provided that the systems are configured correctly?

Tuesday, February 9, 2010

More details on Cisco OTV

http://jasonnash.wordpress.com/2010/02/09/cisco-announces-otv-the-private-cloud-just-got-more-fun/

I was going to get into another layer and Jason did. I lifted this from his site (above):

What OTV does is that it allows you to connect two L2 domains that are separated by a L3 network. Basically, it’ll encapsulate Layer 2 traffic inside an IP packet and ship it across the network to be let loose on the other side. In this way you can make two logically separated data centers function as one large data center. The beauty of OTV is that it does away with a lot of the overly complicated methods we previously used for this sort of thing. It’s really, really simple. The only catch is that you need Nexus 7000s to do it today. How simple is it? Here is all the configuration you need on one switch in your OTV mesh:

otv advertise-vlan 100-150
otv external-interface Ethernet1/1
interface Overlay0
description otv-demo
otv site-vlan 100
otv group-address 239.1.1.1 data-group-range 232.192.1.2/32

That’s six lines, including a description line. Basically, you enable OTV and assign an external interface. The switch, like all good little switches, keeps a MAC table for switching frames but for those MACs on the other side of the L3 network it just keeps a pointer to the IP of the far end switch instead of an interface. It knows that when a frame destined for a MAC address on another switch arrives to encapsulate it in to an IP packet and forward it out. The switches all talk to each other and exchange MAC information so they know who is where. This communication of MAC information is handled via a multicast address. Very simple, very elegant. All done without the headaches of other tunneling or VPN technologies.

Monday, February 8, 2010

Cisco's OTV - Overlay Transport Virtualization

I was taking a look at Cisco's OTV capability they are rolling out on the Nexus gear and my first impression was - wow!

The gist of it is: OTV is a new feature of the NX-OS operating system that encapsulates Layer 2 Ethernet traffic within IP packets, allowing Ethernet traffic from a local area network (LAN) to be tunneled over an IP network to create a “logical data center” spanning several data centers in different locations. OTV technology will be supported in Cisco’s Nexus 7000 in April 2010, and existing Nexus customers can deploy OTV through a software upgrade.


Cisco says its overlay approach makes OTV easier to implement than using a dark fiber route or MultiProtocol Label Switching (MPLS) over IP to move workloads between facilities.

“Moving workloads between data centers has typically involved complex and time-consuming network design and configurations,” said Ben Matheson, senior director, global partner marketing, VMware. “VMware VMotion can now leverage Cisco OTV to easily and cost-effectively move data center workloads across long distances, providing customers with resource flexibility and workload portability that span across geographically dispersed data centers.”

“This represents a significant advancement for virtualized environments by simplifying and accelerating long-distance workload migrations,” Matheson added.

My opinion on why this is important centers on Layer 2. Layer 2 is where peering happens. Peering allows companies to move data around without paying transit for it. Bandwidth providers agree to pass traffic from one network to another via a cross connection between the two networks. Instead of buying a point to point OC-192 between data centers, a company can colocate IT gear in a data center on a peering point, buy a port on a peering exchange like Any2, and cross connect to other networks that can move traffic around at Gig and 10 Gig (and up) speeds. The connections are in Layer 2.

A pertinent example would go like this:

An online gaming company colocates 50 cabinets in One Wilshire or 900 N. Alameda. They buy a port on the Any2 Exchange and set up 5 cross connections to different networks that peer there - Level 3, AboveNet, Tata, Vodafone, and NTT, as an example. As they expand their global footprint, they can move VMs - workloads and game play - from one data center to another using the cross connects, and not have to buy large point-to-point pipes from one facility to another.

Another example: a US based space agency I have done some work with has containers that house a cloud offering (OS) in them. One of their satellites takes 100 pictures of the rings of Saturn one morning and needs to distribute those massive images to thousands of constituents worldwide. In the past they may have purchased multiple 10 Gig pipes from their Center to a handful of hubs they interconnect with. Big money for big bandwidth, which they need. Using this OTV technology, they buy a fat pipe from their Center to 55 S. Market in San Jose (3 miles or so), buy a port on an exchange that peers there, and now they can move those photos, videos, etc. to their other hubs that use the same peering exchange, without having to pay for the bandwidth between 55 S. Market in San Jose, CA and, say, Chicago - 2,167 miles. That pays for the deployment of other containers where there is cheap, 100% green power; peering expands the footprint's network; and if a better spot becomes available, they move the container after offloading the workload to another container or two.

This is one of those game changing technologies in how people can deliver compute to customers. For large scale deployments, especially those that must be green and use wind or nuclear generated power, this is a huge advantage. You now have the ability to drive physical and virtual movement of workload based on criteria other than - is there (do we have) a data center there?

I will be watching this solution closely.

Wednesday, February 3, 2010

The Green IT shell game

I was doing research for an RFP yesterday about carbon footprints as they relate to PUE, data center operations, and IT resources in general. What I realized was that IT still looks at itself in silos, not systems. Let me explain...

This morning I got my copy of eWeek, and on the cover was a pointer to Green IT Solutions - The Real Deal - on page 16. I'll admit I didn't spend a lot of time on the article because it was just like 100 others I have read the past few months that follow a simple formula:

Virtualization = Green

Um, not exactly...

Let me take you through a systems view of carbon in the data center operations:

The PUE of most data centers is 2.0 or higher. This means that for every dollar spent powering servers, an additional dollar is spent on common facilities electricity to support them. In the container model, the PUE is 1.2, which means 80 cents of every dollar that would have gone to common facilities/infrastructure is saved. For a 500 kW deployment with a $0.10 per kWh charge, a data center with a PUE of 2.0 costs $73,000 per month for electricity. With a PUE of 1.2, the cost is $43,800. That is a savings of $29,200 per month, and all of the electricity is green.

To produce a kWh of electricity from coal, about 2.3 lbs of carbon are emitted (see http://cdiac.ornl.gov/pns/faq.html). A 500 kW environment uses roughly 365,000 kWh per month; multiplied by 2.3 lbs per kWh, that is 839,500 pounds, or about 420 tons of carbon per month. Since wind eliminates roughly 98% of carbon emissions, the net carbon footprint drops to about 8.4 tons per month (see http://www.parliament.uk/documents/upload/postpn268.pdf).
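Here is that cost and carbon arithmetic as a small sketch. It assumes a 730-hour month and uses the $0.10/kWh, 2.3 lbs per coal-fired kWh, and 98% wind reduction figures cited above; the carbon line is based on the 500 kW IT load alone, as in the post.

# PUE cost comparison and coal-vs-wind carbon math from the paragraphs above.
# The 730-hour month is an assumption; the other figures come from this post
# and the ORNL / UK Parliament links cited above.

HOURS_PER_MONTH = 730
RATE_PER_KWH = 0.10            # $ per kWh
LBS_CO2_PER_COAL_KWH = 2.3     # lbs of carbon per coal-fired kWh
WIND_REDUCTION = 0.98          # wind cuts ~98% of those emissions

def monthly_electric_cost(it_load_kw, pue):
    """Total facility electricity bill for one month at a given PUE."""
    return it_load_kw * pue * HOURS_PER_MONTH * RATE_PER_KWH

def monthly_coal_tons(it_load_kw):
    """Tons of carbon per month if the IT load is fed by coal (IT load only, as in the post)."""
    return it_load_kw * HOURS_PER_MONTH * LBS_CO2_PER_COAL_KWH / 2000

legacy = monthly_electric_cost(500, pue=2.0)     # $73,000
container = monthly_electric_cost(500, pue=1.2)  # $43,800
coal = monthly_coal_tons(500)                    # ~420 tons
wind = coal * (1 - WIND_REDUCTION)               # ~8.4 tons

print(f"PUE 2.0: ${legacy:,.0f}/mo  PUE 1.2: ${container:,.0f}/mo  savings: ${legacy - container:,.0f}/mo")
print(f"Coal-fed: ~{coal:,.0f} tons carbon/mo  Wind-fed: ~{wind:.1f} tons/mo")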

When I look at a virtualization = Green example I see some major gotchas:

1. New equipment will likely need to be purchased. The manufacturing process is not carbon-light.

2. This means more mercury and other nasty stuff in the new equipment, plus the old equipment it replaces. Recycling gets you a couple of brownie points.

3. If a new data center is constructed, leased, or expanded, there is the carbon cost of manufacturing, transporting, and assembling all of the components. If it's still powered by coal - you gained nothing.

My point being: virtualization is NOT (or only kinda) green, and it delivers a measurable but not significant reduction in carbon footprint. My personal stance is that you are far better off getting your utility to bring wind energy into your data center and cutting carbon by the boatload. Better yet, get wind produced power, use an existing data center someplace cool, and open the windows when you can.

The other thing I found very amusing in my research was the data center with a LEED Platinum rating that was powered by multiple 50+ year old coal plants. It's like tinting mercury pink to make it 'safe', isn't it?