Lawyer Bait

The views expressed herein are solely the author's personal views and opinions and do not represent those of any other person or organization.

Tuesday, December 13, 2011

PUE, DCiE, and other stuff that doesn't matter much...

The Gartner conference was one of their best-attended events in 2011, proving yet again that the space is hot and appears to have a bright future. Like other hot sectors before it, it is trying to mature through standardized measurements that vendors, users, and analysts will embrace - measurements that give owner-operators bragging rights and a new area to compete in. PUE, DCiE, and other yardsticks are competing to be the 'best' yardstick for efficiency.

Sorry to pee in your Wheaties folks - we already have one. Cash.

I have done enough deals to know that PUE matters right up until the financial analysts get involved. Once they do, it comes down to costs. Total costs. Lowest cost wins. Not every time, but show me a CFO who, as a rule, authorizes paying more for something than it's worth.
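For reference, PUE is total facility power divided by IT equipment power, and DCiE is its reciprocal expressed as a percentage. Here's a minimal sketch - the facility numbers and utility rate are made up for illustration - showing how the metrics, and the cash, fall out:

```python
def pue(total_facility_kw, it_equipment_kw):
    """Power Usage Effectiveness: total facility power / IT equipment power."""
    return total_facility_kw / it_equipment_kw

def dcie(total_facility_kw, it_equipment_kw):
    """Data Center infrastructure Efficiency: reciprocal of PUE, as a percentage."""
    return 100.0 * it_equipment_kw / total_facility_kw

# Hypothetical facility: 1,500 kW at the meter, 1,000 kW reaching the IT gear
total_kw, it_kw = 1500.0, 1000.0
print(pue(total_kw, it_kw))    # 1.5
print(dcie(total_kw, it_kw))   # ~66.7%

# The part the CFO cares about: the annual electricity bill
rate_per_kwh = 0.08            # assumed utility rate, $/kWh
annual_cost = total_kw * 8760 * rate_per_kwh
print(round(annual_cost))      # 1051200, i.e. ~$1.05M per year
```

Two facilities can post identical PUEs and still land very different annual bills, which is why the yardstick that ultimately wins the deal is the dollar figure on the last line.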

One caveat: this is more true of multi-tenant facilities than single-tenant ones. Single-tenant, purpose-built data centers can do what they want so long as it makes financial sense and delivers on the business objectives of that company and THAT company only. Multi-tenant facilities must be far more INCLUSIVE of a wider array of requirements to serve the greatest number of clients and their desires.

Both single- and multi-tenant data centers will have a mix of manufacturers, densities, layouts, loads, and preferences. The multi-tenant facility also needs to factor in broad compliance, placement of densities, weight, and other highly variable nuances. Like Rick on Pawn Stars says - 'You never know what is going to walk through that door.'

Thursday, December 8, 2011

Gartner Data Center Conference brain dump...

I just got back from 3 1/2 days in Las Vegas at the Gartner Data Center Conference, and will be writing about many of the things they presented, only in a lot more detail. In fact, the #1 piece of feedback I heard was that the presentations are more sizzle than steak, and that Gartner has to do a WHOLE lot more to provide the details behind what they present.

Monday's keynote was pretty good and was given by Dave Cappuccio. It covered many stats and numbers that were fun to listen to, and I thought it was a decent way to ease into a conference. Some of the interesting nuggets I heard:

30 billion pieces of content were added to Facebook in the past month
Worldwide IP traffic will quadruple by 2015 - care to guess why?

Data Centers consume 100 times more electricity than the offices they support
College students' texting-to-phone-call ratio is 98.4% to 1.6%
Within 4 years the bandwidth I/O per rack will increase 25x

Needless to say, this all points to growth - whether a company is prepared for it or not. There are some other topics I will be jumping into over the next few weeks, as Gartner left a lot to be desired and discussed, and in some cases was pretty far off. So stay tuned and let's see what I come up with once I shake the red-eye jet lag...

Wednesday, November 9, 2011

Is innovation exclusively for single tenant datacenters?

I am a data center geek. I love to read about them, see who is doing what, learn about new stuff and approaches that are being implemented - all of it. What I have begun to notice is that the innovation that is happening - while totally awesome - will never fly in 95 out of 100 facilities in the world. I have begun to ask the question 'why' to other geeks and data center users.

Without question the number one response is: because when you run a single-tenant facility, you can take risks and do things that mitigate a common set of risks - and since one company pays, they can do what makes sense for their business alone.

Others include - 'because they have an innovation budget'; 'it's about controlling their destiny'; 'the penalties for an outage are not as severe as outside the single tenant walls'.

In my opinion it comes down to a combination of factors - most of them mentioned above, with one exception: evolution. As companies move on from a cabinet in the ghetto colo company, they evolve to require more choices and options, different configurations, and vendors who deal with companies the size they hope to become. Once they scale up to ~10MW (100,000 square feet), they evolve into looking at finish-to-suit deals and finally building their own. Why do I think this?

I saw it happen with Facebook. They started by leasing cabinets in a few key markets, scaling that, then taking cages, then taking suites, then floors of buildings and now are building their own. Their headcount for their data center operation grew to that of a mid size provider, and there does come a point where it is more cost effective to build your own than to have to keep moving like a shark looking for open suites, floors, or whitespace.

I have seen it happen in reverse with many banks too. They built their own, leased where they wanted, acquired with abandon, and ended up with an impressive footprint. Then they figured out - as many did - that the network was more important than the real estate, and the consolidation began: fewer, huge-footprint super centers, because it was more effective to run a few huge facilities than dozens of all different sizes.

So when I think about why all of the innovation appears to happen in the huge facilities, I look at the evolution of Apple, Facebook, Google, ING, Citibank, Morgan Stanley, etc., and they all find a different path to the same location - a risk-averse facility that makes economic sense. Innovation is tied to reducing risk or cost, and when you are the only one who has to live with the decisions and their outcomes, you can innovate freely. Single-tenant facilities will continue to out-innovate multi-tenant facilities, which have an exponentially larger set of risks and requirements to service.

What do you think?

Thursday, September 8, 2011

Google's Data Centers use 26 Megawatts - What's the fuss?

In the past 24 hours there has been a ton of press about Google's announcement that they use 26 million watts of electricity. Put another way, that is 26 megawatts. And put yet another way: what is all the fuss about over 26 megawatts?

Is it that they ONLY use 26 megawatts?
Is it that Google is so efficient that they run this gigantic brand on such a small amount of electricity?

That has to be it, because folks, 26 megawatts in the data center business is a medium size operation.

Just for comparison's sake, Vantage Data Centers is building a campus twice that size in Santa Clara. Digital Realty Trust has a project in Dallas that is 4 TIMES that size to service clients. ByteGrid Holdings LLC has a 9.6 megawatt facility - about one third the size of Google's entire operation. Just sayin'.

What I have also yet to review fully is the carbon footprint. It was mentioned in another, better article that they get 25% of their electricity from renewable sources. Not LOW CARBON sources, but renewable sources. The thing I have paid attention to for years is what I call the carbon chain - the carbon footprint from start to finish on a project, and then the impact on an ongoing operational basis. And I'm sorry, buying a carbon credit doesn't count. That's like buying recycled paper with Monopoly money.

The article mentions that Google builds their own facilities. What about the carbon it takes to manufacture the steel, the concrete, the copper, and the other raw materials used in a facility? Then there is the transportation. How many diesel-burning trucks are hauling heavy loads of new equipment, produced from scratch from raw materials that themselves had to be sourced and shipped... you get the picture.

Why not re-use a facility as a rule vs build new? Why not cap the radius of transport of newly manufactured goods to 100 miles or less? Why not look at the whole impact and subsequent carbon footprint and measure and improve that? Just sayin'...

Still, great numbers by Google.

And a side note to Noah Horowitz at the Natural Resources Defense Council, who '...cautioned that despite the advent of increasingly powerful and energy-efficient computing tools, electricity use at data centers was still rising, as every major corporation now relied on them. He said the figures did not include the electricity drawn by the personal computers, tablets and iPhones that use information from Google’s data centers.

“When we hit the Google search button,” Mr. Horowitz said, “it’s not for free.”

Trying to link energy usage of other corporations, personal computers, and iPhones to Google's numbers is like blaming a person's alcoholism on the fermentation process. It's a stretch at best. And what are you going to do to stop this reckless usage of the most efficient data center footprint on the planet, stop Googling? Didn't think so. Me neither.

Nothing's free, but this data shows that if we are going to search and want to be green about it: Go Green, Go Google.

Thursday, August 25, 2011

Hybrid Cloud = Same Sh*t different Offering?

I have had a number of conversations the past two weeks about Cloud computing and what companies are asking for and what they are actually buying. My observation is this:

Companies want providers to offer public and private cloud elasticity, but only buy hybrid.

This follows another form of pretzel logic (yes, I like Steely Dan. A lot.) - the folks who say they need to be outside of THE blast zone. (What zone? For which type of blast?)

First one first - duh. Was there ever ANY other choice than the hybrid cloud? Not all data is meant for public consumption, in spite of compliance rules trying to impose transparency (visibility), so the notion of a purely public cloud was right out of Woodstock, man. The public cloud was exciting because it took the pressure off IT departments to plan way ahead of stuff. If they needed capacity, they went out and got it someplace that had it, and didn't have to go to a CFO and ask for $500K of gear to support the new marketing initiative - or cover their *ss when they underestimated the website traffic when Justin Bieber played a benefit concert for Hurricane Irene survivors. It solved a lot of unmaterialized logistics and scoping problems, and then it was branded as insecure - and we couldn't have that in the age of WikiLeaks and Anonymous hacking, could we...

Then the pendulum of cloud swung back to private cloud. Well, by my assessment, private cloud was just another way to get billed for computing by the hour. Private cloud was becoming the No-tell Motel option, charging by the hour. If you wanted something quick and dirty, with the movie titles not printed on the receipt, you could get your resources by the hour. In this case all the data, the network, and everything about the environment was kept out of plain sight and secured.

Then it's as if the blinking light went steady on us - we can have both things: the ability to not have to know our resource needs in advance AND the ability to keep sensitive stuff more secure.

Is it just me, or isn't this what managed colocation and managed services have been doing for the past 20+ years?

Thursday, August 18, 2011

Why are containerized modular data centers the new Black now?

What took you so long?

I was at Data Center Dynamics in Washington DC on Tuesday, and you might have thought it was Modular Dynamics instead. It seemed as if the data center world finally came around to what I have been seeing, studying, deploying, and writing about since 2008. That's not to say that modular solutions have gone mainstream yet, but the conceptualization has matured a lot, as expected.

HP had a scale model of the EcoPod, which was cool to see since up to that point I had only seen PowerPoint slides about it; I have yet to see the real thing. PDI was showing off their modular solution as well, which will be watched closely as they fend off some patent infringement allegations. ActivePower had their PowerHouse scale model on display too, and given the coziness between HP and ActivePower in recently announced deployments, I could easily see them sharing a booth at future trade events.

ByteGrid was the only data center owner-operator at the event able to speak to containers - and openly talked to several people about the good, the bad, and the stuff that trips you up on deployment, with specifics to back up the discussions. The three other owner-operators there - GigaPark, QTS, and Powerloft - weren't talking about supporting modular solutions.

Based on what I saw, acceptance has increased dramatically, demand is rising right along with it, and the Achilles' heel that was there three years ago is still there today - being able to answer 'So where can I put one of these things?' with something other than 'Wherever you want'. ByteGrid is on to something and will be watched.


Tuesday, July 26, 2011

Rethinking the Value (and cost) of data

I have had a number of conversations recently with some hospitals about storage requirements, and specifically where to put more gear as their data centers burst at the seams - or, in some cases, become structurally weakened by floor loads. What I figured out pretty quickly is that there is an issue fairly ubiquitous to these organizations, and one I believe spills over into any firm that stores data: all data is treated the same, and treating it all the same is costing firms millions each year.

Here is one example:

A hospital is looking for data center space to grow their storage footprint into, as they have tapped out every watt and square foot they have in the hospital. That's not even the real issue. The real issue is that they have so many racks of gear and heavy storage arrays sitting on hospital floors with insufficient load ratings that the hospital is experiencing structural issues. Floors are sagging under the weight of all the data being stored. Literally.

Many vendors they have talked to simply tell them that's unfortunate, but they need to buy more storage. However, the vendor won't get the sale unless his uncle is in the construction business and can structurally retrofit higher floor loads before the new arrays show up. So what does the hospital do?

Hospitals see patients, maintain facilities, manage compliance, and administer services. Their IT guys set up storage arrays and networking equipment and manage the applications that run the business. They are not in construction and they are not data center experts, and yet they are the go-to guys to figure all of this out.

The solution we began to discuss - and that I wanted to share - is that a fundamental change MUST take place in how they think about data: storing it, managing it, accessing it, and realizing it's NOT all the same. The other thing that had to happen was to look at the way their business operates (no pun intended) and structure things based on how the business runs - not on how their data flows today, but on how it needs to flow.

Here is a specific example -

New patients create a lot of new records: new paperwork, new insurance information, new MRIs, new test results, new billing codes, new invoices, etc.

Existing patients have this data on file, and their data footprint changes with an office visit, an MRI, a prescription, and all of the associated charting needed to document the recent activity and results.

What are the commonalities of the two different kinds of patients? They create data around events - visits, procedures, tests. What are events tied to? A date. Can the date field be used to assess the freshness of data, and another date identifying the next event indicate the likelihood of accessing that data? Hmmmm. Interesting thought stream just started.

Where we got to was that whether or not the patient was new or existing, their events drove the need to access data. A new patient would likely need to be seen again shortly and would need to have their records accessible whenever. An existing patient who came in to have a sudden sports injury looked at would probably need to have their data file accessible since there would be consults, MRI's/X-rays, referrals, etc. taking place rather immediately.

A patient on a 'check in once a year' cycle doesn't need their data accessed the other 363 days a year, on average. They come in, get looked at, maybe a test is required (but we know what it is ahead of time). So why is the 'active' patient's data sitting with the 'maintenance' patient's data in the same facility, with the same cost, and with the same SLA (service level agreement) or DAA (data access agreement)? Why would you pay for 363 days of something you don't need? I equate it to paying for a rental car in Seattle 365 days a year while I am only in Seattle two days a year - why would I do that?
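The event-driven idea above can be sketched in code. This is a toy illustration, not a real records system - the function name, thresholds, and tier labels are all assumptions made for the sake of the example:

```python
from datetime import date

def storage_tier(last_event, next_event, today):
    """Pick a storage tier from event recency/proximity (illustrative thresholds)."""
    days_since = (today - last_event).days
    days_until = (next_event - today).days if next_event else None
    # Recent or imminent events -> keep the record on fast, expensive storage
    if days_since <= 30 or (days_until is not None and days_until <= 30):
        return "hot"
    # Activity within the past year -> mid-tier storage, relaxed access SLA
    if days_since <= 365:
        return "warm"
    # The 'check in once a year' patient, the other 363 days
    return "cold"

# Sports-injury patient seen two weeks ago: data stays hot
print(storage_tier(date(2011, 7, 1), None, date(2011, 7, 15)))              # hot
# Annual-checkup patient, next visit months away: archive it cheaply
print(storage_tier(date(2010, 6, 1), date(2012, 5, 1), date(2011, 7, 15)))  # cold
```

The point of the sketch is that two date fields the hospital already has - last event and next scheduled event - are enough to stop paying hot-storage prices for cold data.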

The point here is that yes, they do need more storage. They can't put it where they usually do - in their data center - and need to find someplace else to expand to. But before they just go out and buy new arrays, new cabinets, new servers, and lease new space, they MUST look at things in a new way, so that they are not only solving the immediate issue - 'we need more storage' - but the fundamental one: all data is NOT the same, and it does not need to be managed the same or cost the same.

Thoughts?

Wednesday, June 15, 2011

ByteGrid Launched


I thought it was high time I got around to announcing the launch of my company - ByteGrid. As the name implies it is the fusion of data (Byte) and Electricity & Telecom infrastructure (Grid) and since we are a data center company, fitting as a short & sweet description.

A few friends have encouraged me to blog about the whole experience, and now that we're a real company I find I have a lot less time to tell the whole story. But a few things are important for others to know, and were extremely important for me personally to learn:

1. Building a company is difficult.
2. Starting a company is more difficult.
3. Standing on the other side of the starting line, with the finish line out ahead of you, is incredibly rewarding and puts the difficulty in perspective.
4. Many friends and family will be supportive the first 30 days after the decision to go out and do something on your own, and downright mean and skeptical from then on. They are more scared than you are.
5. Do not EVER let someone talk you out of what you know and believe to be the right way to do something, especially when you hear 'If you changed _________ you would get funded faster...' or anything else that dilutes your vision. The vision is yours, not theirs, and only you know the right way to execute it. A lot of people have money. Few have a vision, and even fewer have the intestinal fortitude to stay true to it.
6. Once you take on other people's money, they have a big say in how things get done. And they should.
7. If you are not used to constant change, competing demands, and like things nice and orderly, you won't like what you're doing. The best laid battle plans change the instant the first shot is fired.
8. Be intimidated by no one. You did something that few people ever do, and even fewer stick with for any length of time, and if they don't understand that, they won't, and keep moving.
9. You won't do it alone. After I brought on partners, it was amazing how quickly things came together and how the right complement of skillsets balanced one another. In our case my partners added deep financial expertise, deeper operational expertise, and legal expertise to my sales talents. We are a well-oiled machine and delegate better than any team I have worked with or for, because we know who is best equipped to handle a situation, in spite of our egos.
10. If you start a company with the sole aim of making a shit ton of money, then almost every decision you make will be short-sighted to that end. If you go into business for yourself because you love it, you will build something better than anything else you know of - a natural extension of who you are - and the shit ton of money will follow. Your decisions will be sound and thought out, and you will decide what the right price for your efforts is, not a spreadsheet.

Maybe this helps some of you get off the dime to do something, or keeps others from doing something they are not prepared to do. Either way, it's my experience, my opinions, and the next chapter has yet to be written.

Check out the ByteGrid website and take a look at the first data center we acquired. It's a Tier IV gem, and I will blog next about why we bought it and why it is a fantastic facility. I will of course be biased; however, I backed up how solid it is with a lot of money - so I put my money where my mouth is too.

If you want a copy of my data center site selection guide - it's still available. mmacauley at bytegrid. com

Wednesday, April 13, 2011

The Evolution of Computing - Have we stopped dragging our knuckles?

The innovation of intangible things dependent upon tangible things has been the model for computing as a whole, practically since the UNIVAC I hit the scene. Software - from word processing to spreadsheets, printer drivers, web browsers, Java apps, and virtual machines - is all intangible intellectual-property innovation that depends on tangible things to be created, to operate, and to add value to: keyboards, green screens, motherboards, video cards, speakers, you get the idea.

One thing I have been spending a great deal of time thinking about is which side of the equation - the intangible or the tangible - will be cannibalized first?

We have seen the convergence most noticeably in electronics. Remember when you had to have a turntable, cassette deck (or reel-to-reel, or 8-track), amplifier, and radio tuner to have a 'stereo' system? Then it was CDs, and now the move from CD to mp3 (or other digital format) to just 'music' is happening. The battleground there is the format, which is a pissing contest that no one wins, in my opinion. Why? Because there is always someone who can think more creatively than the developer whose creativity was used up creating the format, and who will build an interpreter/converter intangible to make it all work.

The intangible side of the equation has seen its share of cannibalization (aka M&A), where entire companies that created a software program or operating system are now simply features in a larger 'suite'. Think widgets, apps, and all of the other components that get stitched together to make a new intangible thing in the 'cloud'.

So when I think about all of the noise, innovation, branding, positioning, and other terms used to describe the marketplace, two groups of tangibles are the veneer over all the intangibles - the source point and the end point.

In essence I am starting to believe that the endpoint (mobile device, TV, tablet, other) and the sourcepoint (data center) will be the only things that matter in the market. The UI will be embedded in the endpoint; everything between sourcepoint and endpoint is just zeroes and ones. People (consumers/viewers) couldn't care less about the OS, the application under the covers, or anything else intangible. I think Apple gets this better than most (I am NOT an Apple fan at all, BTW); however, the sands are shifting and Apple is losing market share to Google daily. While Apple and Google have both built endpoints and massive-scale data centers, Apple continues to offer a black box of an OS and apps, and to alienate Adobe and other platforms. Microsoft has tried this for years - they don't own the endpoint, and they have some data centers for their Live apps but nothing as established as what Apple and Google have. And Microsoft loses market share every day as well - even on the desktop.

So as the next step in the evolutionary process I can't help but think that things will converge - and in the process the perception of what is vital will change right along with it and distill down to data centers (sourcepoints) and devices (endpoints).

Content will be the fuel for the convergence - and the innovations in the presentation of content will drive evolution at the endpoint. Because of this, the sourcepoints will be more reliant upon network density for cost-effective distribution, with latency being the key variable. We are already seeing this with Facebook and Apple leasing and building sourcepoints, and with recent announcements by Dell to build 10 new sourcepoints. Businesses that can own both the sourcepoint and the endpoint are making investments in owning both ends of the content consumption chain.

Your thoughts?

Tuesday, March 15, 2011

The Seismic Shift of Data Center Planning

Friends of mine know that I have been beating on a drum for over 15 years about proper site selection for data centers. In fact I have recently blogged about it again and with all that is happening in Japan, it got me thinking even more about it. The other thing I have also spent more time thinking about is disaster recovery for obvious reasons.

I will say it again - site selection is the most important feature of a data center. The 9.0 earthquake reinforces that in a big way. I would not want to be a data center owner/operator in Silicon Valley - not 10 years ago, and not now. The two biggest reasons are:

1. Seismic activity
2. Logistics post disaster

The first reason is more obvious today because of the Japan earthquake. While the facilities there were well designed in and of themselves, they were not designed to withstand a 9.0. A logical oversight, in my opinion: if the worst earthquake to date was a 6, designing for roughly a thousand times the ground motion - and tens of thousands of times the energy - seems excessive. Until it happens.
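For scale: moment magnitude is logarithmic, so the jump from a 6.0 to a 9.0 is far bigger than it sounds. A quick back-of-the-envelope using the standard relations (ground-motion amplitude scales as 10^M, radiated energy as 10^(1.5·M)):

```python
def amplitude_ratio(m_small, m_large):
    """Ground-motion amplitude scales as 10^M on the magnitude scale."""
    return 10 ** (m_large - m_small)

def energy_ratio(m_small, m_large):
    """Radiated seismic energy scales as 10^(1.5 * M) (Gutenberg-Richter)."""
    return 10 ** (1.5 * (m_large - m_small))

print(amplitude_ratio(6.0, 9.0))      # 1000.0
print(round(energy_ratio(6.0, 9.0)))  # 31623, i.e. ~31,600x the energy
```

So a facility rated for the worst quake on record can still be four orders of magnitude short on energy when the record gets broken.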

The second reason is even more impactful when we look not only at the immediate aftermath of the earthquake but at the tsunami that hit right afterwards. I have watched about 30 minutes of video, and my layman's observation is that it is worse than a flood, because the earthquake loosens everything and gives it mobility, and then a wall of water moves the debris around - and by debris I mean cars, buildings, and dumpsters - clogging streets with several feet of wreckage that has to be cleared before pathways are passable again. This means people cannot get to or leave the data center, fuel deliveries for generators are hampered, service vehicles cannot get through, and in some cases the entrances and exits of the data center itself are blocked.

So what are some solutions?

Take site selection seriously. Look at your disaster recovery plans - not which data center your customer data will fail over to, but the logistical issues you will face in the event of a real disaster. Is there food and water for employees? How far away is the fuel for the generators? Are there multiple ways to get it there? What happens if public transit cannot operate? Can you bike or canoe your way there if necessary? What about the conduits carrying electricity and telecom to the site? Can they withstand a ground shift of 20 feet? (Japan is 13 feet closer to the US after the earthquake.) How far away are the power plants? What is their capability to provide service? How will they do this? What is the wind direction? If you are near railroad tracks, what is carried on them that is toxic or can close pathways to and from the facility? Who are your neighbors? Is there anything that can float or blow into your facility? Into the infrastructure yard?

Things to avoid: don't base a decision on how quickly you can get in and out for a site tour. Don't think about how easy it is to get to and from the office/data center; look around the facility at the roads, the access points, the railroad tracks (especially active ones), the flight paths (remember 9/11, when they grounded all planes and sent those in the air to the nearest airport?). And don't assume something bad won't happen just because it never has.

bytegrid@gmail.com is how to reach me.

Ask the right questions and invest

Tuesday, February 1, 2011

Green Data Center Retrofit Financing Available - Game Changer?

I have been spending some time - ok, a lot of time - looking at why companies are not taking advantage of the various rebates and incentives for making data centers more energy efficient. There are so many programs offered by utilities, state and local governments, and the green-energy marketplace that I wanted to figure out why more projects weren't getting off the ground. What I found can be summed up in a word - money.

Without exception, access to financing was the number one reason projects never got started. Companies would engage utilities, energy-audit firms, construction firms, and other professionals to examine their current energy usage and establish a baseline profile. Many of these services are provided free of charge; some firms charge for them.

Once the baseline was established through a thorough energy audit, a project plan was drafted detailing the equipment that could be installed to harvest energy savings. In most cases I found that the savings were substantial - 30-50% on average. In real numbers, if a data center operator spends $200,000 per month on electricity, they would pay $100,000-140,000 per month with the new equipment, saving upwards of $1M per year. This is a no-brainer, right?
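The arithmetic behind those figures, using the $200,000/month example above (the monthly bill is the only input; everything else follows):

```python
monthly_bill = 200_000  # current electricity spend, $/month

for savings_pct in (0.30, 0.50):
    new_bill = monthly_bill * (1 - savings_pct)
    annual_savings = monthly_bill * savings_pct * 12
    print(f"{savings_pct:.0%}: new bill ${new_bill:,.0f}/mo, "
          f"saves ${annual_savings:,.0f}/yr")
# 30%: new bill $140,000/mo, saves $720,000/yr
# 50%: new bill $100,000/mo, saves $1,200,000/yr
```

At the top of the savings range the project returns $1.2M a year; even at the bottom it clears $700K, which is why "upwards of $1M" is a fair summary.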

Unfortunately it is not a no-brainer, and here is why: the new equipment that harvests the savings is expensive and goes on the balance sheet. So if a company has an extra million dollars or two to spend, will it be spent on a data center upgrade or retrofit, or on expanding the business? Most companies either opt not to spend the capex to harvest the savings, or spend only on very short-term-ROI projects and capture a fraction of the savings.

So once I figured out that the issue wasn't a lack of desire (who wants to be known as the company that doesn't support green initiatives?), or not knowing where to start or what to do - after all, they had the audit and project plan in hand - I focused my attention on the real problem: where can I find or put together a financing solution that solves it?

I would love to tell you that I was smart enough to know exactly who to call - but those of you who know me know that I am not that smart. I am lucky in the sense that I am not afraid to pick up the phone, ask a lot of questions, and learn what I don't know. So I learned a lot, and then got lucky when a colleague in the financial industry called me and said, 'I think I have something.'

Something indeed.

We have put together a financing program for companies who want to go green, harvest energy savings, and eliminate up to 100% of the capex for the equipment, installation, and maintenance services. The beauty of the program is that the savings are dovetailed into the financing, so the owner/operator is never out of pocket: if the savings are 30% per year and the cost of financing amounts to 15% per year, not only does the company get the benefit of the cost savings, it can structure the deal to add cushion to its monthly opex.
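A rough sketch of that dovetail, under the simplifying assumption that both percentages are taken against the same annual electricity bill (the base for the financing cost isn't specified here, so treat the numbers as illustrative):

```python
annual_bill = 2_400_000      # $200K/month electricity spend
savings_rate = 0.30          # energy saved by the new equipment
financing_rate = 0.15        # assumed: financing cost as a share of the bill

annual_savings = annual_bill * savings_rate    # 720,000
financing_cost = annual_bill * financing_rate  # 360,000
net_cushion = annual_savings - financing_cost
print(f"net opex cushion: ${net_cushion:,.0f}/yr")  # $360,000/yr
```

As long as the savings rate stays above the financing rate, the equipment pays for itself out of the avoided utility spend and the operator keeps the difference.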

Of course if a company has money in their budget for capex purchases they can spend that allocation and finance the rest and do even better. So what's the catch?

At a high level here are the conditions that must be met:

1. Project sizes need to be in the $5M-100M range, based on an energy audit that has been done (or will be done).
2. The owner-operator must have investment-grade credit (BBB or better).
3. The owner-operator must be willing to sign a new maintenance agreement on the equipment, so the lender knows the new equipment will be taken care of.

The whole process - from the energy audit to project start - takes about six weeks, or less.
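The three conditions above can be sketched as a simple screening check. This is a hypothetical helper in Python - the field names and logic are my own illustration of the conditions, not the program's actual underwriting:

```python
# Hypothetical screening helper; the names and thresholds are my own
# illustration of the three program conditions, not actual underwriting.
INVESTMENT_GRADE = {"AAA", "AA", "A", "BBB"}  # BBB or better

def qualifies(project_size_usd: float, credit_rating: str,
              will_sign_maintenance: bool) -> bool:
    """Return True if a project meets the program's three conditions."""
    in_size_range = 5_000_000 <= project_size_usd <= 100_000_000
    # Strip +/- modifiers so "BBB+" and "A-" still count as investment grade.
    investment_grade = credit_rating.rstrip("+-") in INVESTMENT_GRADE
    return in_size_range and investment_grade and will_sign_maintenance

print(qualifies(20_000_000, "BBB+", True))  # in range, investment grade -> True
print(qualifies(2_000_000, "AA", True))     # below $5M -> False
```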

The program right now is geared toward large data center users for obvious reasons: they use a lot of electricity, much of it inefficiently, and that is where the greatest improvement can be made. Rest assured that I am working on options for smaller companies with smaller footprints, so stay tuned. The other benefit of this program is that it is specific to the savings - any tax breaks or additional rebates are kept by the owner operator and not factored into the program.

If you want more information on this program, please email Pelagic Advisors and they will reach out.

Monday, January 10, 2011

More on Site Selection - Where is the Holistic View?

I got a requirement sent to me on Friday afternoon for a site selection on a large requirement (>50 MW). Based on what I know about users looking for sites in the marketplace, I would bet an Egg Nog Latte that it was the requirement for the Social Security Administration's data center project that has gone sideways.

The Office of the Inspector General (OIG) issued a report back in April of 2010 about the criteria and process that put a smile on my face:

"While the reviewers concluded that the agency had developed a "highly sophisticated" list of site-selection criteria for the project expected to cost more than $800 million, they found the process for narrowing site properties down to a short list to have been problematic.

Mandatory selection criteria the agency developed could also have excluded too many locations.

"In particular, when developing the mandatory selection criteria, it does not appear that consideration was given to the serious fiscal impact that exclusions would have in the electrical power cost arena over the life cycle of the data center," the summary of the inspector general's report read.
Information was also limited in evaluation of telecommunications criteria.

It was not clear from the report's summary whether one or several locations had been identified. The agency did not release the document in its entirety.

Why is this humorous to me?

The things that were excluded from consideration - the cost of power and the telecom infrastructure - were cited. Hello? The two most important criteria for a site selection were overlooked? Wow. Who was doing the selecting? Was the 'highly sophisticated' criteria set designed to mask breathtaking incompetence?

As a result, the requirement (if it is the requirement) is back on the street for round 2. If you are involved with the site selection process for the SSA (or any other organization), I am happy to give you my site selection criteria checklist. It's simple, I have used it myself for years, and it works. Email me - bytegrid@gmail.com - and I will send you a copy. Free. Yes, free. I look at it as an investment of goodwill toward keeping my taxes down on a project that is big, expensive, and paid for with MY money.

The other interesting thing in the requirement - one that made me laugh out loud - was that economic incentives were weighted as high as power and telecom infrastructure in the selection criteria. Folks, economic incentives go away after a few years. Data centers stay around for 15-30 years. Economic incentives help in the short term, but how many elected officials stay in office 15-30 years? That is a lot of election cycles and political agendas to factor in.
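The imbalance is easy to show with back-of-the-envelope numbers. Every figure below is hypothetical - the point is only that a short-lived incentive can be dwarfed by a modest power-cost difference paid over the facility's whole life:

```python
# All numbers are hypothetical, for illustration only.
incentive_per_year = 2_000_000  # economic incentive, $/year
incentive_years = 5             # incentives go away after a few years

power_cost_delta_per_year = 1_000_000  # extra $/year at the pricier-power site
facility_life_years = 20               # data centers stay around 15-30 years

total_incentive = incentive_per_year * incentive_years                 # $10M
total_power_penalty = power_cost_delta_per_year * facility_life_years  # $20M

# A 5-year incentive is outweighed 2:1 by a power-cost gap paid for decades.
print(f"Incentive value:    ${total_incentive:,}")
print(f"Power-cost penalty: ${total_power_penalty:,}")
```

Weighting incentives as high as power cost only makes sense if both last the same number of years - and they don't.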

There is also the mandate from Vivek Kundra issued in February of 2010 that they will need to factor into their decision. It calls for highly efficient, green-powered facilities flexible enough to handle the IT changes in virtualization, the inclusion of legacy applications and hardware, and CIO agendas. It's a tall order; however, it's so damn simple when you focus on the lowest common denominators - the same ones they overlooked in the first requirement.

Wednesday, January 5, 2011

The Modular Data Center Choices - half baked?

Anyone who knows me or follows this blog knows that I have been a strong proponent of modular solutions for years - two years on this blog, in fact, since I first wrote about container ROI on 1/8/09 using the IRS as the example. Since then many solutions have been designed, some deployed, and all of them rarely discussed. Still scratching my head on that one - I would want to do business with a company that is delivering the most cost-effective and carbon-footprint-lowering solutions available.

Today, there was an announcement over at Reuters about another Data Center Anywhere offering, this one from Telehouse, which we can add to Lee Technologies, i/o Data Centers, BladeRoom, and others. I have looked at all of them, talked with their management teams, reviewed deployment documents, seen costs, proposals, the whole nine yards on these 'solutions'. You know what these data center anywhere solutions are? Half baked. If that.

The smart companies (Lee, BladeRoom, and HP, in my opinion) have figured out that while they may not be in the real estate development business, they work with those who are.

Without exception, each DCA 'solution' has a 'Batteries Not Included' clause. In other words, they are as awesome as a Ferrari with no fuel system. It absolutely STUNS me that in this day and age of partners and specialization of product and service - and with the modern data center business approaching its 30th birthday - no one has put ALL of the pieces together into a TRUE solution. A true solution is a cup of coffee, not the possibility of opening your own Starbucks.

I hate people who bitch and don't provide solutions, so here is my two cents on a TRUE solution:

Modular manufacturers need to find and work with owner operators of data centers to find and make ready sites that are 'modular ready'. After all, a modular solution is still a data center with the same lowest common denominators - power and fiber.

I know what you're thinking:

'Data center companies are reluctant to embrace modular solutions.' (They have inefficient buildings to fill.)

'I am a manufacturer, not a real estate guy.' (You aren't selling many units without someplace with power and fiber to put them, are you?)

'The market is not mature.' (You haven't done your homework.)

So what has happened is that customers are tasked with filling in these gaps themselves - and the reason they are coming to you, the manufacturer, is to buy a data center SOLUTION, not a piece of one.

Most end customers are even less equipped to fill these gaps than the people allegedly in the (modular) data center business, and as a result the solutions haven't been flying off the assembly line. If you want a full solution - ping me at bytegrid@gmail.com - I have done my homework and will get you your cup of coffee...