Lawyer Bait

The views expressed herein solely represent the author's personal views and opinions, and not those of anyone else - person or organization.

Monday, April 23, 2012

Continuous Uptime vs. any other number

I was asked last week what the continuous uptime of my data center was. I knew roughly, but after some digging and some math I learned that it was 125,000+ hours of continuous uptime.
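
For context, here's a quick back-of-the-envelope conversion (a Python sketch, nothing more) of what 125,000 hours actually means:

```python
# Convert 125,000+ hours of continuous uptime into something relatable.
HOURS_PER_YEAR = 24 * 365.25   # average year, leap days included

uptime_hours = 125_000
uptime_years = uptime_hours / HOURS_PER_YEAR

print(f"{uptime_hours:,} hours = {uptime_years:.1f} years without an outage")
# -> 125,000 hours = 14.3 years without an outage
```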

What really got me, though, was how little time, effort, and coverage the number received. It got me thinking - with all of the coverage and articles and studies on efficiency, why the lack of focus on the only thing that matters when the shit hits the fan? Do you really think that in the middle of an outage companies are freaking out over how awesome their PUE is? Me neither.

So I wrote a blog post related to this, pointing out that we already have the ultimate yardstick for measuring our data center spend and the stuff that goes into it - cash. We can easily see if something costs more or less than something else, and decide if we want to spend more or less, buy on value vs. price, etc.

So why is there so little time spent on what the uptime number is?

Well, if yours sucks, you want to be looking at other measurements that paint you in the most positive light, duh. My other hunch is that PUE (as a widely discussed example) is one of those nebulous numbers that is open to debate over how you measure it, and it keeps the dust in the air so you don't see the only thing that matters - Uptime.

Why do I say it's the only thing that matters?

Because it is the bedrock on which SLAs are written. I was chairing a panel at a BisNow event in Virginia and asked the panelists a question - what is the one thing you want to tell the vendors in the audience today? Mike Manos from AOL had the best response: 'Vendors, do not hand me an SLA that says 100% uptime with maintenance windows in it. 100% is either 100% or it's not.'

So let's focus on uptime, because it's as easy to use as a measurement tool as cash is, and it can be applied to applications, hardware, and the data center itself.
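
To make Manos's point concrete, here is a quick sketch (Python, purely illustrative) of how much downtime each 'nines' level of SLA actually allows in a year - and why '100% with maintenance windows' is nothing of the sort:

```python
# Annual downtime allowed by common SLA uptime levels.
HOURS_PER_YEAR = 24 * 365

for sla in (0.999, 0.9999, 0.99999, 1.0):
    allowed_minutes = (1 - sla) * HOURS_PER_YEAR * 60
    print(f"{sla:.3%} uptime allows {allowed_minutes:7.1f} minutes of downtime/year")

# 99.900% uptime allows   525.6 minutes of downtime/year (~8.8 hours)
# 99.990% uptime allows    52.6 minutes of downtime/year
# 99.999% uptime allows     5.3 minutes of downtime/year
# 100.000% uptime allows    0.0 minutes of downtime/year - no maintenance windows, period.
```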

Who's with me?

Wednesday, April 18, 2012

Greenpeace is Throwing Rocks again

I was on Data Center Knowledge this morning reading about Greenpeace putting Apple's new data center in their crosshairs for not being Green enough. Isn't this Facebook all over again? Based on the data I have seen, it sure smells like it.

Can someone please point me to an energy product and a company that Greenpeace actually endorses? This rock throwing and shit stirring is old. I would love to see how many people at Greenpeace use iPads vs. charcoal on birchbark from fallen birch trees to read, write, and comment on the stories they post online.

It's easy to throw rocks and stir the shit pot, but not so easy to develop, test, market, and put your brand on a product or set of products that people want to use. Hiding behind the fact that making things is not your business, while providing no solution, is another version of a whiny child, not a business. And therein lies the rub - Apple is a business. It has shareholders; Greenpeace is an activist organization and has supporters. Shareholders give their money to a company to increase the value of the shares they own by making something useful. Greenpeace doesn't worry about having a bad quarter. Apple does. Greenpeace's currency is emotion; Apple's is cash.

So while Greenpeace has a vested interest in stirring up shit to create emotionally charged situations (that's how they make more 'currency' and stay relevant), Apple has a responsibility to make more money, and if they care to engage in emotionally charged debates, they exchange their currency (money) for Greenpeace's (emotion). Since most people don't pay bills with emotion, it's a slippery slope for Apple to engage in currency speculation in these situations, in my opinion.

I hope Apple points this out - that they are a business, not an emotion-based activist organization, and have a responsibility to their shareholders. Period. And from what I can tell based on their cash position, they are being very responsible to their shareholders. As for building a data center that isn't green? Until someone (Greenpeace or anyone else) develops a source of energy generation that can produce an equivalent unit of power to burning coal, or the regulations in the United States change to foster the research and development of such a source, go back to your birchbark and charcoal.

And one more thing... How many Apple shareholders rely on Greenpeace for their retirement?

Wednesday, April 4, 2012

Is a cloud OS really the answer?

I was just reading Phil Windley's latest blog entry introducing the concept of a Cloud OS. As I wrote my feedback I had an a-ha moment (no, I did not start humming 'Take on Me'). I realized that Cloud OS may be the term, but it's really a new kind of OS - one that recognizes the user as the one unique ID, treats the data, accounts, functionality, and devices important to the user as attributes, and by design lets everything important to the user travel with the user - independent of OS, device, and application(s).

Once this was in place, users could simplify their experience around 'personas'. For instance, I have a work email account and several personal ones that are tied to my blogs, interest groups, different geographies, and interests. These personas are all part of me, the user, but carry different meanings and uses in my day-to-day life. They can be accessed on my phone, tablet, laptop, or other device. It's like Circles on Google+, only less tied to one vendor.

That to me is the way to go - develop a single veneer that I, one user, can configure based on my persona. Different privacy levels and Chinese walls, so my political views don't cross into my professional life (where they have little importance), and information about my health or marital status isn't broadcast to my marketing database or Facebook account. In other words, I am me. Online. Offline. Device independent. And I can prove it with more than a mother's maiden name and a PIN - quickly, beyond a shadow of a doubt, using the name of the girl who kicked me in the balls at my second baseball practice in 3rd grade as my security question.
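
Here's a minimal sketch of the shape I have in mind (Python; all the names are hypothetical, just to illustrate): the user is the single root identity, and personas hang off it as attributes, each with its own accounts and privacy wall.

```python
from dataclasses import dataclass, field

@dataclass
class Persona:
    """One facet of a user, with its own accounts and privacy wall."""
    name: str                                      # e.g. "work", "political"
    accounts: list = field(default_factory=list)
    visible_to: set = field(default_factory=set)   # personas allowed to see this one

@dataclass
class User:
    """The one unique ID; everything else is an attribute that travels with it."""
    user_id: str
    personas: dict = field(default_factory=dict)

    def add_persona(self, p: Persona) -> None:
        self.personas[p.name] = p

    def can_see(self, viewer: str, target: str) -> bool:
        # The Chinese wall: a persona is visible only where explicitly allowed.
        return viewer in self.personas[target].visible_to

me = User("one-unique-id")
me.add_persona(Persona("work", accounts=["work-email"]))
me.add_persona(Persona("political", accounts=["blog"], visible_to={"personal"}))
print(me.can_see("work", "political"))   # False - politics stays out of my work life
```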

Friday, March 30, 2012

Google's next big challenge - Relevancy

http://blogs.computerworld.com/19955/heres_how_i_know_google_knows_too_much_about_me_california_bat_protection

I just read this article this morning, and it reinforced something I started thinking about at the end of 2011 - how the heck is Google going to stay relevant in search when so many other apps have eclipsed it? Call me crazy, but Facebook and Twitter have streams of more real-time information about me to target ads against than Google does.

Google doesn't know where I am, where I am going, or what I am doing when I get someplace. With Twitter and Facebook, I publish this stuff in real time, so Facebook and Twitter know more about me than Google does - they just can't do much with the information, since their algos aren't close to Google's.

I am waiting for the startup that will take advantage of the strengths of these apps and fix their weaknesses through a cross-pollination of shit that matters, giving me offers that I want, that are based on reality (present and future vs. catalogued past), and that make money. There has to be an API ninja who can capture the real-time user data anonymously - or make it anonymous - and then run it through mature algos to target better ads. No?
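
A hypothetical sketch of what that plumbing could look like (Python; every name here is made up for illustration) - hash the identity out of each real-time event before it reaches the targeting algos, keeping the present-tense signal intact:

```python
import hashlib

# A per-deployment secret; in practice this would be rotated and kept out of code.
SALT = "rotate-me"

def anonymize_event(event: dict) -> dict:
    """Replace the real identity with a stable pseudonym; keep the signal."""
    token = hashlib.sha256((SALT + event["user_id"]).encode()).hexdigest()
    return {
        "anon_id": token,               # consistent pseudonym the algos can learn on
        "location": event["location"],  # the real-time signal that matters
        "activity": event["activity"],
        # deliberately dropped: user_id and anything directly identifying
    }

event = {"user_id": "jsmith", "location": "airport", "activity": "checking in"}
print(anonymize_event(event)["anon_id"][:12])
```

Strictly speaking that's pseudonymization rather than true anonymity, but it illustrates the separation: the algos get the behavior, not the person.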

Bueller? Anyone? Bueller?

Tuesday, March 27, 2012

modular models

I have spoken to a number of users in private enterprise and government over the past 3 years who expressed interest in looking at modular data center solutions. The three main reasons these discussions never culminated in a modular deployment were that the solutions were unproven, they were a niche play, and there was no third party with the skillsets required to put together a deal covering site selection, site prep, purchase of the site, the modular units, and ongoing operation. A close fourth was that no sales data was available from the manufacturers.

The 'niche' has changed a lot, and I personally have seen hundreds in operation - all at single-user sites. These single users can tap into their corporate real estate teams, their facilities teams, and their IT organizations to figure it out.

  • What about the broader, multi-tenant marketplace? Where's the beef?
  • Where do companies turn to find a real estate expert who knows how to source land for a data center, can talk electricity voltages and network topology, dabbles in construction and site prep, and has a track record with data center operations?
  • Who can actually do an assessment of the vendors and see who knows their shit, and who is just spinning the latest marketing brochure?
I know many real estate professionals and work with a handful who understand data centers; I know and work with electrical contractors and manufacturers who excel at that piece; I have walked through several manufacturers' solutions from around the world that are deployed at customer sites; and, being a data center operator, I understand the nuances of running a facility. So is it a matter of everyone wanting to focus on their own ingredient vs. being the chef, or is it something else?

I love that the space is maturing. It truly is about time.

By my own research, modular data centers and containers (yes, they are different) cost 50-60% less than a traditional build. They also cost 50% less to operate. I have the numbers to prove this out. So the industry just got half as expensive to get into, and it's half the cost to operate. I have watched an announcement from Skanska with deep interest, because it is a BIG signal to me that they get it, and the industry is in fact changing.

The announcement - I saw it at Data Center Knowledge - is important because it means a builder gets it, and they know what I know: the writing is on the wall for traditional data center builds, because of cost if nothing else. 'The modular design will serve as the nucleus for an ambitious plan to build a series of colocation facilities around the United States, which Skanska will own and operate.' So here we have a construction company getting into the data center business because they know they can deliver - and operate - a facility for HALF the cost. Looks like Nepholo has figured it out too.

Pay attention folks, the industry just blinked...

Friday, March 23, 2012

Great Event in Vienna VA yesterday

My company hosted an event with Data Center Marketplace yesterday in Vienna, Virginia to discuss a number of hot topics in the data center space. I presented on site selection and was amazed at how little knowledge is out there about how to do it. So I shared the site selection methodology we use when evaluating sites and had over 50 requests for a copy. If you want one, email me and it's yours.

The other topics were related to sustainability, hitting on themes like efficiency in the data center, using RECs to offset carbon footprints, and different strategies to deliver data center services in a greener fashion. The thing that struck me was the pervasive disconnect between facilities, IT, and finance. There is still little information shared across disciplines and silos, which is vital to doing something that is quantifiable, tangible, measurable, and successful. Information needs to be shared, not guarded like a stash of candy under your bed that you think you'll get in big trouble for if people see what you have.

It really hit home when I was driving to the airport, listening to a program about botnets and how industry groups have formed to foster information sharing. The premise was that hackers share information all the time, and the organizations they break into don't - this is at the crux of the problem and why IT security will always be on the defensive. So a group formed by the major ISPs is operating as an informational Switzerland, where people can leave their logos at the door and discuss freely what they are doing to combat botnets, which ones exist, and how to prevent (ideally) or remediate the issues they cause.

I have harped on this issue ever since the containerized data center vendors wouldn't share ANYTHING about their products and then scratched their heads wondering why they weren't selling more. Well, if you can't talk about things that are issues, you have no issues, right? WRONG. Sharing information is key to overall success, and the short-sighted approach of 'share nothing for fear that a competitor will use data against you' is prevalent not only in containerized data center discussions but in business generally. I don't get it. Your competitors will find out where the Achilles' heels are anyway, and isn't it better for the manufacturer to identify a flaw publicly than for a competitor to announce it?

Wednesday, March 14, 2012

Maturity in the Container/modular space

2011 was an interesting year for the container data center space. There were several new entrants (modular), a maturing industry view of where these solutions fit into the grand scheme of data centers, and an evolution by the existing manufacturers toward the next generations of their initial offerings.

I will list the vendors that I know about, have met with, and have personally verified that their offering is in fact the real deal. This is a list, not a ranking, by the way. If you want my personal professional opinion, track me down off the record:

CirraScale (f/k/a Verari)
HP
Dell
Blade Room
I/O
SmartCube
Elliptical Mobile Solutions (EMS)
SGI/Rackable

I am sure I will get calls from vendors who read this post and want to make 'corrections' to my positioning and commentary, but I am vendor NEUTRAL. I have met with, seen, and in some cases sold one of these, so my experiences and comments are based on seeing, smelling, and experiencing the products and the people who work for the companies that make them. Everyone has their own secret sauce, I get it. I also get that I am one of the few people on the planet with any longevity in covering and paying attention to this space who has never worked for a vendor, so my comments are solely mine, as are my opinions.

The overall maturity in the space has gone from containers to true modular designs, the biggest difference being that modular units are the needed hybrid between the container and the traditional stick-built data center. Essentially, they are a data center built in a factory vs. a data center retrofitted on site.

The bigger issue in the adoption of this class of data center solution, I believe, is that companies have no single organization to turn to for evaluating whether their needs are better served by a container, a modular design, or a traditional data center. The skillsets needed to evaluate the solutions span IT, facilities, real estate, electrical systems, cooling, site prep/construction, permitting, and finance. Since many large customers have people with these skillsets, it is arguably easier to tap into the knowledge base; however, politics can derail things quickly. That, and an IBM container salesperson does not get paid to tout the HP Ecopod's features and benefits, so it is vendor lock-in from the get-go, and that scares the sh*t out of most CTOs/CIOs. They don't run just ___________ gear.

Let's get real for a moment - if the vendors wanted to eat their own dog food (or wear their own outfits), they would. HP would never build another data center, nor would IBM, I/O, or anyone else offering the physical and operational components under one roof. They would be consolidating out of their facilities into containers in droves, and that would be a clear and consistent path forward. It would also be the smartest thing financially they could do.

The only issue is that none of their larger customers runs just that vendor's gear to support their business, so it's a religious sell to get someone to convert - or you're taking an oval peg and jamming it into a round hole.

I have spent a fair amount of time looking hard at modular solutions - very different from containers - since a modular unit is close to a traditional data center: it accepts 19- or 24-inch racks of ANY manufacturer's hardware that you would put in a stick-built facility. The biggest difference is that the data center is built in a factory, so in most cases there is nothing to tour ahead of time. For some people this is a deal breaker; for others it doesn't matter, since the absolute and measurable consistency of product, cost, and financials trumps the inability to walk through something.

As for the financial aspects of all of these NSB (non-stick-built) solutions, a modular data center is TOUGH to beat. In one recent study I saw, the cost of a retrofit was ~$22M per megawatt - 4x the capital cost of the same power footprint in a modular solution. 400%, and that is not a typo. On the OpEx side (power/cooling/operation), the modular designs deliver a PUE of 1.2 consistently. Knowing what I know about air flow and pimping the data center floor with efficiency-gaining tricks of the trade, I would think you could get even lower. Contrast that with the 2.0-3.0 PUE of traditional and older facilities, and you can cut your costs in half.

One of the most compelling examples I have thought about was based on a proposed 20 megawatt project. That's 20,000 kW, plus a greenfield build cost of ~$2,000/foot. That's likely a 180,000 square foot building (conservatively). Using back-of-the-napkin math, that's roughly a $360 million project that has to be COMMITTED to up front. Yes, it would be built and financed in phases, but who wants a $360M obligation on their financial statement today? Especially when 20MW of a modular solution can be delivered for roughly $160M. No, not a typo. That is $200M less off the top and out of the gate. Operationally it's $14.2M/year in electricity (in North Carolina - a marketed cheap-power state) for a 2.0 PUE facility vs. $8.5M/year for a 1.2 PUE modular. That's roughly $6M/year less. So total dollars in over a 10-year period: $500M vs. $245M. I am no mathematician, but that's a big difference.
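
Here is that back-of-the-napkin math spelled out (a Python sketch; the power rate is my assumption, reverse-engineered so the outputs line up with the figures above):

```python
# 20 MW project: traditional greenfield vs. modular, over a 10-year horizon.
IT_LOAD_KW = 20_000
HOURS_PER_YEAR = 24 * 365
POWER_RATE = 0.0405                 # $/kWh - assumed cheap NC power (my estimate)

def annual_power_cost(pue: float) -> float:
    """Total facility power = IT load x PUE, billed per kWh."""
    return IT_LOAD_KW * pue * HOURS_PER_YEAR * POWER_RATE

traditional_capex = 180_000 * 2_000     # 180k sq ft at ~$2,000/ft
modular_capex = 160_000_000

for label, capex, pue in (("traditional", traditional_capex, 2.0),
                          ("modular", modular_capex, 1.2)):
    opex = annual_power_cost(pue)
    total = capex + 10 * opex
    print(f"{label:11}: capex ${capex/1e6:.0f}M, "
          f"power ${opex/1e6:.1f}M/yr, 10-yr total ${total/1e6:.0f}M")

# traditional: capex $360M, power $14.2M/yr, 10-yr total $502M
# modular    : capex $160M, power $8.5M/yr, 10-yr total $245M
```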

If you would like to discuss this in any detail, ping me. I have sold a container to NASA, consulted on RFI and RFP documents for intelligence and military applications, and am truly vendor neutral. I am running out of reasons to believe that modular solutions are anything BUT the way to deliver data centers - technically, financially, and environmentally. I love this evolution.

Others are taking note, as shown by a study Digital Realty Trust commissioned to take the temperature of data center growth patterns. The interesting nugget: 41% of companies surveyed were looking at containers and/or modular solutions. Smart move.


Tuesday, February 28, 2012

Cloud Computing is the growth engine for the data center business

I was in Washington DC last week to meet with several cloud companies, walk one of them through my Silver Spring, Maryland data center, and network at the GovCloud event being held in DC. If you have read my blog at all over the past several years, you will quickly figure out that I was not a fan of cloud. I didn't get it, because it wasn't mature enough to offer anything better than what I could get from a managed services provider like Savvis/CenturyLink or Latisys (as examples).

What I saw last week has turned me on to what cloud delivers. Cloud computing, and the companies who embrace it, have gone from promising the world and delivering little to actually looking at the architecture, working through 'what ifs' based on actual IT requirements, applying financial filters to the noise, and coming up with a consumable offering.

One of the companies I met with - Piston Cloud - had a solid offering built on OpenStack. I knew their CEO Josh when he was at NASA and I was at CoreSite, and we started a big ole sandbox for cloud companies (Eucalyptus, RightScale, etc.) to tie into what NASA was doing as well as get their software optimized on hardware platforms. HP was the dominant hardware vendor at the time. Fast forward: I left CoreSite to found a data center company, and Josh left NASA to mature what he had done there into a commercially viable cloud OS and start Piston Cloud.

Our meeting was the first time we had connected face to face in a couple of years, and we picked up where we left off. There was some reminiscing and laughing at mistakes we had made along the way, and there was something else - a palpable electricity when we shifted the discussion to actually using cloud to deliver a real solution, not just 'we do cloud'.

I live in the pipes and boxes/buildings that cloud resources use to provide the elasticity and scalability native to a cloud environment. One of my pet peeves was always the lack of discussion around security, having played in the IDM/access control space with some large companies a few years ago. The data center I bought bucked the trend in a number of ways, and I always believed the cloud vendors would mature, come back to earth, and look not only at their public/private/hybrid offerings but at where they put their environments in the first place. So the data center I bought was NOT in Northern Virginia with 50 other providers, but in Maryland - the other state with hardened bunkers for government and military personnel in the event of another major 'Oh Shit' moment. The facility also had a global bank as a tenant, so I knew the security, and the validation of that security, would be embedded in the design of the facility - and I was right.

So when Josh and I sat down to talk cloud - and security - Piston Cloud was at a different layer in the stack, but also focused on a hardened solution for the cloud - in their case, a hardened OS. Long story short, our core beliefs were embedded in what we were doing: delivering the possibility and the option of a secure cloud OS from a secure facility, with the audit trails to prove it.

There is much more to be discussed, but it was great to see another company founder develop a solution centered on their core beliefs: security in the cloud is a problem, so let's fix the problem and go to market. I will blog more as time and NDAs allow, but for organizations enamored with the cloud - welcome. And for those organizations really looking for a secure option, I think we may have something worth talking (and blogging) about.

Wednesday, January 11, 2012

Cloud Security - Is cloud the industry Monorail?

For those of you not familiar with the Simpsons animated TV series, the title of this entry comes from Episode 413 - Marge vs. the Monorail. It became a widely used reference for ideas that cannot stand up to logic but become real when people's passions overtake their logic. In a nutshell: after Mr. Burns is caught storing his excess nuclear waste inside Springfield Park's trees, he is ordered to pay the town $3 million. The town is originally set to use the money to fix Main Street, but the charismatic Lyle Lanley interrupts and convinces the town to spend it on one of his monorails instead. Of course it doesn't work as advertised, and a major safety issue ends up threatening the town.

So what does this have to do with cloud?

Cloud is the latest IT buzzword having massive dollars thrown at it in an effort to provide all sorts of things: flexibility, elasticity, new paradigms of computing - the list goes on. What cloud didn't do early on was provide sufficient security, so a new moniker was thrown out: the Private Cloud. That was the veneer of security for the cloud. Then cloud evolved again to the hybrid cloud, where you could mix and match private cloud and public cloud based on the data involved. Ta-da! We fixed it. Or so we thought.

Look, I get cloud. I love the idea of cloud. I think we will see the development and creation of even more paradigms over time, but let's not forget the basic tenets of moving things outside the castle walls:

1. You are buying an SLA (Service Level Agreement). You are not buying a Cloud.
2. You are buying Risk. You are not buying a solution.
3. Your Cloud will only do what it is designed to do. If your processes suck, they will suck in the Cloud too.

When I read articles about outages - especially cloud outages - I look a lot deeper at what happened. Customers seem baffled (a.k.a. pissed off) that the cloud went down. I ask: did you design it to include the movement of data, workload, and storage, and ultimately, were you willing to pay for the level of redundancy you THOUGHT was included but wasn't? Remember, you bought the SLA. You paid for the risk you were willing to accept. You made the call. The cloud did what it was supposed to do: it failed when the site went down.

In all of the articles I have read, I have not seen any coverage of the type or tier of facility the Cloud is housed in. I'll bet that if I offered Cloud served from the island of Jamaica for pennies, I would get laughed at. However, if I simply offered cloud for pennies, my sales people's phones would ring off the hook. What's the difference? Disclosure. Assessing risk. And not assuming that the Cloud is what you THINK it is.

The Cloud is what you design and pay for, whether it's in the back room of a rum bar in Jamaica or in a Tier IV facility in Silver Spring, MD. The rules of the real world still apply in the Cloud world.

If it's highly valuable, treat it that way, and design it accordingly. Don't buy a monorail, no matter who is selling it.