virtualization stuff - Mark MacAuley explores stuff he finds interesting or rant worthy about data centers, identity, virtualization and technology in general.<br />
<br />
Continuous Uptime vs. any other number (2012-04-23)<br />
I was asked last week what the continuous uptime was of my data center. I knew generally, but did some digging and some math and learned that it was 125,000+ hours of continuous uptime.<br />
<br />
What really got me though was how little time and effort and coverage there was on the number. It got me thinking - with all of the coverage and articles and studies on efficiency, why the lack of focus on the only thing that matters when the shit hits the fan? Do you really think that in the middle of an outage that companies are freaking out over how awesome their PUE is? Me neither.<br />
<br />
So I did a blog post related to this, about the fact that we already have the ultimate yardstick for measuring our data center spend and the stuff that goes into it - cash. We can easily see if something is more or less than something else, and decide if we want to spend more or less, buy on value vs. price, etc.<br />
<br />
So why is there so little time spent on what the uptime number is?<br />
<br />
Well, if yours sucks, you want to be looking at other measurements that paint you in the most positive light, duh. My other hunch is that PUE (as a widely discussed example) is one of those nebulous numbers that is open to debate on how you measure it, and keeps the dust in the air so you don't see the only thing that matters - Uptime.<br />
<br />
Why do I say it's the only thing that matters?<br />
<br />
Because it is the bedrock on which SLAs are written. I was chairing a panel at a BisNow event in Virginia and I asked the panel a question - what is the one thing you want to tell the vendors in the audience today? Mike Manos from AOL had the best response: 'Vendors, do not hand me an SLA that says 100% uptime with maintenance windows in it. 100% is either 100% or it's not.'<br />
<br />
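To put some numbers behind that, here is a quick sketch (plain math, no vendor data) of what an SLA percentage actually allows in downtime, and what a continuous-uptime figure like mine works out to in years:

```python
# Rough uptime math: the downtime an SLA percentage permits per year,
# and what a continuous-uptime number means in years.
HOURS_PER_YEAR = 365 * 24  # 8760

def allowed_downtime_hours(sla_percent):
    """Hours of downtime per year that an SLA percentage allows."""
    return HOURS_PER_YEAR * (1 - sla_percent / 100.0)

for sla in (99.0, 99.9, 99.99, 100.0):
    print(f"{sla}% SLA allows {allowed_downtime_hours(sla):.2f} hours/year down")

# 125,000+ hours of continuous uptime, expressed in years:
print(f"125,000 hours = {125000 / HOURS_PER_YEAR:.1f} years")
```

Notice the last line of the loop: a 100% SLA allows exactly zero hours of downtime - which is Manos's point about maintenance windows.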
So let's focus on uptime, because it's as easy a measurement tool to use as cash, and it can be applied to applications, hardware, and the data center itself.<br />
<br />
Who's with me?<br />
<br />
Greenpeace is Throwing Rocks again (2012-04-18)<br />
I was on Data Center Knowledge this morning reading about Greenpeace putting Apple's new data center in their crosshairs for not being Green enough. Isn't this Facebook all over again? Based on the data I have seen, it sure smells like it.<br />
<br />
Can someone please point me to an energy product and a company that Greenpeace actually endorses? This rock throwing and shit stirring is old. I would love to see how many people use iPads at Greenpeace vs. charcoal on birchbark from fallen birch trees to read, write and comment on stories they post online.<br />
<br />
It's easy to throw rocks and stir the shit pot, but not so easy to develop, test, market, and put your brand on a product or set of products that people want to use. To hide behind the fact that making things is not your business and provide no solution is another version of a whiny child, not a business. And therein lies the rub - Apple is a business. They have shareholders - Greenpeace is an activist organization and has supporters. Shareholders give their money to a company to increase the value of the shares they own by making something useful. Greenpeace doesn't worry about having a bad quarter. Apple does. Greenpeace's currency is emotion, Apple's is cash.<br />
<br />
So while Greenpeace has a vested interest in stirring up shit to create emotionally charged situations (that's how they make more 'currency' and stay relevant) Apple has a responsibility to make more money and if they care to engage in emotionally charged debates, they exchange their currency (money) for Greenpeace's (emotion). Since most people don't pay bills with emotion, it's a slippery slope for Apple to engage in currency speculation in these situations in my opinion. <br />
<br />
I hope Apple points this out - that they are a business, not an emotion-based activist organization, and have a responsibility to their shareholders. Period. And from what I can tell based on their cash position, they are being very responsible to their shareholders. As for building a data center that isn't green? Until there is a source of energy generation developed (by Greenpeace or someone else) that can produce an equivalent unit of power to burning coal, or until the regulations in the United States change to foster the research and development of such a source, go back to your birchbark and charcoal.<br />
<br />
And one more thing... How many Apple shareholders rely on Greenpeace for their retirement?<br />
<br />
Is a cloud OS really the answer? (2012-04-04)<br />
I was just reading <a href="http://www.windley.com/archives/2012/04/personal_clouds_need_a_cloud_operating_system.shtml#comments" target="_blank">Phil Windley's latest blog entry</a> introducing the concept of a Cloud OS. As I wrote my feedback I had an A-HA moment (no, I did not start humming <a href="http://www.youtube.com/watch?v=djV11Xbc914" target="_blank">Take on Me</a>). I realized that a Cloud OS may be the term, but it's really a new <em><u>kind</u></em> of OS - one that recognizes the user as the one unique ID, where the data, accounts, functionality, and devices important to the user are attributes, and where by design everything important to the user goes with the user - independent of OS, device, and application(s).<br />
<br />
Once this was in place, users could simplify their experience around 'personas'. For instance, I have a work email account and several personal ones that are tied to my blogs, interest groups, different geographies, and interests. These personas are all part of me, the user, but occupy different meanings and uses in my day-to-day life. They can be accessed on my phone, tablet, laptop, or other device. It's like Circles on Google+, only less integrated with one vendor. <br />
<br />
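To make the idea concrete, here is a toy sketch - every name in it is hypothetical, not any real product's API - of one user ID carrying personas as attributes, each behind its own wall:

```python
# Hypothetical sketch of the persona model: one unique user ID,
# personas as attributes, visibility walls between them.
# All names here are illustrative, not a real product's API.
from dataclasses import dataclass, field

@dataclass
class Persona:
    name: str                                      # e.g. "work", "political"
    accounts: list = field(default_factory=list)   # accounts tied to this persona
    visible_to: set = field(default_factory=set)   # personas allowed to see this one

@dataclass
class User:
    user_id: str                                   # the one unique ID
    personas: dict = field(default_factory=dict)

    def add_persona(self, persona):
        self.personas[persona.name] = persona

    def can_see(self, viewer, target):
        """Chinese wall: a persona's data is visible only where allowed."""
        return viewer in self.personas[target].visible_to

me = User("mark")
me.add_persona(Persona("work", accounts=["work-email"]))
me.add_persona(Persona("political", accounts=["blog"]))
print(me.can_see("work", "political"))  # the wall holds: False
```

The point of the sketch is the shape, not the code: the user is the root object, and everything else hangs off it as attributes that travel with the user across devices.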
That to me is the way to go - develop a single veneer that I - one user - can configure based on my persona. Different privacy levels and Chinese walls, so my political views don't cross into my professional life, where they have little importance, and information about my health or marital status isn't broadcast to my marketing database or Facebook account. In other words, I am Me. Online. Offline. Device independent. And I can prove it with more than a mother's maiden name and a PIN - quickly, beyond a shadow of a doubt, telling you who the girl was who kicked me in the balls at my second baseball practice in 3rd grade as my security question.<br />
<br />
Google's next big challenge - Relevancy (2012-03-30)<br />
<a href="http://blogs.computerworld.com/19955/heres_how_i_know_google_knows_too_much_about_me_california_bat_protection">http://blogs.computerworld.com/19955/heres_how_i_know_google_knows_too_much_about_me_california_bat_protection</a><br />
<br />
I just read this article this morning and it reinforced something I started thinking about at the end of 2011 - how the heck is Google going to stay relevant in search when so many other apps have eclipsed it? Call me crazy but Facebook and Twitter have streams of more real time information about me to target ads against than Google does. <br />
<br />
Google doesn't know where I am, maybe where I am going, or what I am doing when I get someplace. With Twitter and Facebook, I will publish this stuff real time and Facebook and Twitter will know more than Google, they just can't do much with the information since the algos aren't close to what Google's are. <br />
<br />
I am waiting for the startup that will take advantage of the strengths of these apps and fix the weaknesses through a cross pollination of shit that matters, and give me offers that I want, that are based on reality (present & future vs. catalogued past), and that make money. There has to be an API ninja who can capture the real time user data anonymously - or make it anonymous - and then run it through mature algos to target better ads. No?<br />
<br />
Bueller? Anyone? Bueller?<br />
<br />
modular models (2012-03-27)<br />
I have spoken to a number of users in private enterprise and government over the past 3 years who had expressed interest in looking at modular data center solutions. The three main reasons these discussions never culminated in a modular model of deployment were that the solutions were unproven, they were a niche play, and there was no third party with the skillsets required to put a deal together that included site selection, site prep, purchase of site, modular units, and ongoing operation. A close fourth was that there was no available sales data from the manufacturers. <br />
<br />
The 'niche' has changed a lot and I personally have seen hundreds in operation - all at single user sites. These single users can tap into their corporate real estate teams, their facilities teams, and their IT organizations to figure it out. <br />
<br />
<ul><li>What about the broader market/multi tenant marketplace? Where's the beef?</li>
<li>Where do companies turn to find a real estate expert who knows how to source land for a data center, can talk electrical voltages and network topology, dabbles in construction and site prep, and has a track record with data center operations?</li>
<li>Who can actually do an assessment of the vendors and see who knows their shit, and who is just spinning the latest marketing brochure?</li>
</ul>I know many real estate professionals and work with a handful who understand data centers; I know and work with electrical contractors and manufacturers who excel at that piece; I have walked through several manufacturers' solutions from around the world that are deployed at customer sites; and being a data center operator, I understand the nuances of running a facility. So is it a matter of everyone wanting to focus on their own ingredient vs. being the chef, or is it something else?<br />
<br />
I love that the space is maturing. It truly is about time. <br />
<br />
By my own research, modular data centers and containers (yes, they are different) cost 50-60% less than a traditional build. They also cost 50% less to operate. I have the numbers to prove this out. So the industry just got half as expensive to get into, and it's half the cost to operate. I have watched an announcement from <a href="http://www.skanska.com/" target="_blank">SKANSKA</a> with deep intrigue, because it is a BIG signal to me that they get it, and that the industry is in fact changing. <br />
<br />
The announcement - I saw it at <a href="http://www.datacenterknowledge.com/archives/2012/03/26/skanska-usa-goes-modular-eyes-colo-market/" target="_blank">Data Center Knowledge</a> - is important because it means a builder gets it, and they know what I know - the writing is on the wall for traditional data center builds, if for no other reason than cost. 'The modular design will serve as the nucleus for an ambitious plan to build a series of colocation facilities around the United States, which Skanska will own and operate'. So here we have a construction company getting into the data center business because they know that they can deliver - and operate - a facility for HALF the cost. Looks like <a href="http://www.nepholo.com/" target="_blank">Nepholo</a> has figured it out too.<br />
<br />
Pay attention folks, the industry just blinked...<br />
<br />
Great Event in Vienna VA yesterday (2012-03-23)<br />
My company hosted an event with Data Center Marketplace yesterday in Vienna, Virginia to discuss a number of hot topics in the data center space. I presented on site selection, and was amazed at how little knowledge is out there about how to do it. So I shared the site selection methodology we use when evaluating sites and had over 50 requests for a copy. If you want one, email me and it's yours.<br />
<br />
The other topics were related to sustainability, hitting on efficiency in the data center, using RECs to offset carbon footprints, and different strategies to deliver data center services in a greener fashion. The thing that struck me was the pervasive disconnect between Facilities, IT, and Finance. There is still little information shared across disciplines and silos, which is vital to doing something that is quantifiable, tangible, measurable, and successful. Information needs to be shared, not guarded like a stash of candy under your bed that you think you'll get in big trouble for if people see what you have.<br />
<br />
It really hit home when I was driving to the airport listening to a program about botnets and how industry groups have formed to foster information sharing. The premise was that hackers share information all the time and the organizations they break into don't, and this is at the crux of the problem and why IT security will always be on the defensive. So a group formed by the major ISPs is operating as an informational Switzerland, where people can leave their logos at the door and discuss freely what they are doing to combat botnets, which ones exist, and how to prevent (ideally) or remediate the issues they cause.<br />
<br />
I have harped on this issue since the containerized data center vendors wouldn't share ANYTHING about their products, and would scratch their heads as to why they weren't selling more. Well, if you can't talk about things that are issues, you have no issues, right? WRONG. Sharing information is key to overall success, and the short-sighted approach of 'share nothing for fear that a competitor will use data against you' is prevalent not only in containerized data center discussions, it is prevalent in business. I don't get it. Your competitors will find out where the Achilles heels are anyway, and isn't it better for the manufacturer to identify a flaw publicly than for a competitor to announce it?<br />
<br />
Maturity in the Container/modular space (2012-03-14)<br />
2011 was an interesting year for the container data center space. There were several new entrants (modular), a maturing in the industry of where they fit into the grand scheme of data centers, and an evolution by the existing manufacturers into the next generations of their initial offerings.<br />
<br />
I will list the vendors that I know about and have met with and personally verified that their offering is in fact the real deal. This is a list and not a ranking by the way. If you want my personal professional opinion on this then track me down off the record:<br />
<br />
<a href="http://cirrascale.com/forest.asp" target="_blank">CirraScale</a> (f/k/a/ Verari)<br />
<a href="http://h17007.www1.hp.com/us/en/whatsnew/june/060611-3.aspx" target="_blank">HP</a><br />
<a href="http://en.community.dell.com/dell-blogs/enterprise/b/inside-enterprise-it/archive/2010/09/10/dell-s-modular-data-center-hello-world.aspx" target="_blank">Dell</a><br />
<a href="http://bladeroom.com/" target="_blank">Blade Room</a><br />
<a href="http://www.iodatacenters.com/" target="_blank">I/O</a><br />
<br />
<a href="http://smartcube.net/" target="_blank">SmartCube</a><br />
<a href="http://ellipticalmobilesolutions.com/" target="_blank">Elliptical Mobile Solutions (EMS)</a><br />
<a href="http://www.sgi.com/products/data_center/ice_cube/" target="_blank">SGI/Rackable</a><br />
<br />
I am sure I will get calls from vendors who read this post and want 'corrections' to my positioning and commentary, but I am vendor NEUTRAL. I have met with, seen, and in some cases sold one of these, so my experiences and comments are based on seeing, smelling, and experiencing the products and the people who work for the companies that make them. Everyone has their own secret sauce, I get it. I also get that I am one of the few people on the planet not working for a vendor who has any longevity in covering and paying attention to the space, so my comments are solely mine, as are my opinions.<br />
<br />
The overall maturity in the space has gone from containers, to a true modular design, with the biggest difference being that modular units are the needed hybrid between the container and the traditional stick built data center. Essentially they are a data center built in a factory, vs. a data center retrofitted on site.<br />
<br />
The bigger issue in adoption of this class of data center solution, I believe, is that companies have no single organization to turn to for evaluating whether their needs are better served by a container, a modular design, or a traditional data center. The skillsets needed to evaluate the solutions span IT, facilities, real estate, electrical systems, cooling, site prep/construction, permitting, and finance. Since many large customers have people with these skillsets it is arguably easier to tap into the knowledge base; however, politics can derail things quickly. That, and an IBM container salesperson does not get paid to tout the HP EcoPOD's features and benefits, so it is vendor lock-in from the get-go, and that scares the sh*t out of most CTOs/CIOs. They don't run just ___________ gear.<br />
<br />
Let's get real for a moment - if the vendors wanted to eat their own dog food (or wear their own outfits), they would. HP would never build another data center, nor would IBM, I/O, or anyone else offering the physical and operational components under one roof. They would be consolidating in droves out of their facilities into containers, and that would be a clear and consistent path forward. It would also be the smartest thing financially they could do. <br />
<br />
The only issue is that none of their larger customers runs just <em>that </em>vendor's gear to support their business so it's a religious sell to get someone to convert, or you're taking an oval peg and jamming it in a round hole. <br />
<br />
I have spent a fair amount of time looking hard at modular solutions - very different from a container - since a modular unit is close to a traditional data center in that it accepts 19 or 24 inch racks of ANY manufacturer's hardware that you would put in a stick-built facility. The biggest difference is that the data center is built in a factory, so there is nothing to tour ahead of time in most cases. For some people this is a deal breaker; for others it doesn't matter, since the absolute and measurable consistency of product, cost, and financial aspects trumps the inability to walk through something. <br />
<br />
As for the financial aspects of all of these NSB (non-stick-built) solutions, a modular data center is TOUGH to beat. In one recent study I saw, the cost of a retrofit was ~$22M per megawatt. This is 4x the capital cost of the same power footprint in a modular solution. 400%, and that is not a typo. On the OpEx (power/cooling/operation) side, the modular designs deliver a PUE of 1.2 consistently. Knowing what I know about air flow and pimping the data center floor with different efficiency-gaining tricks of the trade, I would think you could get lower. Contrast that with the 2.0-3.0 PUE in traditional and older facilities and you can cut your costs in half.<br />
<br />
One of the most compelling examples I have thought about was based on a 20 megawatt proposed project. That's 20,000 kW, plus a greenfield build cost of ~$2,000/foot for what is likely a 180,000 square foot building (conservatively). Using back-of-the-napkin math, that's roughly a $360 million project that has to be COMMITTED to up front. Yes, it would be built and financed in phases, but who wants a $360M obligation on their financial statement today? Especially when 20MW of a modular solution can be delivered for roughly $160M. No - not a typo. That is $200M less off the top and out of the gate. Operationally it's $14.2M/year in electricity (in North Carolina - a marketed cheap-power state) for a 2.0 PUE facility vs. $8.5M/year for a 1.2 PUE modular. That's almost $6M/year less. So dollars in over a 10 year period is roughly $500M vs. $245M. I am no mathematician but that's a big difference.<br />
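If you want to check my napkin, here is the same math in a few lines. The power rate is my own inference from the $14.2M figure, not a quoted North Carolina tariff, and the capex numbers are the ones from the paragraph above:

```python
# Back-of-napkin check of the 20 MW example.
# RATE is inferred from the $14.2M electricity figure, not an actual tariff.
IT_LOAD_KW = 20_000      # 20 MW of IT load
HOURS = 8760             # hours per year
RATE = 0.0405            # $/kWh - inferred assumption

def annual_energy_cost(pue):
    """Yearly electricity spend for the whole facility at a given PUE."""
    return IT_LOAD_KW * pue * HOURS * RATE

traditional_capex = 180_000 * 2_000   # 180k sq ft at ~$2,000/ft = $360M
modular_capex = 160_000_000           # quoted modular figure

trad_10yr = traditional_capex + 10 * annual_energy_cost(2.0)
mod_10yr = modular_capex + 10 * annual_energy_cost(1.2)
print(f"traditional: ${trad_10yr / 1e6:.0f}M over 10 years")
print(f"modular:     ${mod_10yr / 1e6:.0f}M over 10 years")
```

Run it and the 10-year totals land right on the ~$500M vs. ~$245M from the text, so the napkin holds up.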
<br />
If you would like to discuss in any detail - ping me. I have sold a container to NASA, consulted on RFI and RFP documents for intelligence and military applications, and am <strong><u>truly vendor neutral</u></strong>. I am running out of reasons to believe that the modular solutions are anything BUT the way to deliver data centers. Technically, financially, and environmentally. I love this evolution.<br />
<br />
Others are taking note as referenced by a <a href="http://www.datacenterknowledge.com/archives/2012/03/12/cloud-growth-spurs-demand-for-data-centers/" target="_blank">study Digital Realty Trust paid for</a> to take the temperature of the data center growth patterns. The interesting nugget was that 41% of companies surveyed were looking at containers and/or modular solutions. Smart move.<br />
<br />
<a href="mailto:mark.macauley@gmail.com" target="_blank">email</a><br />
<br />
Cloud Computing is the growth engine for the data center business (2012-02-28)<br />
I was in Washington DC last week to meet with several cloud companies, walk one of them through my Silver Spring, Maryland data center, and network at the GovCloud event being held in DC. If you have read my blog at all over the past several years you will quickly figure out that I was not a fan of cloud. I didn't get it, because it wasn't mature enough to offer anything better than what I could get at a managed services provider like Savvis/CenturyLink or latisys (as examples).<br />
<br />
What I saw last week has turned me on to what cloud delivers. Cloud computing, and the companies who embrace it, have gone from promising the world and delivering little, to actually looking at the architecture, looking at 'what if's' based on actual IT requirements, applying financial filters to the noise and coming up with a consumable offering.<br />
<br />
One of the companies I met with - Piston Cloud - had a solid offering built on OpenStack. I knew their CEO Josh when he was at NASA and I was at CoreSite, and we started a big ole sandbox for cloud companies (Eucalyptus, RightScale, etc.) to tie into what NASA was doing, as well as get their software optimized on hardware platforms. HP was the dominant hardware vendor at the time. Fast forward: I left CoreSite to found a data center company, and Josh left NASA to mature what he had built there into a commercially viable cloud OS and start Piston Cloud.<br />
<br />
Our meeting was the first time we had connected face to face in a couple of years and we picked up where we left off. There was some reminiscing and laughing at mistakes we had made along the way in getting to where we were and there was something else - an electricity that was palpable when we shifted the discussion to actually using cloud to deliver a real solution - not just 'we do cloud'.<br />
<br />
I live in the pipes and boxes/buildings that cloud resources use to provide the elasticity and scalability native to a cloud environment. One of my pet peeves was always the lack of discussion around security, having played in the IDM/access control space with some large companies a few years ago. The data center I bought bucked the trend in a number of ways, and I always believed the cloud vendors would mature, come back to earth, and look at not only their public/private/hybrid offering but where they put their environments in the first place. So the data center I bought was NOT in Northern Virginia with 50 other providers, but in Maryland - the other state with hardened bunkers for government and military personnel in the event there is another major 'Oh Shit'. The facility also had a global bank as a tenant, so I knew that security, and the validation of that security, would be embedded in the design of the facility - and I was right.<br />
<br />
So when Josh and I sat down to talk cloud - and security - Piston Cloud was at a different layer in the stack, but also focused on a hardened solution for the cloud - in their case a hardened OS. Long story short, our core beliefs were embedded in what we were doing - delivering the possibility and <i><b>the option</b></i> of a secure Cloud OS from a secure facility with the audit trails to prove it.<br />
<br />
There is much more to be discussed, but it was great to see another company founder develop a solution centered around their core beliefs - security in the cloud is a problem, so let's fix the problem and go to market. I will blog more as time and NDAs allow, but for organizations enamored with the cloud - welcome. And for those organizations really looking for a secure option, I think we may have something worth talking (and blogging) about.<br />
<br />
Cloud Security - Is cloud the industry Monorail? (2012-01-11)<br />
For those of you not familiar with the Simpsons animated TV series, the title of this entry comes from Episode 413 - Marge vs. the Monorail. It became a widely used reference for ideas that could not stand up to logic, but became real when people's passions overtook their logic. In a nutshell: after Mr. Burns is caught storing his excess nuclear waste inside Springfield Park's trees, he is ordered to pay the town $3 million. The town is originally set to use the money to fix Main Street, but the charismatic Lyle Lanley interrupts and convinces the town to buy one of his monorails. Of course it doesn't work as advertised, and there is a major safety issue that ends up threatening the town.<br />
<br />
So what does this have to do with cloud?<br />
<br />
Cloud is the latest IT buzzword having massive dollars thrown at it in an effort to provide all sorts of things: flexibility, elasticity, new paradigms of computing - the list goes on. What cloud didn't do early on was provide sufficient security, and so a new moniker was thrown out - the Private Cloud. That was the veneer of security for the cloud. Then cloud evolved again to hybrid cloud, where you could mix and match Private Cloud and Public Cloud based on the data involved. Ta da! We fixed it. Or so we thought.<br />
<br />
Look, I get cloud. I love the idea of cloud. I think we will see the development and creation of even more paradigms that evolve over time but let's not forget the basic tenets of moving things outside the castle walls:<br />
<br />
1. You are buying an SLA (Service Level Agreement). You are not buying a Cloud.<br />
2. You are buying Risk. You are not buying a solution.<br />
3. Your Cloud will only do what it is designed to do. If your processes suck, they will suck in the Cloud too.<br />
<br />
When I read articles about outages - especially cloud outages - I look a lot deeper at what happened. Customers seem baffled (a.k.a. pissed off) that the cloud went down. I ask: well, did you design it to include the movement of data, workload, and storage, and ultimately were you willing to pay for the level of redundancy you THOUGHT was included but wasn't? Remember, you bought the SLA. You paid for the risk you were willing to accept. You made the call. The cloud did what it was supposed to do - it failed when the site went down.<br />
<br />
In all of the articles I have read, I have not seen any coverage of the type or tier of facility the Cloud is housed in. I'll bet that if I offered Cloud served from the island of Jamaica for pennies, I would get laughed at. However, if I just offered cloud for pennies, my sales people's phones would ring off the hook. What's the difference? Disclosure. Assessing risk. And not assuming that the Cloud is what you THINK it is. <br />
<br />
The Cloud is what you design and pay for. Whether it's in Jamaica in the back room of a Rum Bar or in a Tier IV facility in Silver Spring MD. The rules that are in the real world still apply in the Cloud world. <br />
<br />
If it's highly valuable, treat it that way, and design it accordingly. Don't buy a monorail, no matter who is selling it.<br />
<br />
PUE, DCiE, and other stuff that doesn't matter much... (2011-12-13)<br />
The Gartner conference was one of their best attended events in 2011, proving yet again that the space is hot, appears to have a bright future, and like other hot sectors before it, is trying to mature through the development of standardized measurements that will be embraced by vendors, users, and analysts, and give owner-operators bragging rights and a new area to compete in. PUE, DCiE and other yardsticks are competing to be the 'best' yardstick for efficiency.<br />
<br />
Sorry to pee in your Wheaties folks - we already have one. Cash.<br />
<br />
I have done enough deals to know that PUE matters right up until the financial analysts get involved. Once they do, it comes down to costs. Total costs. Lowest cost wins. Not every time, but show me a CFO who, as a rule, authorizes paying more for something than it's worth. <br />
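A toy illustration of why the analysts win that argument - the numbers below are entirely made up, but the shape is real: the facility with the 'better' PUE can still lose on total cost.

```python
# Made-up numbers: a lower-PUE site can still lose on total annual cost
# if its power rate is higher. Cash, not PUE, decides the deal.
HOURS = 8760  # hours per year

def annual_cost(it_kw, pue, rate_per_kwh, annual_rent):
    """Total yearly spend: electricity for the whole facility plus rent."""
    energy = it_kw * pue * HOURS * rate_per_kwh
    return energy + annual_rent

site_a = annual_cost(1000, 1.2, 0.11, 400_000)   # great PUE, pricey power
site_b = annual_cost(1000, 1.8, 0.05, 400_000)   # worse PUE, cheap power
print(f"site A (PUE 1.2): ${site_a:,.0f}")
print(f"site B (PUE 1.8): ${site_b:,.0f}")
```

Site B is several hundred thousand dollars a year cheaper despite the uglier PUE, which is exactly the kind of comparison a CFO runs while the efficiency debate is still going on.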
<br />
I will also put a caveat in to say that this is more true of multi tenant facilities than single tenant facilities. Single tenant/purpose built data centers can do what they want so long as it makes financial sense and delivers on the business objectives for the company and THAT company only. Multi tenant facilities must be far more INCLUSIVE of a wider array of requirements to service the greatest number of clients and their desires.<br />
<br />
Both single and multi tenant data centers will have a mix of manufacturers, densities, layouts, loads, and preferences. The multi tenant facility needs to factor in broad compliance, placement of densities, weight, and other highly variable nuances. Like Rick on <a href="http://www.history.com/shows/pawn-stars" target="_blank">Pawn Stars</a> says - 'You never know what is going to walk through that door.'<br />
<br />
Gartner Data Center Conference brain dump... (2011-12-08)<br />
I just got back from 3 1/2 days in Las Vegas at the Gartner Data Center Conference, and will be writing about many of the things they presented, only in a lot more detail. In fact, the #1 piece of feedback I heard was that the presentations are more sizzle than steak, and that they have to do a WHOLE lot more to provide the details behind what they present.<br />
<br />
Monday's keynote was pretty good and was given by Dave Cappuccio. It covered many stats and numbers that were fun to listen to, and I thought it was a decent way to ease into a conference. Some of the interesting nuggets I heard:<br />
<br />
30 billion pieces of content were added to Facebook in the past month<br />
Worldwide IP traffic will quadruple by 2015 - care to guess why?<br />
<br />
Data Centers consume 100 times more electricity than the offices they support<br />
The percentage of a college student's texting to phone call ratio is 98.4% to 1.6%<br />
Within 4 years the bandwidth I/O per rack will increase 25x<br />
<br />
Needless to say this is all pointing to growth - whether a company is prepared for it or not. There are some other topics I will be jumping into over the next few weeks, as Gartner left a lot to be desired and discussed, and in some cases was pretty off. So stay tuned and let's see what I come up with once I shake the red-eye jet lag...Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-1303726713094681630.post-57098103271891737222011-11-09T13:29:00.000-08:002011-11-09T13:29:01.249-08:00Is innovation exclusively for single tenant datacenters?I am a data center geek. I love to read about them, see who is doing what, learn about new stuff and approaches that are being implemented - all of it. What I have begun to notice is that the innovation that is happening - while totally awesome - will never fly in 95 out of 100 facilities in the world. I have begun to ask the question 'why' to other geeks and data center users.<br />
<br />
Without question the number one response is - because when you run a single tenant facility you can take risks and do things that mitigate a common set of risks - and since one company pays, they can do what makes sense for their business alone. <br />
<br />
Others include - 'because they have an innovation budget'; 'it's about controlling their destiny'; 'the penalties for an outage are not as severe as outside the single tenant walls'.<br />
<br />
In my opinion I think it has to do with a combination of factors - most of which have been mentioned above, with one exception - evolution. As companies move out of a cabinet in the ghetto colo company, they evolve to require more choices and options, different configurations, and vendors who deal with companies the size they hope to become. Once they scale up to ~10MW (100,000 square feet) they evolve into looking at finish-to-suit space and finally building their own. Why do I think this?<br />
<br />
I saw it happen with Facebook. They started by leasing cabinets in a few key markets, scaling that, then taking cages, then taking suites, then floors of buildings and now are building their own. Their headcount for their data center operation grew to that of a mid size provider, and there does come a point where it is more cost effective to build your own than to have to keep moving like a shark looking for open suites, floors, or whitespace. <br />
<br />
I have seen it happen in reverse with many banks too. They built their own and leased where they wanted, acquired with abandon, and then had an impressive footprint. Then they figured out - as many did - that the network was more important than real estate, and so the consolidation began: fewer huge-footprint supercenters, because it was more effective to run a few huge facilities than dozens of all different sizes. <br />
<br />
So when I think about why all of the innovation appears to happen in the huge facilities, I look at the evolution of Apple, Facebook, Google, ING, Citibank, Morgan Stanley, etc., and they all find a different path to the same location - a risk-averse facility that makes economic sense. Innovation is tied to reducing risk or cost, and when you are the only one who has to live with the decisions and their outcome, single tenant facilities will continue to out-innovate the multi-tenant facilities, which have an exponentially larger set of risks and requirements to service. <br />
<br />
What do you think?Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-1303726713094681630.post-16574254113083812352011-09-08T15:22:00.000-07:002011-09-08T15:22:46.593-07:00Google's Data Centers use 26 Megawatts - What's the fuss?In the past 24 hours there has been a <a href="http://www.nytimes.com/2011/09/09/technology/google-details-electricity-output-of-its-data-centers.html">ton of press</a> about Google's announcement that they use 26 million watts of electricity. Put another way, that is 26 Megawatts, and put another way, what is all the fuss about over 26 megawatts?<br />
<br />
Is it that they ONLY use 26 Megawatts?<br />
Is it that Google is so efficient that they run this gigantic brand on such a small amount of electricity?<br />
<br />
That has to be it, because folks, 26 megawatts in the data center business is a medium size operation.<br />
<br />
Just for comparison's sake, <a href="http://vantagedatacenters.com/">Vantage Data Centers</a> is building a campus twice that size in Santa Clara. Digital Realty Trust has a project in Dallas that is 4 TIMES that size to service clients. <a href="http://www.bytegrid.com/">Bytegrid Holdings LLC</a> has a 9.6 megawatt facility about one third the size of Google's entire operation. Just sayin'.<br />
<br />
What I also have yet to review fully is the carbon footprint. It was mentioned in <a href="http://www.datacenterknowledge.com/archives/2011/09/08/googles-energy-story-high-efficiency-huge-scale/">another, better, article</a> that they get 25% of their electricity from renewable sources. Not LOW CARBON sources, but renewable sources. The thing I have paid attention to for years is what I call the carbon chain - what is the carbon footprint from start to finish on a project, and then the impact on an ongoing operational basis. And I'm sorry, buying a carbon credit doesn't count. That's like buying recycled paper with Monopoly money. <br />
<br />
It mentions that Google builds their own facilities. What about the carbon it takes to manufacture the steel, the concrete, the copper, and other raw materials used in a facility? Then there is the transportation. How many diesel-burning trucks are hauling in heavy loads of new equipment, produced from scratch from raw materials that themselves had to be sourced? You get the picture.<br />
<br />
Why not re-use a facility as a rule vs build new? Why not cap the radius of transport of newly manufactured goods to 100 miles or less? Why not look at the whole impact and subsequent carbon footprint and measure and improve that? Just sayin'...<br />
<br />
Still, great numbers by Google. <br />
<br />
And a side note to Noah Horowitz at the Natural Resources Defense Council who '...cautioned that despite the advent of increasingly powerful and energy-efficient computing tools, electricity use at data centers was still rising, as every major corporation now relied on them. He said the figures did not include the electricity drawn by the personal computers, tablets and iPhones that use information from Google’s data centers. <br />
<br />
“When we hit the Google search button,” Mr. Horowitz said, “it’s not for free.”<br />
<br />
Trying to link energy usage of other corporations, personal computers, and iPhones to Google's numbers is like blaming a person's alcoholism on the fermentation process. It's a stretch at best. And what are you going to do to stop this reckless usage of the most efficient data center footprint on the planet - stop Googling? Didn't think so. Me neither.<br />
<br />
Nothing's free, but this data shows that if we are going to search and want to be green about it: Go Green, Go Google.Unknownnoreply@blogger.com1tag:blogger.com,1999:blog-1303726713094681630.post-14060317137769339952011-08-25T13:34:00.000-07:002011-08-25T13:34:52.031-07:00Hybrid Cloud = Same Sh*t different Offering?I have had a number of conversations the past two weeks about Cloud computing and what companies are asking for and what they are actually buying. My observation is this:<br />
<br />
Companies want providers to offer public and private cloud elasticity, but only buy hybrid. <br />
<br />
This is following another form of pretzel logic (yes, I like <a href="http://www.steelydan.com/">Steely Dan</a>. A lot.), like the folks who say they need to be outside of THE blast zone. (What zone? For which type of blast?)<br />
<br />
First one first - duh. Was there ever ANY other choice than the hybrid cloud? Not all data is meant for public consumption, in spite of compliance rules trying to impose transparency (visibility), and so the notion of a public cloud was right out of Woodstock, man. The public cloud was exciting because it took the pressure off of IT departments to have to plan way ahead of stuff. If they needed capacity, they went out and got it someplace that had it and didn't have to go to a CFO and ask for $500K of gear to support the new marketing initiative, or cover their *ss when they underestimated the website traffic when <a href="http://www.justinbiebermusic.com/">Justin Bieber</a> played a benefit concert for <a href="http://www.weather.com/">Hurricane Irene</a> survivors. It solved a lot of unmaterialized logistics and scoping problems, and was branded as being insecure - and we couldn't have that in the age of WikiLeaks and Anonymous hacking, could we...<br />
<br />
Then the pendulum of cloud swung back to private cloud. By my assessment, private cloud was just another way to get billed for computer stuff by the hour - the No-tell Motel option of computing. If you wanted something quick and dirty, with the movie titles not printed on the receipt, you could get your resources by the hour. In this case all the data and network and everything about the environment was kept out of plain sight and secured.<br />
<br />
Then it's as if the blinking light went steady on us - we can have both things - the ability to not have to know in advance what we will need for resources AND the ability to keep sensitive stuff more secure.<br />
<br />
Is it just me, or isn't this what managed colocation and managed services have been doing for the past 20+ years?<br />
<br />
Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-1303726713094681630.post-7376138464864166842011-08-18T11:27:00.000-07:002011-08-18T11:27:07.608-07:00Why are containerized modular data centers the new Black now?What took you so long?<br />
<br />
I was at Data Center Dynamics in Washington DC on Tuesday, and you may have thought it was Modular Dynamics instead. It seemed as if the data center world finally came around to what I have been seeing, studying, deploying, and writing about since 2008. That's not to say that the modular solutions have gone mainstream yet, but the conceptualization has matured a lot, as expected.<br />
<br />
HP had a scale model of the EcoPod, which was cool to see since up to that point I had only seen PowerPoint slides about it and have yet to see the real thing. PDI was showing off their modular solution as well, which will be watched closely as they have some patent infringement allegations they are fending off. ActivePower had their PowerHouse scale model on display too, and given the coziness between HP and ActivePower in recently announced deployments, I could easily see them sharing a booth at future trade events.<br />
<br />
Bytegrid was the only data center owner-operator at the event who was able to speak to containers - and did, openly talking to several people about the good, the bad, and the stuff that trips you up on deployment, with specifics to back up the discussions. The three other owner-operators there - GigaPark, QTS, and Powerloft - weren't talking about supporting modular solutions.<br />
<br />
Based on what I saw, acceptance has increased dramatically, demand is rising right along with it, and the Achilles heel that was there three years ago is still there today - being able to answer 'So where can I put one of these things?' with something other than 'Wherever you want'. ByteGrid is on to something and will be watched.<br />
<br />
Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-1303726713094681630.post-12465211858283523122011-07-26T06:56:00.000-07:002011-07-26T06:56:02.729-07:00Rethinking the Value (and cost) of dataI have had a number of conversations recently with some hospitals about storage requirements, and specifically where to put more gear as their data centers are bursting at the seams, or in some cases becoming structurally weakened because of floor loads. What I figured out pretty quickly is that there is an issue that is fairly ubiquitous to these organizations, but an issue that I believe spills over into any firm who stores data - all data is treated the same and is costing firms millions each year to treat all their data the same.<br />
<br />
Here is one example:<br />
<br />
A hospital is looking for data center space to grow their storage footprint into as they have tapped out every watt of space they have in the hospital. That's not even the real issue. The real issue is that because they have so many racks of gear, and heavy storage arrays on floors of the hospital with floor loads insufficient to support the weight, that the hospital is experiencing structural issues. Floors are sagging under the weight of all of the data being stored. Literally.<br />
<br />
Many vendors they have talked to are simply telling them that's unfortunate, but they need to buy more storage. However, the vendor won't get the sale unless his uncle is in the construction business and can structurally retrofit higher floor loads before the new arrays show up. So what does the hospital do? <br />
<br />
Hospitals see patients, maintain facilities, manage compliance conformity, and administer services. Their IT guys set up storage arrays, networking equipment, and manage the applications that run the business. They are not in construction and they are not data center experts, and yet they are the go-to guys to figure all of this out.<br />
<br />
The solution that we began to discuss - and that I wanted to share - is that there is a fundamental change that MUST take place in how they think about data - storing it, managing it, accessing it, and realizing it's NOT all the same. The other thing that had to happen was to look at the way their business operates (no pun intended) and structure things based not on how their data flows today, but on how it needs to flow given how their business runs.<br />
<br />
Here is a specific example -<br />
<br />
New patients create a lot of new records. New paperwork, new insurance information, new MRIs, new test results, new billing codes, new invoices, etc.<br />
<br />
Existing patients have this data on file, and their data footprint changes with an office visit, an MRI, a prescription, and all of the associated charting that needs to happen to document the recent activity and results.<br />
<br />
What are the commonalities of the two different kinds of patients? They create data around events - visits, procedures, tests. What are events tied to? A date. Can the date field be used to assess the freshness of data, and another date identifying the next event indicate the likelihood of accessing that data? Hmmmm. Interesting thought stream just started.<br />
<br />
Where we got to was that whether or not the patient was new or existing, their events drove the need to access data. A new patient would likely need to be seen again shortly and would need to have their records accessible whenever. An existing patient who came in to have a sudden sports injury looked at would probably need to have their data file accessible since there would be consults, MRI's/X-rays, referrals, etc. taking place rather immediately.<br />
<br />
A patient who is on a 'check in once a year' cycle doesn't need their data accessed 363 days a year on average. They come in, get looked at, maybe a test is required (but we know what it is ahead of time) - so why is the 'active' patient's data sitting with a 'maintenance' patient's data in the same facility, with the same cost, and with the same SLA (service level agreement) or DAA - data access agreement? Why would you pay for 363 days of something you don't need? I equate it to paying for a rental car in Seattle for 365 days while I am only in Seattle two days a year - why would I do that? <br />
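To make the date-driven idea concrete, here is a minimal sketch in Python of how event dates could drive storage tiering. This is purely illustrative - the tier names, the 90-day and one-year thresholds, and the record fields are my assumptions, not any hospital's actual system.<br />

```python
from datetime import date, timedelta
from typing import Optional

HOT_DAYS = 90    # assumed threshold: recent or imminent event -> fast, expensive storage
WARM_DAYS = 365  # assumed threshold: activity within the year -> mid-tier storage

def storage_tier(last_event: date, next_event: Optional[date], today: date) -> str:
    """Pick a storage tier from a patient's last and next known event dates."""
    # A scheduled event coming up soon pulls the record back to hot
    # storage just before it is needed (the once-a-year check-in case).
    if next_event is not None and (next_event - today) <= timedelta(days=HOT_DAYS):
        return "hot"
    age_days = (today - last_event).days
    if age_days <= HOT_DAYS:
        return "hot"
    if age_days <= WARM_DAYS:
        return "warm"
    return "archive"  # the 'maintenance' patient: cheap, slower storage

# Sports-injury patient seen last week: data stays hot.
print(storage_tier(date(2011, 7, 20), None, date(2011, 7, 26)))             # hot
# Annual-checkup patient, last seen 14 months ago, next visit months out: archive.
print(storage_tier(date(2010, 6, 1), date(2012, 5, 1), date(2011, 7, 26)))  # archive
```

The same date fields could then map each tier to a different cost and access agreement, instead of paying 'hot' prices for data nobody will touch for months.<br />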
<br />
The point here is that yes, they do need more storage. They can't put it where they usually do - in their data center - and need to find someplace else to expand to. But before they just go out and buy new arrays, new cabinets, new servers, and lease new space, they MUST look at things in a new way, so that they are not only solving the immediate issue - 'we need more storage' - they are solving the fundamental issue - all data is NOT the same and does not need to be managed the same, or cost the same. <br />
<br />
Thoughts?Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-1303726713094681630.post-89071625673413276932011-06-15T10:25:00.000-07:002011-06-15T10:25:10.987-07:00ByteGrid Launched<div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-wJyUaKseOps/TfjqiQMxlEI/AAAAAAAAADg/JgnXoGx_iyY/s1600/silverspringaerial.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="213px" src="http://1.bp.blogspot.com/-wJyUaKseOps/TfjqiQMxlEI/AAAAAAAAADg/JgnXoGx_iyY/s320/silverspringaerial.jpg" t8="true" width="320px" /></a></div><br />
I thought it was high time I got around to announcing the launch of my company - <a href="http://www.bytegrid.com/">ByteGrid</a>. As the name implies, it is the fusion of data (Byte) and electricity & telecom infrastructure (Grid), and since we are a data center company, fitting as a short & sweet description.<br />
<br />
A few friends have encouraged me to blog about the whole experience, and now that we're a real company I find I have a lot less time to tell the whole story, but a few things are important for others to know and were extremely important to me personally to know:<br />
<br />
1. Building a company is difficult. <br />
2. Starting a company is more difficult.<br />
3. Standing on the other side of the Starting line, with the Finish line out ahead of you, is incredibly rewarding and puts the difficulty in perspective.<br />
4. Many friends and family will be supportive the first 30 days after the decision to go out and do something on your own, and downright mean and skeptical from then on. They are more scared than you are.<br />
5. Do not EVER let someone talk you out of what you know and believe to be the right way to do something, especially when you hear 'If you changed _________ you would get funded faster...' or anything that dilutes your vision. The vision is yours, not theirs, and only you know the right way to execute it. A lot of people have money. Few have a vision, and even fewer the intestinal fortitude to stay true to their vision.<br />
6. Once you take on other people's money, they have a big say in how things get done. And they should.<br />
7. If you are not used to constant change and competing demands, and you like things nice and orderly, you won't like what you're doing. The best laid battle plans change the instant the first shot is fired.<br />
8. Be intimidated by no one. You did something that few people ever do, and even fewer stick with for any length of time, and if they don't understand that, they won't, and keep moving.<br />
9. You won't do it alone. After I brought on partners, it was amazing how quickly things came together and how the right complement of skillsets balanced one another. In our case my partners added deep financial expertise, deeper operational expertise, and legal expertise to my sales talents. We were a well-oiled machine and delegated better than any team I have worked with and for, because we knew who was best equipped to handle a situation in spite of our egos.<br />
10. If you start a company with the sole reason to make a shit ton of money, then almost every decision you make will be short-sighted to that end. If you go into business for yourself because you like it, because you will do something better than anything you know of, and because it is a natural extension of who you are, then the shit ton of money will follow, your decisions will be sound and thought out, and you decide what the right price for your efforts is - not a spreadsheet.<br />
<br />
Maybe this helps some of you get off the dime to do something, or keeps others from doing something they are not prepared to do. Either way, it's my experience, my opinions, and the next chapter has yet to be written.<br />
<br />
Check out the ByteGrid website, and take a look at the first data center we acquired. It's a Tier IV gem, and I will blog next about why we bought it and why it is a fantastic facility. I will of course be biased; however, I backed up how solid it is with a lot of money - so I put my money where my mouth is too.<br />
<br />
If you want a copy of my data center site selection guide - it's still available. mmacauley at bytegrid. comUnknownnoreply@blogger.com0tag:blogger.com,1999:blog-1303726713094681630.post-73947356483452327642011-04-13T07:47:00.000-07:002011-04-13T07:47:19.199-07:00The Evolution of Computing - Have we stopped dragging our knuckles?The innovation of intangible things dependent upon tangible things has been the model for computing as a whole, practically since the <a href="http://en.wikipedia.org/wiki/UNIVAC_I">Univac 1</a> hit the scene. Software - from word processing, to spreadsheets, to printer drivers, to web browsers, to java apps, and virtual machines - all intangible intellectual property innovation that was dependent on tangible things to be created, to operate, and to add value to. Keyboards, green screens, motherboards, video cards, speakers - you get the idea.<br />
<br />
One thing that I have been spending a great deal of time thinking about is which side of the equation - the intangible or the tangible - will be cannibalized first?<br />
<br />
We have seen the convergence most noticeably in electronics. Remember when you had to have a <a href="http://www.google.com/#q=turntable&hl=en&safe=off&prmd=ivnsr&source=univ&tbm=shop&tbo=u&sa=X&ei=yralTaWwCoOI0QGMyrnqCA&sqi=2&ved=0CG8QrQQ&bav=on.2,or.r_gc.r_pw.&fp=a65bd7ab2140114e">turntable</a>, cassette (reel to reel or 8-track), amplifier, and radio tuner to have a 'stereo' system? Then it was CDs, and now the move from CD to mp3 (or other digital format) to just 'music' is happening. The battleground there is on the format, which is a pissing contest that no one wins in my opinion. Why? Because there is always someone who can think more creatively than the developer whose creativity was used up creating the format, and who will build an intangible interpreter/converter to make it all work.<br />
<br />
The intangible side of the equation has seen its share of cannibalization (aka M&A), where entire companies who created a software program or operating system are now simply features in a larger 'suite'. Think widgets, apps, and all of the other components that get stitched together to make a new intangible thing in the 'cloud' as an example.<br />
<br />
So if I think about all of the noise, innovation, branding, positioning, and other terms used to describe the marketplace, two groups of tangibles are the veneer of all the intangibles - the source point and the end point.<br />
<br />
In essence I am starting to believe that the endpoint (mobile device, tv, tablet, other) and the sourcepoint (data center) will be the only things that matter in the market. The UI will be embedded in the endpoint, and everything between the sourcepoint and the endpoint is just zeroes and ones. People (consumers/viewers) couldn't care less about the OS, the application used under the covers, or anything else intangible. I think <a href="http://www.apple.com/">Apple</a> gets this (I am NOT an Apple fan at all BTW) better than most; however, the sands are shifting and Apple is losing market share to Google daily. While Apple and Google have both built endpoints and massive scale data centers, Apple continues to offer a black box of an OS and other apps, and to alienate Adobe and other platforms. Microsoft has tried this for years - and they don't own the endpoint, and have some data centers for their Live apps but nothing as established as what Apple and Google have. And Microsoft loses market share every day as well - even on the desktop.<br />
<br />
So as the next step in the evolutionary process I can't help but think that things will converge - and in the process the perception of what is vital will change right along with it and distill down to data centers (sourcepoints) and devices (endpoints).<br />
<br />
Content will be the fuel for the convergence - and the innovations in the presentation of content will drive evolution at the endpoint. Because of this, the sourcepoints will be more reliant upon network density for cost-effective distribution, with latency being the key variable. We are already seeing this with Facebook and Apple leasing and building sourcepoints, and with recent announcements by <a href="http://www.bloomberg.com/news/2011-04-07/dell-plans-to-spend-1-billion-over-two-years-on-data-center-expansion.html">Dell to build 10 new sourcepoints</a>. Businesses who can own both the sourcepoint and the endpoint are making investments in owning both ends of the content consumption chain.<br />
<br />
Your thoughts?Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-1303726713094681630.post-80406642293386708242011-03-15T13:52:00.000-07:002011-03-15T13:52:19.552-07:00The Seismic Shift of Data Center PlanningFriends of mine know that I have been beating on a drum for over 15 years about proper site selection for data centers. In fact I have recently blogged about it again and with all that is happening in Japan, it got me thinking even more about it. The other thing I have also spent more time thinking about is disaster recovery for obvious reasons.<br />
<br />
I will say it again - site selection is the most important feature of a data center. The 9.0 earthquake merely reinforces that in a big way. I would not want to be a data center owner/operator in Silicon Valley - 10 years ago or now. The two biggest reasons are:<br />
<br />
1. Seismic activity<br />
2. Logistics post disaster<br />
<br />
The first reason is more obvious today because of the Japan earthquake. While the facilities there were well designed in and of themselves, they were not designed to withstand a 9.0. A logical oversight, in my opinion: if the worst earthquake to date was a 6, not designing for something tens of thousands of times more energetic seems logical. Until it happens. <br />
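As a sanity check on just how big that jump is: the magnitude scale is logarithmic, and the standard energy-magnitude relation puts released energy proportional to 10^(1.5·M). A quick Python sketch:<br />

```python
# Energy ratio between two earthquake magnitudes, using the standard
# energy-magnitude relation: E is proportional to 10^(1.5 * M).
def energy_ratio(m_big: float, m_small: float) -> float:
    return 10 ** (1.5 * (m_big - m_small))

# A 9.0 releases roughly 32,000 times the energy of a 6.0.
print(round(energy_ratio(9.0, 6.0)))  # 31623
```

So a facility designed for the worst event ever observed locally can still be four to five orders of magnitude of energy short of what Japan just experienced.<br />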
<br />
The second reason is even more impactful when we look at not only the immediate aftermath of the earthquake but the tsunami that hit right afterwards. I have seen about 30 minutes of videos, and my layman observation is that it is worse than a flood, because the earthquake loosens everything and gives it mobility, and when you add a wall of water to move debris around - and by debris I mean cars, buildings, and dumpsters - it clogs streets with several feet of debris that has to be moved to make pathways passable again. This means that people cannot get to or leave the data center, fuel deliveries for generators are hampered, service vehicles cannot reach them, and in some cases entryways and exits are blocked at the data center itself.<br />
<br />
So what are some solutions?<br />
<br />
Take site selection seriously. Look at your disaster recovery plans - not which data center your customer data will fail over to - but the logistical issues you will be faced with in the event of a real disaster happening. Is there food and water for employees? How far away is the fuel for the generators? Are there multiple ways to get it there? What happens if public transit cannot operate? Can you bike or canoe your way there if necessary? What about the conduits carrying electricity and telecom to the site? Can they withstand a ground shift of 20 feet? (Japan is 13 feet closer to the US after the earthquake.) How far away are the power plants? What is their capability to provide service? How will they do this? What is the wind direction? If you are near railroad tracks, what is carried on them that is toxic or can close pathways to and from the facility? Who are your neighbors? Is there anything that can float or blow into your facility? Into the infrastructure yard?<br />
<br />
Things to avoid - don't base a decision on how quickly you can get in and out for a site tour. Don't think about how easy it is to get to and from the office/data center; look around the facility at the roads, the access points, railroad tracks (especially active ones), and flight paths (remember 9/11, when they grounded all planes and sent those in the air to the nearest airport?), and don't think something bad won't happen because it never has. <br />
<br />
<a href="mailto:bytegrid@gmail.com">bytegrid@gmail.com</a> is how to reach me.<br />
<br />
Ask the right questions and invest.Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-1303726713094681630.post-80406642293386708242011-02-01T07:59:00.000-08:002011-02-01T07:59:11.201-08:00Green Data Center Retrofit Financing Available - Game changer?I have been spending some time, ok a lot of time, looking at why companies are not taking advantage of the various rebates and incentives for making data centers more energy efficient. There are so many programs offered by utilities, state and local Governments, and the green energy marketplace that I wanted to figure out why more projects weren't getting off the ground. What I found can be summed up in a word - money.<br />
<br />
Without exception, access to financing was the number one reason projects never got started. Companies would engage utilities, energy audit firms, construction firms and other professionals to examine their current energy usage so that they could establish a baseline of what their energy usage profile is today. Many of these services are provided free of charge, and some firms charge for their services.<br />
<br />
Once the baseline was established through a thorough energy audit, a project plan was drafted detailing the equipment that could be installed to harvest energy savings. In most cases I found that the savings were substantial - 30-50% on average. In real numbers, if a data center operator spends $200,000 per month on their electricity, they would pay $100,000-140,000 per month with the new equipment, saving upwards of $1M per year. This is a no brainer, right?<br />
<br />
Unfortunately it is not a no brainer, and here is why: the new equipment to harvest the savings is expensive and goes on the balance sheet. So if a company has an extra million dollars or two to spend, will it be spent on a data center upgrade or retrofit, or on expanding the business? Most companies opt for not spending the capex to harvest the savings, or only spend money on very short-term ROI projects and only take a fraction of the savings back.<br />
<br />
So once I figured out that the issue wasn't a desire to do these things (who wants to be known as the company that doesn't support green initiatives), or that they didn't know where to start or what to do - after all they had the audit and project plan in their hands - I focused my attention on solving the problem - where can I find or put together a financing solution that solves the problems?<br />
<br />
I would love to tell you that I was smart enough to know exactly who to call to solve the problem - but as many of you who know me know, I am not that smart. I am lucky in the sense that I am not afraid to pick up the phone, ask a lot of questions, and learn what I don't know. So I learned a lot, and then got lucky when a colleague in the financial industry called me and said 'I think I have something'.<br />
<br />
Something indeed. <br />
<br />
We have put together a financing program for companies who want to go green, harvest energy savings, and eliminate up to 100% of the capex for the equipment, installation, and maintenance services. The beauty of the program is that the savings are dovetailed into the financing, so the owner/operator is never out of pocket: if the savings are 30% per year and the cost of financing amounts to 15% per year, not only does the company get the benefit of the cost savings, they can structure it so that they add cushion to their monthly opex. <br />
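Here is a back-of-the-envelope sketch in Python of how the dovetailing works, using the round numbers from this post (the $200,000/month bill from the earlier example, 30% savings, 15% financing cost). One assumption I am making for the illustration: both percentages are taken against the same baseline bill - the actual program terms would of course govern.<br />

```python
# Illustrative cash flow for a financed green retrofit, using the
# round numbers from this post. Assumes both rates apply to the
# same baseline electricity bill.
baseline_bill = 200_000   # $/month electricity before the retrofit
savings_rate = 0.30       # 30% energy savings from the new equipment
financing_rate = 0.15     # financing cost, as a share of the old bill

monthly_savings = baseline_bill * savings_rate      # $60,000
monthly_financing = baseline_bill * financing_rate  # $30,000
net_cushion = monthly_savings - monthly_financing   # $30,000/month

print(f"Savings:   ${monthly_savings:,.0f}/month")
print(f"Financing: ${monthly_financing:,.0f}/month")
print(f"Cushion:   ${net_cushion:,.0f}/month, ${net_cushion * 12:,.0f}/year")
```

The owner/operator pays less each month than they did before the retrofit, even while the equipment is being paid off - which is the whole point of dovetailing the savings into the financing.<br />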
<br />
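To make the 'never out of pocket' claim concrete, here is a minimal sketch using the illustrative 30% savings and 15% financing figures above. One assumption on my part: I express the financing cost against the old monthly bill so the two rates are comparable - the actual program's payment structure will vary by deal:

```python
# Dovetailed financing: the harvested savings fund the payments, with cushion left over.
monthly_bill = 200_000    # pre-retrofit electricity spend, USD/month
savings_rate = 0.30       # audited energy savings
financing_rate = 0.15     # financing cost, expressed against the old bill (assumption)

monthly_savings = monthly_bill * savings_rate    # what the retrofit frees up
monthly_payment = monthly_bill * financing_rate  # what the financing costs
cushion = monthly_savings - monthly_payment      # net positive opex each month
print(f"savings ${monthly_savings:,.0f}, payment ${monthly_payment:,.0f}, "
      f"cushion ${cushion:,.0f} per month")
```

With these example numbers the savings ($60,000/month) cover the payment ($30,000/month) twice over, which is the whole point: the retrofit pays for itself out of the bill it shrinks.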
Of course if a company has money in their budget for capex purchases they can spend that allocation and finance the rest and do even better. So what's the catch?<br />
<br />
At a high level here are the conditions that must be met:<br />
<br />
1. Project sizes need to be in the $5M-100M range, based on an energy audit that has been done (or will be done)<br />
2. The Owner operator must have investment grade credit (BBB or better)<br />
3. The owner operator must be willing to sign a new maintenance agreement on the equipment so the lender knows the new equipment will be taken care of<br />
<br />
The whole process - from energy audit to project start - takes about six weeks, or less.<br />
<br />
The program right now is geared towards the large data center users for obvious reasons - they use a lot of electricity, a lot of it inefficiently, and the greatest improvement can be made with these users. Rest assured that I am working on options for smaller companies with smaller footprints so stay tuned. The other benefit to this program is it is specific to the savings - any tax breaks or additional rebates are kept by the owner operator and not factored into the program.<br />
<br />
If you want more information on this program please email <a href="mailto:pelagicadvisors@gmail.com">Pelagic Advisors</a> and they will reach out.Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-1303726713094681630.post-38414576108391885272011-01-10T10:18:00.000-08:002011-01-10T10:18:16.830-08:00More on Site Selection - Where is the Holistic View?I got a requirement sent to me on Friday afternoon for a site selection on a large requirement >50MW. Based on what I know about users looking for sites in the marketplace, I would bet an Egg Nog Latte that it was the requirement for the Social Security Administration's data center project that has gone sideways.<br />
<br />
The Office of the Inspector General (OIG) issued a <a href="http://www.datacenterdynamics.com/focus/archive/2010/04/audit-us-social-security-administrations-data-center-site-selection-flawed">report back in April of 2010</a> about the criteria and process that put a smile on my face:<br />
<br />
<blockquote>"While the reviewers concluded that the agency had developed a "highly sophisticated" list of site-selection criteria for the project expected to cost more than $800 million, they found the process for narrowing site properties down to a short list to have been problematic.<br />
<br />
Mandatory selection criteria the agency developed could also have excluded too many locations.<br />
<br />
"In particular, when developing the mandatory selection criteria, it does not appear that consideration was given to the serious fiscal impact that exclusions would have in the electrical power cost arena over the life cycle of the data center," summary of the inspector general's report read.<br />
Information was also limited in evaluation of telecommunications criteria.<br />
<br />
It was not clear from the report's summary whether one or several location had been identified. The agency did not release the document in its entirety.</blockquote><br />
Why is this humorous to me? <br />
<br />
The things that were excluded - the cost of power and the telecom infrastructure - were cited. Hello? The two most important criteria for a selection were overlooked? Wow. Who was doing the selecting? Was the 'highly sophisticated' criteria designed to mask breathtaking incompetence?<br />
<br />
As a result the requirement (if it is the requirement) is back on the street for round 2. If you are involved with the site selection process for the SSA (or any other organization) I am happy to give you my site selection criteria checklist. It's simple, I have used it myself for years, and it works. Email me - <a href="mailto:bytegrid@gmail.com">bytegrid@gmail.com</a> and I will send you a copy. Free. Yes free. I look at it as an investment of goodwill to keep my taxes down paying for a project that is big, expensive, and uses MY money to pay for it.<br />
<br />
The other interesting thing in the requirement, which I laughed out loud over, was that economic incentives were weighted as high as power and telecom infrastructure in the selection criteria. Folks, economic incentives go away after a few years. Data centers stay around for 15-30. Incentives help in the short term, but how many elected officials stay in office 15-30 years? That is a lot of election cycles and political agendas to factor in.<br />
<br />
There is also the mandate from Vivek Kundra issued in February of 2010 that they will need to factor into their decision. This entails highly efficient, green-powered facilities flexible enough to handle the IT changes in virtualization, the inclusion of legacy applications and hardware, and CIO agendas. It's a tall order; however, it's so damn simple when you focus on the lowest common denominators - the same ones they overlooked in the first requirement.Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-1303726713094681630.post-38189782920842791392011-01-05T10:26:00.000-08:002011-01-06T10:59:29.035-08:00The Modular Data Center Choices - half baked?Anyone who knows me or follows this blog at all knows that I have been a strong proponent of modular solutions for years - two years in fact since I first <a href="http://virtualizationstuff.blogspot.com/2009/01/600-trillion-bytes-per-box-roi-of-pod.html">blogged about container ROI</a> on 1/8/09 using the IRS as the example. Since then many solutions have been designed, some deployed, and all of them rarely discussed. Still scratching my head on that one - I would want to do business with a company that is delivering the most cost-effective and carbon-footprint-lowering solutions available. <br />
<br />
Today, there was an announcement over at Reuters about another <a href="http://www.reuters.com/article/idUS82811625520110104">Data Center Anywhere offering</a> from <a href="http://www.telehouse.com/">Telehouse</a>, which we can add to <a href="http://www.leetechnologies.com/">Lee Technologies</a>, <a href="http://www.iodatacenters.com/">i/o Data Centers</a>, <a href="http://www.bladeroom.com/">BladeRoom</a> and others. I have looked at all of them, talked with their management teams, reviewed deployment documents, seen costs, proposals - the whole nine yards on these 'solutions'. You know what these data center anywhere solutions are? Half baked. If that. <br />
<br />
The smart companies (Lee, BladeRoom, HP in my opinion) have figured out that while they may not be in the real estate development business they work with those who are.<br />
<br />
Without exception, each DCA 'solution' has a 'Batteries Not Included' clause. In other words - they are as awesome as a Ferrari with no fuel system. It absolutely STUNS me that in this day and age of partners and specialization of product and service, with the modern data center business approaching its 30th birthday, no one has put ALL of the pieces together into a TRUE solution. A true solution is a cup of coffee, not the possibility of opening your own Starbucks.<br />
<br />
I hate people who bitch and don't provide solutions, so here is my two cents on a TRUE solution:<br />
<br />
Modular manufacturers need to find and work with owner operators of data centers to find and make ready sites that are 'modular ready'. After all, a modular solution is still a data center with the same lowest common denominators - power & fiber.<br />
<br />
I know what you're thinking - <br />
<br />
That data center companies are reluctant to embrace modular solutions (they have inefficient buildings to fill)<br />
<br />
'I am a manufacturer, not a real estate guy' (you aren't selling many units without someplace with power & fiber to put them, are you?)<br />
<br />
'The market is not mature' (you haven't done your homework)<br />
<br />
So what has happened is that customers are tasked with filling these gaps themselves, and the reason they are coming to you, the manufacturer, is to buy a data center SOLUTION, not a piece of one.<br />
<br />
Most end customers are even less equipped to fill these gaps than the people allegedly in the (modular) data center business, and as a result the solutions haven't been flying off the assembly line. If you want a full solution - ping me - <a href="mailto:bytegrid@gmail.com">bytegrid@gmail.com</a> - I have done my homework and will get you your cup of coffee...Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-1303726713094681630.post-146297987066246402010-12-20T13:35:00.000-08:002010-12-20T13:35:10.618-08:00Site Selection - A Case Study?I received a call from a friend of mine earlier today who asked if I had seen the data center requirement posted for the CIA. I had heard about it and got to thinking about what they would do to kick off the site selection process. Then I realized that if the requirements were posted, they must have done a lot of homework already. You would think, anyway. So I thought I would brainstorm here on my blog, take you through my thought process, and then see how much of what I think about is in the posted requirements.<br />
<br />
I will qualify this blog post with a disclaimer - the only specs I know are that they want a 200,000 square foot facility built out in 40,000 square foot chunks/phases. I have not read any document or article related to it.<br />
<br />
So when I look at the requirement as I understand it - my high level criteria would be:<br />
<br />
1. Available inexpensive power, ideally with a green power source that is off grid<br />
2. Available network connections to Government TIC (Trusted Internet Connection) sites<br />
3. Proximity to US military bases to ensure that staff can get to a facility if needed<br />
4. Risk profile for natural disasters, man-made disasters (civil unrest, planes into buildings, etc.), the financial condition of candidate states, geologic topography, and political risk.<br />
<br />
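One way to keep criteria like these honest - and to avoid the SSA's mistake of letting 'mandatory' criteria silently exclude the cheapest power - is to score candidate sites against explicit weights instead of hard pass/fail gates. The weights and sites below are entirely hypothetical, just to show the mechanics:

```python
# Hypothetical weighted scoring for the four criteria above (scores 1-10, weights sum to 1).
weights = {"power": 0.35, "network": 0.30, "base_proximity": 0.15, "risk": 0.20}

sites = {
    "Site A": {"power": 9, "network": 6, "base_proximity": 8, "risk": 7},
    "Site B": {"power": 6, "network": 9, "base_proximity": 5, "risk": 8},
}

for name, scores in sites.items():
    total = sum(weights[c] * scores[c] for c in weights)  # weighted composite score
    print(f"{name}: {total:.2f}")
```

The point of the exercise is that every criterion stays visible in the final number: a site weak on power pays for it in the score rather than quietly surviving because power cost was never on the list.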
So for #1, the availability of cheap power, with preference for an off-grid green power source, is in the top slot for a reason. A data center's number one expense is power, and data centers are typically operated for 15+ years. Virtualization, while reducing floor space, actually increases density and power draw as servers get more powerful, and the power needs to be 'green' per the mandate from Vivek Kundra, the CIO for the United States.<br />
<br />
To my knowledge there are two sites that COULD satisfy this requirement today - but it would take a signed contract to mobilize the funds and people to construct the power systems, and one would get knocked out of the running because of proximity to DC proper. A box of anthrax or a suitcase dirty bomb with nuclear waste within 40 miles would make it 'inoperable', at least on the surface. So this isn't a data center requirement; it's a power plant with one customer - a data center.<br />
<br />
On to #2, which deals with network connectivity - not just in the general sense, but specific to a TIC site. There are 100 of them in the US, so that limits things too if it's a dealbreaker - and it should be. Data needs to flow into and out of the facility to provide credible intelligence to our Government and to Governments friendly with the United States. Since we aren't talking DSL pipes, these need to be 100Gbps pipes or better. Redundant, too. This will be expensive since there is not a lot of fiber in the boonies - I know, I live in 'the boonies' (kind of).<br />
<br />
Number 3 is important because in the event of some really bad shit going down on a major scale, people need to get in and out of the facility no matter what. The ability to use runways and other logistics infrastructure is crucial. People can fly to a base and get choppered in, Humvee'd in, or some combination of planes and automobiles. Sorry, trains. There is also the 'able to sleep at night' piece of having jets and Blackhawks able to scramble and be airborne in seconds to sanitize any threat if needed. <br />
<br />
Number 4 should be a given, and arguably #1. When I think about Ashburn, VA, the amount of data that is captured, processed and stored at the end of a runway represents breathtaking oversight in my opinion. Knowing I can get mobile network reception on the approach to Dulles means that people bent on harming the United States and its citizens can do major harm sitting in Verizon's parking lot and pressing send. Katrina got everyone's attention with natural disasters on a major scale, but what about wildfires that close roads, burn telephone poles, and melt the insulation around copper lines? Ice storms that make roads impassable and cause tree branches to cut power and telecommunication lines? Or the earthquake that hits, and while the seismically engineered building hardly feels anything, the 60 miles of conduit housing telecom fiber gets severed by a bridge collapse or by ground-shaking separation of the conduit itself? Topography needs to be factored in as well, for redundant microwave links and for sensors whose data must be captured, analyzed, and used in making educated decisions. <br />
<br />
I added a vector that has not been much of an issue to date but one I think about - the financial condition of a State. I will use California as the example - the State is teetering on bankruptcy if you believe the mainstream media outlets. The issue won't be whether or not the State can afford to keep the power plants operating, but the civil unrest that occurs when people get incredibly pissed off. Mobs like to burn things, flip over cars, and do other things that make no sense to me. Looting happens. If there is no water or electricity, all kinds of crazy things can happen. Guess what? Data centers plan to have water and electricity no matter what, making them a target.<br />
<br />
The point in all of this, is that before you even start touring facilities, virtualizing, seeing who is out there, and putting together requirements based on square feet and phases, you better have done your homework, or you - CIA data center - will be the next disaster to recover from.Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-1303726713094681630.post-16160477392962373652010-12-07T07:05:00.000-08:002010-12-07T07:05:00.815-08:00Pause for new thought streamI was just reading the tweets from the Gartner DC event in Las Vegas and had a random thought to solicit feedback on:<br />
<br />
Is the term managed storage really just data management with a new coat of paint? <br />
<br />
Discuss...Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-1303726713094681630.post-86209498295155781082010-12-06T20:10:00.000-08:002010-12-06T20:10:48.300-08:00Site Selection - Part Two of a Multipart Series - Modular = What changes?First off thanks to all of you who reached out and sent comments on the first part of the series about how Site Selection is often overlooked. Some of the feedback inspired me to stretch this into a multipart series, and so now I will delve into another favorite topic <a href="http://virtualizationstuff.blogspot.com/2009/04/data-center-containers-black-licorice.html">I began writing about in April 2009</a> - modular and containerized options and for the purposes of this entry, how they factor into a site selection.<br />
<br />
Modular data centers from companies like Lee Technologies in the US and BladeRoom in the UK are coming into their own and being deployed at a pretty good clip. I think data center people and investors alike are starting to get why these options are compelling:<br />
<br />
1. Cost. I can deploy a facility in a quarter for half the cost. That means I get what I want - whether it's density, footprint, speed of capacity, or operational efficiency - for a lot less than a traditional data center suite. As an owner operator it means I don't overbuild; I don't pay for a 100,000 square foot facility to be built and then wait until ten 10,000 square foot rooms are sold. Cha-ching all around.<br />
<br />
2. Speed. I can get them deployed quickly - usually in less time than it takes my broker to fly in a team to look at several facilities in several markets and tell me 'Stop me when you see something you like'. Thousands of cores, petabytes of space in 3 months. Me likey.<br />
<br />
3. Consistency of product. Can anyone point me to a data center company that has the exact same layout, generator brand, UPS equipment, switchgear, or masonry in all of its facilities? Me either. Imagine the money and time saved supporting the same make and model of data center and its components. Modular solutions are the Southwest Airlines of the data center business. Southwest only flies Boeing 737s. Why? Because any pilot and flight crew can work on any plane in the fleet. Makes sense for data centers as well.<br />
<br />
4. Simplicity. If I can get the same data center solution in multiple places from a single vendor, with a single contract and the same terms and conditions, why wouldn't I make that my first choice? I have experienced first hand getting a phone call from a customer whose main reason for calling was that we already had a contract with them, they needed space fast, and they didn't have six weeks to hammer out a different contract with a competitor.<br />
<br />
5. Logistics. How do you find an insured and bonded mover? There are millions of dollars invested in the cargo, so you want to choose wisely. You will also need to pay attention to overpass/underpass heights, since your solution will likely arrive by truck. And you will want to have a real address - many deliveries will be handled by out-of-state, non-local firms, so directions referencing old barns or favorite fishing holes won't cut it.<br />
<br />
6. <span lang="EN">Medium-voltage electricians. Container/modular solutions by and large require more medium-voltage electricians than low-voltage ones, so having local contractors with those skill sets is a factor (Chicago has thousands; some remote areas may have two). You will also want to factor in the unions and whether you are in a right-to-work or union state.</span><br />
<br />
So all of this is great, but how does site selection tie into all of this?<br />
<br />
Well, modular solutions change the model both for data center companies and for companies looking to go modular.<br />
<br />
For the data center company, it trips them up. Why? If you just paid $100M for a big 100,000 square foot building with no land and 5MW of power, a container or modular solution could gobble up that power footprint inside your building in a heartbeat, leaving you with 90,000 square feet of expensive building you can't do anything with until you spend millions more and wait 18 months to get more power. Or turn it into a racquetball court for employees, I suppose. It impacts site selection for owner operators because they no longer have to find big buildings to turn into data centers, or build big buildings to chop up into computer rooms.<br />
<br />
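The stranded-floor-space math behind that $100M / 100,000 square foot / 5MW example is worth spelling out. One assumption on my part: a modular deployment density of roughly 500W per square foot, which is on the high end but plausible for dense containers - plug in your own figure:

```python
# How fast modular density strands a traditional building's floor space.
building_sqft = 100_000
available_power_w = 5_000_000        # 5 MW utility feed
modular_density_w_per_sqft = 500     # assumed high-density modular figure

sqft_to_exhaust_power = available_power_w / modular_density_w_per_sqft
stranded_sqft = building_sqft - sqft_to_exhaust_power
print(f"Power exhausted in {sqft_to_exhaust_power:,.0f} sq ft; "
      f"{stranded_sqft:,.0f} sq ft stranded until more power arrives")
```

At that density the entire 5MW is consumed in just 10,000 square feet, stranding the other 90,000 square feet of very expensive shell - which is exactly why power, not floor space, is the asset that matters.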
It also opens the door to a smarter model: one where they buy land, do inexpensive improvements, and pour concrete pads while working with the client to finish the design of the facility. The data center is built, tested, and shipped to the site, where it is assembled in a couple of days, commissioned, and ready for gear. The biggest issue is that no vendor has emerged as a modular-centric data center operator. X/O is the closest in supporting containers, but that's all I know of, and the solution is far from complete. Today's owner operators will need to amortize their real estate and free up cash to make the switch and augment what they are doing, but that is a seismic shift in thinking, operations, and development that I think a modular-centric company would do better.<br />
<br />
For end customers/tenants the site selection has been an issue for containers and modular solutions as well. Do we have land? Is it zoned? Is there power? Is it reliable? Is it in flight paths? Could a Waste Management dumpster truck get confused and pull our container away? How much is a generator? How many do we need? Can we even put diesel on site in a tank? Who will design the modular solution? What's a good UPS? Who understands power distribution and building codes? Where do we plug it in? Does it have a plug?<br />
<br />
These are questions I have fielded or helped answer over the past two years. Many are funny today, but all were legit.<br />
<br />
Many companies who build and run their own facilities do have the people to figure it out, and Lee Technologies and BladeRoom both have designers to help get it right. The issue is still - what makes a good site? Send an email to me at <a href="mailto:bytegrid@gmail.com">bytegrid@gmail.com</a> and I will send you my site selection guide which lays out all the questions you'll want to ask no matter which way you go.<br />
<br />
Special thanks to Steve Manos at Lee Technologies for his contribution in this blog post. Steve can be reached at smanos@leetechnologies.comUnknownnoreply@blogger.com0