Lawyer Bait

The views expressed herein are solely the author's personal views and opinions, and do not represent those of anyone else - person or organization.

Thursday, December 31, 2009

Buh Bye 2009

If your year was like mine, then you are glad to see it in the rear view mirror. As for 2010, are we there yet? Well I guess I have a few hours...

I am not going to pick scabs and do a year in review. The past is just that. What I will throw out there are my wishes and hopes for 2010, as it relates to technology that I care about.

I hope that the noise about the cloud fades and that companies realize the cloud is a lot like the point of the Wizard of Oz - we had *it* all along, we just didn't know it.

I hope that companies start broadly consuming technology again as innovations come to market or proliferate. Backups, DR - the stuff that never seems important until you need it.

I hope that more companies embrace containerized data centers in 2010. You can't beat them for so many reasons, and anyone who says differently isn't drinking their own champagne - especially if that champagne is made with the grapes of efficiency, density, optimized computing, or anything green.

I hope that the vision of transparency in Government data continues to proliferate so we get a sense, a real sense, of how messed up things are and more importantly, the fire in the belly to make deliberate changes in technology and process to regain a sense of what is right in the world.

Predictions:

There will be at least two high-profile data breaches in the US in the first 6 months, and they will have global implications. Those who want to harm us in the US aren't taking 2010 off.

Cloud computing will get more secure, but will not be adopted en masse by private companies. They will continue to want a private cloud, which in its purest form is just a new billing and accounting system for underutilized IT stuff.

The laws of supply and demand will keep squeezing the data center inventory of space and power: prices will rise, available inventory will stay scarce, and those holding inventory will reap the rewards for the next 24-36 months.

There will be a major shift in where new data center inventory is deployed. It will shift to inexpensive power markets as deployments get denser and metered cost per kW becomes one of the top two decision points for where to put gear.
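To make that concrete, here is a back-of-the-envelope sketch of why metered power dominates the siting math; the rates, load, and PUE below are hypothetical, purely for illustration.

```python
# Back-of-the-envelope: why cost per kW drives siting decisions.
# All rates, the load, and the PUE below are hypothetical.

HOURS_PER_YEAR = 8760

def annual_power_cost(it_load_kw, rate_per_kwh, pue):
    """Yearly utility bill for a given IT load, rate, and facility PUE."""
    return it_load_kw * HOURS_PER_YEAR * rate_per_kwh * pue

dense_deployment_kw = 1000  # a 1 MW IT load

for market, rate in [("cheap-power market", 0.05), ("expensive market", 0.15)]:
    cost = annual_power_cost(dense_deployment_kw, rate, pue=1.5)
    print(f"{market}: ${cost:,.0f}/year")
# cheap-power market: $657,000/year
# expensive market: $1,971,000/year
```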

To my friends - I predict that 2010 will usher in stronger friendships, better opportunities, and reaffirm what I already know - that those I call 'friend' are clearly the best on the planet, and I am honored and humbled to know you.

Let's roll....

Tuesday, December 15, 2009

Verari Systems - The next Phoenix?

I got a bunch of calls yesterday about Verari Systems' announcement that they were out of business. Long story short - they are, and they will be back.

The story is that the lead investor turned down the latest term sheet and in so doing sealed the company's fate, sending it into what amounts to Chapter 11 (technically it is an ABC - an Assignment for the Benefit of Creditors). So instead of protecting the investment, they blinked first and got a goose egg instead.

From the conversations I had, it sounds like the company was not managed well and as a result, they tanked. What is also apparent is that there is still a backlog of business, still solid technology in place, as well as a plan to focus on higher-margin business and refocus efforts with a slimmed-down team ready to execute without the albatross of former bad decisions getting in the way.

This is one company that I believe will rise from the ashes.

Saturday, December 12, 2009

R.I.P. Verari Systems...

The site is still up, but that's about it. Verari Systems is headed into liquidation early in 2010. If you need a container play, submit a bid. I think jumping into the server/storage game may not have been such a hot idea for them, but whoever picks them up for their container technology will get a screaming deal.

Monday, December 7, 2009

I thought a niche was a small market...

Containerized data centers remain niche players according to IBM...

I got the heads-up on this article from my Google Alerts, and it was a head-scratcher. IBM, who makes containers, was saying that containerized data centers are still a niche play. Steve Sams, VP of site and facilities for IBM, said containerized data centers are not for everyone and won't be for a long time.

"Most major companies conclude that data center containers are not for them, and I think they're right," he said. "I never think it's going to be a huge market."

Is IBM trying to sell these, or is this a ploy a la Sun Microsystems to sell more servers in a bigger box?

He goes on to say - "Sams said that in general, he doesn't think containers are 'the best way to build 320 square feet of data center.'"

"Ninety-nine percent of our customers are never going to install thousands of servers at a time," he said.

Someone in marketing at IBM may beg to differ. I know I do.

Why would an organization NOT want to spend less money on delivering its IT resources? I can't name one company I have spoken to in the past 18 months that says 'We don't care what it costs, we'll do things the most expensive and inefficient way possible.'

The biggest issue I see with container adoption is that there are few places to plug and play. Microsoft, Google, and Amazon are single-tenant operations with custom-built facilities to support their compute and storage infrastructure. Ninety-nine percent of companies who buy IT gear don't have their budgets or their expertise to do this on their own.

Couple the budget issue with the fact that traditional data center operators do not have the capital to build on spec. When container manufacturers tout that customers can have 4,000 square feet of data center deployable - including servers, OS, and racks - in 8 weeks, data center operators can't build out their infrastructure fast enough to give container buyers a place to land them.

So a company must then navigate facilities issues, zoning and permitting, siting, redundancy - in short, it's a new data center project for their data center in a box.

I personally believe that when there is a vendor-agnostic place to plug them in, data center container sales will increase. Who in their right mind wouldn't want a PUE of less than 1.3, breathtaking capacity ordered, installed, tested, and delivered in 8 weeks, a quickly depreciable asset, and one that can be leased 100% with tech refreshes built in?

IBM must have quite a list if only 1% of their customers think that a greener, more efficient, highly dense, and fiscally responsible solution is the way to go.

Thursday, November 12, 2009

The HP-3Com deal and what it reinforces...

The HP and 3Com deal reinforces something Scott McNealy said 10 years ago - 'The Network is the Computer' - although with today's hype cycles I would replace 'computer' with 'cloud'.

I am working on a series that will be at least two parts, maybe three, on the single most overlooked piece of the cloud - and it's not Security.

Long story short the HP/3Com deal is about the network. Say what you will, HP is moving into the network hardware business and Cisco is moving into the computer hardware business.

My question is does Cisco buy Blade Network Technologies or Verari first?

Time will tell...

Wednesday, October 14, 2009

Move along, nothing to see here...

It is being reported that the Sidekick data has been recovered/restored, but I have yet to see a forensic breakdown of what happened to get it back.

I was perusing a piece this morning at SYS-CON's site - http://wireless.sys-con.com/node/1143427 - that discusses the 'lessons learned'.

#1: The cloud is not redundant. Um, duh. As I pointed out in my last post, you cannot expect technology to do something it is not aware of (like backups, setting restore points, etc.). Assuming that the cloud is anything but computing and storage resources made available to rent is foolish. You still need to design how the cloud will work for you.

#2: Know your vendor. I don't know about you, but when I think mobile phone for $100, I am not thinking about the systems used to support the phone, the network, or anything else. I expect that I will be able to use the device and the network in a reliable fashion. I do not expect my vendors to cover my ass when it comes to backing up stuff that is important. If it's important, I back it up. If not, I am prepared to lose it, even if I lose it because of laziness.

Anyway, you get the picture. The cloud is not the Government; it will not magically develop processes and solutions to make up for our lack of discipline, awareness, or any other character fault.

Take responsibility, and if it's important, make a copy. If it's really important, make two copies...

Monday, October 12, 2009

Danger - Sidekick data gone...

Microsoft stresses that it wasn't its own technology to blame in the Sidekick data loss, but rather Danger's technology, which the Redmond company inherited when it acquired Danger in 2008 for $500 million.

However, the embarrassment for Microsoft comes as there is no apparent backup of Sidekick users' data, according to a report from HipTop3. It is also unclear whether Microsoft will be able to recover any of the lost customers' data.

But the unfortunate coincidence is Microsoft's launch of Windows Mobile 6.5 devices last week, which in association with this weekend's Sidekick data loss could translate into reduced trust from potential Windows Phone buyers.

*******
This event got me thinking - Is the Cloud only as strong as its weakest link?

In this day and age of cloud computing - the value of data, who owns it, what we give up as users, backups, failover, etc., etc. - it blows my mind that this stuff happens. It kind of reminds me of a time several years ago when data breaches were making headlines, and the rush was on for DR solutions. And security solutions. And privacy solutions. And the list goes on. And here we are. Again.

Whether or not the Sidekick user data was in a cloud, if it moves to the cloud 100%, does this issue go away? What do we gain? What do we lose?

Even with the promise of cloud computing and the movement (and, one could assume, copying) of data in multiple places, what happens when something fails? Do we fail like we are falling down an escalator, or down an elevator shaft? (Jim Heaton quote)

Is it data? Is it process? Where is the root cause? Sounds like in this case it was 'technology'. Not sure what that means, but I will bet that the corrective action is now a funded project that was once an afterthought and maybe an outright gamble.

In the spirit of full disclosure, I am a T-Mobile customer (and have been for 6 years) and I use a Blackberry. I keep some data backed up in multiple places, some data not at all. Makes me wonder if I have calibrated the value of my data correctly. Have we all? Hmmmm.

Tuesday, September 22, 2009

'We are using cloud' = WTF does that mean?



I finally had an hour to catch up on some reading and it struck me that cloud computing - on the surface - hadn't changed much. People are still talking about cloud strategy, cloud rollouts, 'virtualization is the vapor of the cloud,' etc., etc., etc.

I still ask the question - So What?

Cloud is a concept that is desperately trying to find a product category and an accounting method to charge for it. When I hear something to the effect of 'We are rolling out a cloud computing platform' or 'We are adopting cloud computing,' I can't help but think - So What? I have a huge crate of Lego blocks. Same difference.

Legos can be used to build really cool things, from the incredibly complex to a simple collection of blocks forming a cube. Same with cloud computing. The cloud is a bunch of stuff in a data center. You can roll out something incredibly complex and proprietary, or a simple massive storage application to put bytes into. Your call.

I recently had the privilege of working with NASA to deploy a cloud stack. So what? Great question!

NASA wanted a simple, ubiquitous, adoptable stack of applications for two things - raw compute and storage. That's it - two specific problems addressed in one stack of apps made available to the organization. It is named NEBULA.

The interesting thing to me with NEBULA is that, by virtue of the way the stack was knitted together, they get three benefits out of one stack:

Infrastructure as a Service: Quite simply, IaaS is the delivery of computer infrastructure as a service. Instead of buying servers, software, data center space or network equipment, clients purchase necessary resources as a fully outsourced service. Nebula users can think of this as an evolution of web hosting and virtual private server offerings.

Platform as a Service: Nebula’s PaaS functionality facilitates the deployment of applications without the cost and complexity of buying and managing numerous hardware and software layers. Services are provisioned as an integrated solution over the web. No software downloads or installations are necessary to realize full computing capabilities.

Software as a Service: The SaaS functionality of the Nebula Cloud includes typical moderation workflows, terms of service, and several levels of basic policy compliance, security, and software assurance. Users desiring to utilize the underlying Nebula components directly will be required to pass the necessary security reviews, content reviews and legal certifications themselves.



In my next blog post, I will cover the issues that arise when it's successful...

Wednesday, September 2, 2009

Containers/PODS/Data Center in a box - the ultimate hedge?

There has been a ton of stuff going on in the world of technology this summer and I thought it was about time I shared the sum of my discussions this summer and see what other folks are thinking about.

I just spoke to a good friend of mine who is headed to work at Stratus Technologies - the ultra high availability server company and we were chatting about the electricity issues companies are faced with as they virtualize their infrastructure.

Cabinet draws increase when you virtualize - especially with blades - which means electricity use increases as well. If you use and need more of anything than you have, you will have a shortage unless you get more. Duh. You can have a 100,000 square foot facility, but if it has 5 MW available and your new virtualized design requires 10 MW, it doesn't matter that you can now fit the entire data center into 100 cabinets - the size of the glass doesn't matter, my friend; when you take the last sip, you take the last sip.
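Here is a minimal sketch of that "last sip" problem, using only the numbers from the scenario above - floor space stops being the constraint once virtualized densities climb:

```python
# The "last sip" problem: a virtualized design that fits the floor
# but not the power feed. Figures come from the scenario above.

facility_power_mw = 5.0        # what the building can deliver
cabinets = 100                 # the whole data center, post-virtualization
draw_per_cabinet_kw = 100.0    # a 10 MW design spread over 100 cabinets

required_mw = cabinets * draw_per_cabinet_kw / 1000
print(f"Required: {required_mw:.0f} MW, available: {facility_power_mw:.0f} MW")
if required_mw > facility_power_mw:
    print("Out of power long before you are out of floor space.")
```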

Other topics that have been fun to watch...

The Waxman-Markey bill, HR 2454, got passed in the House and is up for debate in the Senate next session. It talks about taxing the carbon footprints of companies. I am sure taxing the carbon profiles of humans is around the corner - especially when they see the tab for Social Security, but I digress...

The Waxman-Markey bill is 1,400 pages. I have read a fraction of it. It lays out a cap and trade program whereby companies get an allocation of carbon they can emit; anything over that, they pay a HUGE tax on, or they go out to a soon-to-be-created public market (Goldman Sachs is the lead developer) and buy other companies' carbon surplus.

So if I am a data center operator and I like coal- and diesel-generated electricity because it is reliable and cheap but dirtier than a mud wrestling prostitute, then I will get a hefty tax bill because I emit more carbon than the government thinks I should. OR, if I can find a data center running on organic, wind, methane, or gerbil-wheel-generated electricity that emits a tablespoon of carbon each year, I can cut a deal with them to buy their surplus carbon allowance and keep right on chugging with my diesel and coal.
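For what it's worth, here is a rough sketch of that compliance decision - pay the tax on the overage or buy someone else's surplus, whichever is cheaper. All prices and tonnages are made up for illustration.

```python
# Cap and trade in one function: cover emissions above your free
# allocation at the cheaper of the tax rate or the credit price.
# All figures are hypothetical.

def compliance_cost(emissions_tons, allocation_tons, tax_per_ton, credit_price_per_ton):
    """Cheapest cost to cover emissions above the free allocation."""
    overage = max(0.0, emissions_tons - allocation_tons)
    return overage * min(tax_per_ton, credit_price_per_ton)

# A coal-and-diesel operator, 50,000 tons over its cap:
cost = compliance_cost(
    emissions_tons=150_000,
    allocation_tons=100_000,
    tax_per_ton=100.0,           # the HUGE tax
    credit_price_per_ton=30.0,   # the gerbil-wheel data center's surplus
)
print(f"Cheapest compliance path: ${cost:,.0f}")  # buy credits, keep chugging
```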

This cap and trade scheme, like many other government programs, is rearranging deck chairs on the Titanic - there is no real change to the outcome, but we can feel good about appearing to have addressed the problem.

The other side of this is a pure cap program. Reduce your carbon footprint, period. No trading credits/surplus/etc. - reduce your emissions or we will tax you out of business. This has legs, in my opinion; it speaks to good business, will drive innovation, and will help the East Coast and Europeans have cleaner air. If we can get China and India to help out - sweet!

The other discussion I had more than a few times was around data center inventory and the lack of it. The shortage is leading to higher prices in a down economy; there is little inventory in the wholesale space - 20,000 square feet and up - and tightness will be felt over the next few years until debt markets open up, financing starts flowing for new projects, and space becomes available.

The last discussion area that has heated up even more in the past 60 days has been containers. There is a LOT more activity out there as people come to realize how slick this solution is. Containers are a hedge against a carbon tax (efficiency goes up with power consumption), and you can get them quickly - so when budgets open up, you can wait to buy as long as possible and still get another few dozen petabytes of storage or a few thousand cores up and running in 2 months. And if you plug them into green power and grey water - well, now you're talking extra credit.

The one thing that I am watching closely is based on a question I heard on a recent call - 'Why are these container deployments fire drills every time?' - to which I responded, 'Because they can be deployed faster than a traditional data center EVERY time. Customers know this; it's the data center companies that need to step it up and have inventory ready, or the process in place to deploy it more quickly than they are used to.'

There is the issue of zoning too - cities and towns don't know what to make of the containers, especially the ones outside. They see them as 'occupied temporary structures'. To me that definition fits a refrigerator box with a schizophrenic uncle and his three imaginary friends in it, not a standard container of which there are several thousand in Newark and Long Beach - aside from the fact that they are 'occupied' maybe 2 days a year, if that.

Anyway, containers are getting much more street cred. They are the ultimate hedge against taxes, with portability and density built in, plus the upside of efficiency gains and fast depreciation.

Tuesday, August 25, 2009

Wow. I just stopped my BlackBerry-while-driving antics

VERY graphic video of a texting while driving PSA from the UK.

It got my attention and I hope it gets yours too. As busy as I think I am, I am not this busy...

Friday, July 24, 2009

The Politics of Carbon

I have read two articles this week that point to the same thing from different angles - The Politics of Carbon.

The first article I read - The Green Gotcha - was written by Michael Bullock for his blog at CIO.com. In this article he lays out something that I believe to be true -

The US Government will start taxing organizations that do not contribute to reducing carbon emissions in their data centers. It is called cap and trade. What will likely happen is that data center operators will be taxed on their emissions. It is anyone's guess how this will happen, but rest assured the US Government needs revenue and this is low-hanging fruit. It is feel-good legislation: we can encourage/force businesses to be green, and if they choose not to be, we'll make money off of them in the form of tax revenue.

It also targets a big business segment that is vital to worldwide communications, commerce, and national security. It is also a business that is capital intensive, meaning it is not easy to just change operational models and equipment and go green - the stuff that makes a data center run is expensive, and in the capital markets of today it won't be easy for a company to just go green, or for utilities to simply stop producing electricity generated by coal to serve their customer bases.

So what I believe will happen is that data center operators and their customers will need to adapt and 'go green'. This can take the form of utilizing wind power and/or striving for the lowest PUE available to mitigate a tax liability that will no doubt be substantial.

Those that can't and don't will be hit with tax burdens that will no doubt be painful - at least in the short term - because at the end of the day it's customers that will foot the bill in one way or another.

Michael Manos has echoed what has been said for months - that we as an industry need to have a say in how this cap and trade will go into effect. Legislators who know nothing about data centers, electric utilities, and the way the business of data centers works will wind up telling the entire utility and IT community how we ought to be doing things.

The second article - in the Ethiopian Review - says:

The federal government is very unlikely to issue strict green regulations related to data centers... The current administration is very technology-savvy — after all, the current Secretary of Energy Steven Chu was recently the director of the Lawrence Berkeley National Laboratory, whose work was heavily dependent on its data center. Chu did some great work related to Green IT when at the labs. He knows what can and can't be done — and will make sure that data centers aren't hamstrung with unnecessary regulation.

If there is tax revenue potential in these regulations (and there is, or the Government couldn't care less), then you had better believe they will hamstring anyone who can put money into the coffers. Besides, the LBNL is funded by the Department of Energy - that takes money. Money that will no doubt come from data center operators (and others).

I, for one, am not for someone who knows nothing about how an industry works (legislators) telling me how it ought to be run. Well, unless the data center industry and the companies who build and maintain the internet want to be the next GM, Citi, or _______________. That worked out pretty well. NOT!

We do need to form a cohesive trade group and ultimately a lobby that sets up a PAC or two to ensure our interests are protected. If we do not do this as an industry, our Senators and Congressmen will be running data centers.

I don't think that's such a good idea. Do you?

Monday, July 6, 2009

Virtualization = Data Center Efficiency? Not so fast...

There is still a fair amount of buzz out there regarding virtualization and its contribution to the efficiency and greening of a data center. There seems to be so much buzz (aka noise) that one could think - Wow, if I virtualize, I get better PUEs, and I am green too! Awesome!

Well, kinda...

When you get rolling on the Virtualization bandwagon, a few things start to happen:

1. Your footprint shrinks, freeing up space and cabinets
2. You now have something cool and useful to figure out
3. You can maximize densities of servers/cores like never before
4. You can actually deploy a DR environment without doubling size and physical deployment

There are a few other things that you'll realize too:

1. Your power draw almost doubles - cha ching!
2. Many data center companies don't like the high density cabinets
3. You still have to patch VMs - smaller footprint doesn't mean less work
4. Unless your power is green, you haven't done much to put a dent in the coal-fired electricity plant you get electricity from.
5. If you draw less power, you generate less heat, and the data center is LESS efficient in this state - facility overhead is largely fixed, so PUE gets worse (see the sketch below)

Be aware, be knowledgeable. Virtualize, but make sure you set correct expectations, because while those of us in technology understand the benefits of virtualizing, the CFO looking at an electric bill that doubled may take some extra time to get it...
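For the skeptics, here is a minimal sketch of point 5 above - why PUE looks worse when virtualization shrinks the IT load. The overhead split is assumed, not measured:

```python
# Why a lightly loaded room posts a worse PUE: chillers, fans, and UPS
# losses are largely fixed, so a smaller IT load inflates the ratio.
# The overhead figures below are hypothetical.

def pue(it_load_kw, fixed_overhead_kw=400.0, variable_overhead_ratio=0.2):
    """PUE = total facility power / IT power."""
    total_kw = it_load_kw + fixed_overhead_kw + variable_overhead_ratio * it_load_kw
    return total_kw / it_load_kw

print(f"Before virtualization (2 MW IT load): PUE = {pue(2000):.2f}")  # 1.40
print(f"After consolidation (800 kW IT load): PUE = {pue(800):.2f}")   # 1.70
```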

Wednesday, July 1, 2009

New project I am involved with...

This is about the Good Men Project. It's very different from the beers and boobs stuff that's out there. I can only hope it becomes as popular as Postsecret...

http://www.goodmenbook.org/about-the-book.html

The Good Men Project is an anthology of essays about what it means to be a man in America today. All proceeds from the book will benefit The Good Men Foundation, a charitable organization founded to support men and boys at risk.

Monday, June 29, 2009

Cloud Computing = Accounting Exercise?

Dude. Seriously.

With all of the talk about Cloud Computing, and where it is headed and what it is evolving into, let's not get too far ahead of ourselves.

When companies wanted to get out of paying the capital expense of their computing hardware, software, and storage arrays, etc. they outsourced it to companies like Savvis, Terremark, and Navisite where they pay one monthly nut for new hardware, software, and 24x7x365 support by people who know what they're doing.

When I look at cloud, it's the same thing on a micro level...

'I need disk, memory, and CPU cycles and I need to rent them. I need them to be elastic, on demand, and automatically provisioned.' Isn't it managed services with an hour by hour/day by day/week by week contract? All the components are cash bar vs. open bar?

Is this just an accounting exercise?

Monday, June 15, 2009

Did you announce you're building a data center too?

There have been several announcements recently about new data centers being built. The mean cost I can figure from them is $2,500 per square foot.

So that means Apple's announcement pegs the North Carolina site at 400,000 square feet for the $1B price tag. I am glad I am not an Apple investor, or I would be flipping out at a company that is not a data center builder, manager, or operator spending my money on something that is not a core competency. Especially $1B of my money. Who is running Apple, Obama?

The Yahoo announcement for Lockport, NY and their 60,000 sq ft planned build was interesting too. A $150M project, and it creates 75 jobs. That's $2M per job created. Again, is the Government making the decisions here? At $2M a job, I want to run the job.

Cisco is going to build a 160K square foot building in Allen, Texas. That will come out to $40M. I would be a little less angry at Cisco for spending my invested money on a non-core competency, because I could maybe justify it as a really sparkly new kick-ass laboratory for new stuff.

IBM announced a 6,000 square foot facility for $12.4M. A steal at $2K/ft.
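The math behind those gripes is nothing fancy - here it is, using only the figures from the announcements above:

```python
# Cost per square foot (and per job, where announced) for the builds above.
# Apple's square footage is the implied figure at $2,500/sq ft.

announcements = {
    # name: (cost_usd, square_feet, jobs_created or None)
    "Apple, NC (implied)":  (1_000_000_000, 400_000, None),
    "Yahoo, Lockport NY":   (150_000_000, 60_000, 75),
    "Cisco, Allen TX":      (40_000_000, 160_000, None),
    "IBM":                  (12_400_000, 6_000, None),
}

for name, (cost, sqft, jobs) in announcements.items():
    line = f"{name}: ${cost / sqft:,.0f}/sq ft"
    if jobs:
        line += f", ${cost / jobs:,.0f} per job"
    print(line)
```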

It's kind of funny in that I had 4 requests come in days apart about a month ago asking about build vs. buy whitepapers and current examples. Looks like the big boys went with build, and I still don't understand why.

The tough questions I would have asked early in the process:

1. What happens when we don't use all the space? How do we turn a data center from single tenant to multi tenant?
2. What is it that makes us think we can run a data center better than those people who do it for a living 100% of the time?
3. What SLAs will our employees sign when we hire them?
4. Where else could we spend this money? To grow revenue? Buyback stock? Create Jobs?
5. What other EXISTING sites were considered that would have led to less of a carbon footprint, less damage to the environment, and been greener?

If you want tough questions to ask for your project contact me.

The other thing that I noticed is that all of these data centers do NOTHING for the impending crisis of data center space that has been widely reported - except maybe for the companies building them.

And what happens when they use only 30% of the draw they contracted for, but are obligated to pay for the full scale-up whether it happens or not, leaving operating obligations 70% higher than actual usage justifies? How will they justify this to investors?
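A quick sketch of that take-or-pay trap, with hypothetical contract terms:

```python
# Take-or-pay: contract for the full scale-up, draw a fraction of it,
# and pay for all of it anyway. The contract terms are hypothetical.

contracted_kw = 10_000
actual_kw = 3_000             # using only 30% of the contracted draw
rate_per_kw_month = 150.0     # assumed all-in wholesale rate

monthly_obligation = contracted_kw * rate_per_kw_month
unused_share = (contracted_kw - actual_kw) / contracted_kw
print(f"Monthly obligation: ${monthly_obligation:,.0f}")
print(f"{unused_share:.0%} of that bill buys capacity nobody is using")
```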

Believe me, I think data centers are a great investment, and safer than gold right now.

For data center companies.

Thursday, May 28, 2009

Did the Cloud Rain on SiCortex?

http://www.xconomy.com/boston/2009/05/28/sicortex-out-of-cash-powers-down/

SiCortex, a six-year-old startup building energy-efficient supercomputers in Maynard, MA, has shut its doors. The company ran out of working capital and was unable to raise more from its venture investors, according to a report that surfaced yesterday in HPC Wire, a trade publication in the high-performance computing industry.

Xconomy obtained confirmation of the shutdown news from John Mucci, the founding CEO of SiCortex, who was replaced 10 months ago by current CEO Christopher Stone. Stone himself was not immediately available for comment.

“It is a sad day for all… less competition, unemployed seventy some workers…” Mucci said in an e-mail.

SiCortex had raised $42 million in venture capital, including a $21 million Series A round in 2004 contributed by lead investor Polaris Venture Partners and syndicate members Flagship Ventures, JK&B Capital, and Prism Venture Partners. All of those investors returned for the company’s $21 million Series B round in 2006, with the addition of new lead investor Chevron Technology Ventures. Bob Metcalfe, a Polaris partner frequently quoted about SiCortex in the past, said this morning he had no comment about the shutdown.

According to the HPC Wire report, which was based on information from an anonymous source close to the company, most of the company’s employees have been let go, and a sale of the company’s assets is underway.

It's about the power...

http://www.cio-today.com/news/Blade-Servers-Are-Energy-Hogs/story.xhtml?story_id=1230080ZP1RO

This article today finally backed up what I have been blogging about, which is that blades (heavily used in virtualization deployments) are power hogs. So while you just freed up 5,000 sq feet in your data center, you doubled your draw and - D'oh! - you ran out of power and can't cool the shiny new virtualized environment.

The closing quote in the article:

If new blade servers crash because the data center overheats, that's not very efficient, is it?

That is the least of my worries. How to justify a 7- or 8-figure purchase that you can't light up and run key apps on is a bigger problem, no matter how efficient it makes IT.

Glad to see someone else paying attention to this...

Thursday, May 21, 2009

Dell's Fortuna - A web hoster's dream?

I met with Drew last week, and he had alluded to this being available soon. This could be a great way for managed services providers to maximize revenue per square foot.

Wednesday, May 20, 2009

The New Twitter

Are you Green? Are you REC-less?

I have been looking into renewable energy sources for data centers a lot lately. Why? Because it is responsible, necessary, and a way to stimulate the changes that must occur in the nation's electricity grid, which is up to 100 years old in places.



The whole Virtualization movement got me thinking about this a couple of years ago, because I noticed that when my customers deployed blades and other high density solutions to support virtualization a funny thing happened - electricity draw to the cabinets almost doubled.



So the footprint shrinks but the electricity (and cooling) goes up dramatically. So I asked the next question: if I double my energy draw and that energy is not green (wind, biomass, nuclear, etc.), how am I reducing carbon? Hmmmm. Maybe I'm not. Oh, but wait - my utility lets me purchase RECs (Renewable Energy Certificates), so I am all set.
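To put numbers on that question, here is a sketch with an assumed emission factor for a coal-heavy grid; only the "draw doubles" observation comes from the post above.

```python
# Doubling draw on a dirty grid doubles carbon, RECs or no RECs.
# The grid emission factor is an assumed figure for a coal-heavy grid.

GRID_LBS_CO2_PER_KWH = 1.5
HOURS_PER_YEAR = 8760

def annual_co2_tons(cabinet_kw):
    return cabinet_kw * HOURS_PER_YEAR * GRID_LBS_CO2_PER_KWH / 2000

before_kw, after_kw = 4.8, 9.6   # cabinet draw, pre- and post-virtualization
print(f"Before: {annual_co2_tons(before_kw):.1f} tons CO2/yr per cabinet")
print(f"After:  {annual_co2_tons(after_kw):.1f} tons CO2/yr per cabinet")
# The REC purchase changes neither number - the same coal still burns.
```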



Next question - aren't RECs like putting a 'save the earth' sticker on a Hummer? It's feel-good green - not measurable green.



So what is the answer? How do you get truly green? How do you move beyond the 'we print on both sides of the paper' and 'we turn off the lights at the diesel refinery' feel-good green into actual 'I am doing something that is not only measurable but so green it's almost black' when looking at our IT infrastructure?



Stay tuned for more. There is a way, using 'off the shelf' technology and sources of power, to do this. And it will save you roughly $6,000/year per cabinet/rack on electricity alone, let alone the carbon reduction. Can you say FAT bonus?

Wednesday, May 13, 2009

Data Center Inventory - storm on the horizon?

I have had several discussions the past 7 days about data center real estate and I am beginning to wonder if a storm is brewing just over the horizon.

A lot of new construction projects got mothballed or slowed this year because the money we gave to the banks apparently had to go to bonuses first, instead of funding the data center builds that will house the infrastructure they will need when people start making and depositing money again and the banks have to start processing checks to pay back the US Taxpayer. I digress...

In my daily Tier 1 email I have noticed a lot of small data center announcements - 1,500 sq. feet here, 2,000 there - and I say, 'OK, that's something, but what about the 5 MW or 30 MW sites?' I haven't seen one of those announcements in some time.

What gets me is that there are now 12-megapixel cameras on the market, and my guess is by Christmas there'll be a 30-megapixel camera for those of us who want to see past the red eye down to the retinal pattern in the back of the subject's eye. There is also the legislation working its way through the US Government about being more green and ultimately more digitally centric. So what?

That means there is more data being produced. Bigger files that will be moved, copied, stored in multiple locations, shared, viewed, and even printed (great news for loggers). The production of data never rests so the need to store it, manage it, secure it, back it up, share it and consume it grows every second of every day. All of the capacity planning we did 18 months ago evaporates - sometimes slowly, sometimes quickly.

Long story short - I believe there is an impending shortage of data center inventory out there.

Available data center inventory - the multi-megawatt kind. Not the kind in Ashburn, VA, where Digital Realty Trust is digging up ground to expand and will need the utility company to run conduit to the facility before it is lit. Not the kind in Boston, where CRG West has 100K square feet and a dozen megawatts of space and power, but it's I-beams and concrete today and it'll be January before it is finished off. Not the kind of space Dupont Fabros has, whose financials are suspect if not downright shaky, right down to their flywheels, making it hard to attract the customers who worry about financial condition.

Shop smart and buy now - get a sublet clause in the lease as well. This has great opportunity for those of us who get it...

Monday, May 11, 2009

Interesting app for those of us looking to go green...

I just spoke to a friend of mine and we were catching up on life. The conversation turned to what we are doing to be more green (I am a far cry from a tree hugger), and had I seen a cool application at RoofRay.com. I don't think solar has the potential for data centers yet, but for all the computing horsepower at the house, maybe.

Having spent just 5 minutes on the site looking at my house and its solar potential, I was hooked. Take a look, or check out the widget at the bottom of virtualizationstuff. It may get you through your next conference call...

http://www.roofray.com/

Wednesday, April 29, 2009

Data Center Containers - a black licorice product?

I sincerely hope Tier 1 does a Data Center Transformation Summit twice a year. The one I was at yesterday was excellent, and it was topped off by dinner with the rock stars over at Horizon Data Center Solutions in Reston, VA. Then this morning our press release hit the wires, which was great to wake up to.

One theme I saw again and again was that people either 'get' containers or they don't. Chris Crosby over at Digital Realty Trust doesn't like them. Nor does the CTO at Equinix, Dave Pickut. Chris had a presentation that was full of inconsistencies, which I will attribute to DLR being a REIT vs. a data center owner/operator. REITs view the container as competition - it's not real estate, so they're not interested because they can't sell it. Given Mike Manos' move over there and his work deploying containers for Microsoft, I wonder if DLR will shed its REIT status and broaden its offerings. Nah. A REIT is a REIT, and they have a lot of customers who rely on them for space.


Equinix (and others) don't like them, I believe, for two reasons -

1. They house high-density equipment, up to 27 kW per cabinet. The standard is 4.8 kW, which means you need to allocate more floor space for air flow, and that's floor space you can't sell to the 4.8 kW cabinet customers.

2. The data centers they are in can't support them from a deployment perspective. Something new they can't support.

Here is why I believe they cannot ignore the data center container (DCC) market - the efficiency is unprecedented for the draw, the footprint, and the cost. You will save $6,000 per cabinet per year putting it in a container, on the PUE (Power Usage Effectiveness) gains alone. When all you need is a 600 amp feed and a 4" pipe connected to a 65-degree water source, you're done.
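As a sanity check on that $6,000 figure, here is the PUE-only arithmetic with an assumed utility rate, cabinet load, and legacy PUE; only the container PUE echoes the figure cited earlier.

```python
# Rough check of the $6,000/cabinet/year claim from PUE gains alone.
# The rate, cabinet load, and legacy PUE are assumptions.

HOURS_PER_YEAR = 8760
RATE_PER_KWH = 0.10          # assumed utility rate
CABINET_IT_LOAD_KW = 5.0     # assumed average draw per cabinet

def annual_cost(pue):
    return CABINET_IT_LOAD_KW * HOURS_PER_YEAR * RATE_PER_KWH * pue

legacy = annual_cost(pue=2.4)      # an assumed legacy room
container = annual_cost(pue=1.3)   # the container PUE cited earlier
print(f"Savings per cabinet: ${legacy - container:,.0f}/year")
# Savings per cabinet: $4,818/year - the right ballpark for the claim
```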

So what's holding the proliferation back?

Basically, 18 months ago, before the economy really tanked, companies refreshed a lot of servers and other IT gear. They bought a lot of VMware and stood up a lot of new stuff. New stuff that was leased. On a 36-month lease. We are 18 or so months into a lease cycle for a lot of companies, so they have their hands tied and couldn't move to a DCC if they wanted to.

So what I see starting to happen is that companies are giving DCCs a look because bonuses are being tied to reducing carbon emissions and consumption, and to doing more with renewable energy sources. DCCs will give them a healthy bonus. That, tied with being able to lease extreme-density, self-contained gear that goes from PO to plug-in in 10 weeks tops - look out.

So right now the DCCs are a black licorice product - you either love 'em or you hate 'em. I believe more people will be loving them in the next 18-24 months, because it's green ($) for them to do it.

Tuesday, April 28, 2009

Digital Realty Trust Presents

From Chris Crosby:

Power is the commodity of the information age - I agree

If you buy square feet in a data center you will get screwed - buy kW/load - agreed

If you are thinking about functional obsolescence you are thinking about it wrong - ???

'I bet you've never had the FBI raid the cloud' - Duh, only one company has had it happen.

Pitching the modular pod approach - wash, rinse, repeat, build to be the same - that's great if requirements don't change

Containers are not the way to go - 100% wrong on all data points - this will get a separate blog entry - he completely missed the boat.

Covering the Data Center Transformation Summit 2009

I will follow this on twitter as well: http://www.twitter.com/mmacauley

Hot topics:

Selective outsourcing is available yet under utilized
-Wholesale & resale

Financial stability of providers is key
- There is a lot of legacy debt from 2001 for a lot of players

Best of times for the data center industry
- Banks hate data centers
- Hard to get construction loans
- Data centers are the safe bet vs. mortgages to high credit risks
- Have the banks done that well in picking good loan programs? $100M is safer in a data center project than in high-risk mortgages, yet in the banks' eyes there is no difference

Thursday, April 23, 2009

vSphere 4 - VMware Turbo?

I love this quote taken from InformationWeek:

In its initial iteration, vSphere 4 can manage up to 1,280 virtual machines on 32 servers, or an average 40 VMs per server. Each server may have up to 64 cores, such as eight-way server with eight cores per CPU, for a total of 2,048 cores; each server may host 32 TB of RAM. VSphere 4 can also manage 8,000 network ports and 16 PB of storage.
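Unpacking the consolidation arithmetic in that quote (nothing here beyond division on InformationWeek's figures):

```python
# The consolidation math behind the vSphere 4 limits quoted above.

vms, servers = 1280, 32
cores_per_server = 64

print(f"{vms / servers:.0f} VMs per server")                      # 40
print(f"{servers * cores_per_server} cores across the cluster")   # 2048
print(f"{vms / (servers * cores_per_server):.2f} VMs per core")   # 0.62
```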

and this one:

Network throughput is now 30 Gbps compared with the previous 9 Gbps. A virtual machine can have up to 300,000 operations per second versus 100,000, and the new maximum number of transactions per second is 8,900. The latter figure is five times the total transactions per second of the Visa network worldwide, said Steve Herrod, CTO of VMware.

VSphere 4 includes Distributed Power Management, which monitors running virtual machines and moves them off underutilized servers, which are then shut down, onto servers running closer to capacity. By adopting Distributed Power Management, a VMware customer can save 20% of his power consumption, said Herrod. If all VMware customers implemented it, the saved power would be enough to supply the nation of Denmark for 10 days, he claimed. He then quipped it would be enough to serve Las Vegas for nine minutes.

VSphere 4 also includes VMware Host Profiles, which are golden images of desired server configurations. By referring to a host profile, a vSphere user can generate a new virtual machine and know its properties, according to Herrod.

My Crystal Ball says...

I finally had a chance to catch up on a few things early this morning and one of them was the pile of press clippings and RSS feeds.

Three predictions based on 100% personal speculation from press mentions & reading:

Cisco buys EMC
Dell buys Egenera
Oracle sells Sun hardware business to IBM

That's all for now.
Feel free to tell me how out of my mind I am...

Monday, April 13, 2009

Is a Green Data Center even possible?

This was the question running through my mind this weekend - and assuming it is possible: who would build it, what makes it green, and is there a standard? Yes, my mind was quite active this weekend…

The question came up after a chat I had with Austin Energy and the State of Michigan last week about wind power projects – both current and future. Why wind, you might ask? Because right now it is fashionable, it generates more electricity than solar on the same footprint, and it works. In the data center world, however, you plan for the crazy what-ifs – in this case, no wind for a month – so you do need to have backup at the ready, preferably natural gas. T. Boone Pickens has studied this extensively.

So let's consider that we have access to wind generation. The issue that I have seen is that the grid, at least the ERCOT one, is not designed to carry more generation than it has today. So you can generate thousands of megawatts from May-October during the windy 'tornado' season, but because the transmission grid can't do anything with the capacity being generated, you turn off the generation capability so you don't bring down the entire US power grid. Damned if you do, damned if you don't.

Given how much electricity data centers consume and need to sustain growth, it seems there should be a louder voice out there telling utilities to upgrade their power systems. There isn't, though. Why? Money. If I own a utility that gets a lot of electricity generated by wind (inexpensive), then any upgrades I make to my transmission system are (potentially) going to help my competition more than me. So they don't get done.

So while all this effort is being spent on building generation, until we give the grid an upgrade it will be the equivalent of watching high-definition TV over dial-up. There will be a lot more content (electricity) than capacity to deliver it (bandwidth).

I believe this is what ultimately will hold back expansion the most - of both data centers AND the delivery of green energy to some of the largest consumers of power out there: your friendly, stark, neighborhood data center.

Wednesday, April 8, 2009

The Big Switch

For anyone looking for a GREAT read about where I believe computing is heading, pick up a copy of Nicholas Carr's The Big Switch. It discusses virtualization and the rise of cloud computing. It's a couple hundred pages of great stuff to think about. I just finished it on my trip down to Reston, VA and will likely re-read it. It was that interesting.

Wednesday, April 1, 2009

Cloud Computing is the New Black

I am at the Virtualization (and Cloud Computing) conference in Manhattan, where I met with the cloud computing Czar from HP, along with 3tera, RightScale, vKernel, and others, and had a chance to walk the intimate expo floor and see what vendors were doing. The thought that hit me was that cloud computing is the New Black.

The level of confusion out there is staggering - on the marketplace side as well as the consumer side. I saw a fair number of the revolving slide decks up on LCDs, and they all felt compelled to define, yet again, what cloud is. To them.

For Rackspace the cloud offering is a service; for others, an IT initiative to reduce costs; for others, a chance to offer consulting services; and for still others it was all about the software solution they were pitching.

There were a lot of people trying to figure out what it was and my observation was that they were as confused when they left as when they got there.

One area that was not even discussed, let alone on display, was cloud enablement, and I think it is more important than any one solution. What I mean by cloud enablement is a set of components that are already stitched together to facilitate using the benefits of compute on demand.

The metaphor I have been using at CRG West is that we want to be Madison Square Garden, not the performance using it as a venue. In other words, vendors are focused on whether they want to be U2 or Stars on Ice, vs. the place where people (buyers) come to experience what it is they want to experience.

So we have started to build the new Madison Square Garden so when companies want the New Black, in whatever shade or size they need, we will be the place they come to experience it.

Thursday, March 26, 2009

Podapalooza - Good time in Boston

The other night in Boston, I hosted a cocktail reception and dinner for local folks to come to the Westin, check out the HP POD, enjoy drinks on me, and share a lobster dinner/clambake. All of it amounted to a GREAT time.

I also wanted to do a little shameless self-promotion on my blog, because I am starting to see the shift from pod computing being a 'WTF?' solution to an interesting one, and people are starting to truly grasp how and why PODs make sense. A couple of examples from discussions over lobsters:

"I could see using this to expand into a data center while my new data center is being built. I can use the PODs for two years and then drive them over to the data center when they have served their short-term purpose, and now I have a backup solution that I can put anywhere my risk managers want."

"For genome sequencing this is ideal. We are no longer constrained by scheduling jobs. I know what I have for CPU cycles, memory, everything - and I can just run job after job after job. It will be the workhorse our facilities guys always gave us flak about building because we couldn't cool it."

I think when companies start spending again, the pent-up demand will be such that if anyone has to get something up and running in a quarter, these PODs will be the best game in town. Imagine - a data center racked, stacked, and tested - delivered in 8 weeks. Wow. Just wow.

We will try to do more of these events in the near future. They are valuable, small, no spin, and FUN!





CRG WEST ANNOUNCES HP POD COMPATIBLE DATA CENTERS

CRG West Makes Deploying, Powering, and Securing HP Performance-Optimized Data Centers Easy

DENVER, CO– March 25, 2009 – CRG West is pleased to announce compatibility at multiple data centers across the United States with HP POD (Performance-Optimized Datacenter), a container-based data center offering. CRG West data centers in Boston, Los Angeles, and the San Francisco Bay Area can monitor, power, and secure HP PODs with unmatched efficiency.

HP PODs are a flexible, expedited solution for companies looking to augment existing data center infrastructure or deploy a ready-to-use data center IT solution. They support a wide variety of HP and third-party technology, increased power density supporting 3,500+ compute nodes or 12,000 large-form-factor hard drives and flexible configurations to meet specific customer requirements. In addition, HP offers supporting infrastructure services, such as assessment, preparation and deployment services, as well as data center design and planning. Each HP POD comes ready to use in a 40-foot container and requires approximately one megawatt to power.

Customers facing infrastructure, space, or power supply constraints can take advantage of this innovative technology by deploying their HP POD in a CRG West HP POD compatible data center. CRG West provides more than 2,000,000 square feet of robust, carrier-neutral data center space and 150+ megawatts of power across the United States. HP POD compatible data centers offer tailor-made deployment capabilities, with increased ceiling height, ample power and water supply, and crane-lift capable space, allowing for a deployment process that is as easy as “deliver, position, and power on.”

In doing so, customers can also take advantage of everyday CRG West amenities such as usage-based power pricing; 24-hour remote hands and security; redundant emergency generator power; access to 200+ carriers and ISPs; and a convenient on-line customer portal that allows for real-time power use tracking from any computer with an internet connection. In addition, utilizing CRG West data centers can eliminate the possibility of an outside deployment, where exposure to weather and other elements can require additional planning.

“CRG West HP POD compatible data centers make it easy for any prospective buyer to take advantage of the many perks, without jumping through power supply or space constraint hurdles,” said Mark Mac Auley, CRG West Vice President of Strategic Accounts. “These locations make an HP POD deployment as easy as ready-position-power, in a highly secure, protected CRG West data center environment.”

“Customers can maintain a competitive edge in the marketplace without having to invest in another building in order to build out their infrastructure,” said Steve Cumings, director of infrastructure, Scalable Computing and Infrastructure organization at HP. “The HP POD combined with CRG West’s nationwide, turn-key data center capabilities gives customers an easier way to deploy IT resources while minimizing expenses and improving data center efficiency.”

More information about the HP POD products, software and services is available at www.hp.com/products/pod.

An HP POD virtual video tour is available at www.hp.com/go/pod.

About CRG West:

CRG West is a leading developer, manager and operator of world-class data centers. Established in 2001, CRG West provides wholesale data center space and colocation, connectivity services, remote hands support and a public Internet peering exchange, the Any2 Exchange. A wholly-owned portfolio company and operating partner of The Carlyle Group, a global private equity firm with over $91 billion of equity under management, CRG West manages carrier-neutral data centers in Boston, Chicago, Los Angeles, Miami, New York, Northern Virginia, the San Francisco Bay Area, and Washington D.C. CRG West provides data center and peering opportunities to more than 500 of the world’s leading networks, enterprises, government institutions and universities. Visit www.crgwest.com for additional information.

Wednesday, March 18, 2009

It's all about Cosmos Computing

Clouds are too small for what is possible in the new computing paradigm.

Clouds are more than a Visio object or a piece of Powerpoint Clip Art.

Cloud up until now has been about defining what we see in terms of old, known models - Grid, Utility, etc. Where we are going and what we are doing is larger than that - it is at the Cosmos level.

We can define the components but not the end state. We don't know what we don't know, but we understand that this is bigger than the sum of its known parts.

Clouds work because the Cosmos works

The Cosmos players are bringing the open structures, protocols, and relationships together that allow the components of Cloud computing to work.

Cloud computing has many interpretations; the Cosmos doesn't. It is. It works. It has worked for years and is now evolving. Again.

The Cosmos is elastic, resilient, interconnected, open, limitless, expanding, and alive.

The Cosmos binds the defined and undefined together into a system that is bigger than all of us, comprehensible in its existence, but not in its nature.

We Nephologists have much to learn about its nature...

Tuesday, March 17, 2009

The Scale of Cloud Computing

I had an interesting chat with someone at one of the juggernauts in the space today, and we were trading stories about cloud computing presentations. His comment to me was, 'I saw one this morning and it showed 3 servers on it. Do these cloud guys understand that what they are showing isn't even a test environment in the cloud?'

Exactly why I coined the term Cosmos Computing. The cloud is too small.

We are talking about billions of computing components stitched together in some definable but not fully understandable way so that we can use them when we need to. Clouds are but a part - a subset - of Cosmos computing.

Stay tuned while I ruminate on this some more.

Quote of the day - Senator Grassley discussing AIG Execs

A prominent U.S. senator has intimated that executives of the troubled insurer American International Group Inc might consider suicide, adopting what he called a Japanese approach to taking responsibility for their actions.

Senator Charles Grassley, the top Republican on the Senate Finance Committee, made his comments on the Cedar Rapids, Iowa, radio station WMT on Monday.

"The first thing that would make me feel a little bit better toward them (is) if they'd follow the Japanese example and come before the American people and take that deep bow and say, I'm sorry, and then either do one of two things: resign or go commit suicide," Grassley said.

"And in the case of the Japanese," he added, "they usually commit suicide before they make any apology."

http://www.reuters.com/article/ousiv/idUSTRE52G3BQ20090317

Friday, March 13, 2009

Cloud Computing is Dead - Long Live Cloud Computing!

I have been on my share of webcasts and seen over 100 presentations about clouds.

You know what they all start with – a definition of what cloud is.

Why?

Is it that confusing? Is it that unclear?

Why is it so confusing and so unclear?

By rehashing and refining a definition, do we hope to somehow impart some clarity to the emerging space?

Cloud computing is unclear. Cloud computing is emerging.

Cloud computing is a paradigm shift and the approach thus far has been to address it like it was not a paradigm or business model shift. The ‘let’s put the Ferrari Red paint on the Apple Cart and call it a Ferrari’ approach.

Why is it a paradigm shift?

Cloud computing is a paradigm shift because there are cosmic forces at work ripping apart, re-forming, and creating new things out of what is and what was every day.

Cloud computing – in my opinion and in its nature - does three major things:

1. Centralizes available computing resources and components into 'galaxies'
2. Distributes the galaxies into a collective pool of computing (a universe)
3. That universe expands and morphs, creating and destroying galaxies

Therein lies the confusion: the definition does not fit the description.

If I am a hardware vendor – I have been in the business of selling hardware. Is hardware the end-all-be-all of computing? No! I need software, networking, and other pieces to make it part of the universe.

If I am a software vendor – I sell the software that provides the ability for me to do something on the hardware, and in conjunction with the network, other software and other pieces in the universe.

If I am a network company – I may build and make hardware AND software, but unless it is connected to other pieces then I am not part of the universe.

So what is cloud computing?

I believe it is not Cloud Computing but ‘Cosmos’ computing.

I really can't define it in and of itself, but I can metaphorically: it is the equivalent of our Cosmos - universe(s) - much is known, exploration is happening every moment of every day, and the forces that govern the universe(s) - known and unknown - are at work keeping things in some sort of balance, allowing it all to work.

I will keep on ruminating on the definition as it goes way above cloud in my opinion...

Thursday, March 12, 2009

Clouds are still...Well...Cloudy

I am catching up on my industry reading this morning and the recurring theme that is (still) enmeshed in the notion of cloud computing is one of cloudiness.

I was on a webcast with Surgient yesterday and bailed after 5 or 6 slides. Why? No useful information was presented. What I mean specifically is that the first several slides were all about Surgient and their product positioning - stuff I couldn't care less about, and quite frankly stuff I could get from the 'Products' tab or 'About Us' tab on their website.

Note to webcasters - I need to understand how your product works by you showing me how others have made it work. I do not need to know who the founders are. The obligatory logo slide needs to tell me why those companies use your stuff and how they use it. Answer the question 'So what?' in your first 5 minutes or I'm gone.

One of their applications is for setting up 'internal clouds'. What??? Is that the new buzzword to get CFOs to buy new computers?

The 451 Group did a survey in October 2008, and 84% of respondents said they had no plans for internal clouds. If I had to guess why, my top guesses are:

1. There is no cloud strategy in place, so no funding for stuff you can't define and understand. See Toxic Assets.

2. CFOs have figured out that a cloud is tech speak for shiny new computers that will sit underutilized until their leases are up. Again.

3. Why would you build out a cloud for yourself vs. using the ones out there?

One thing that has seemed to gel when cloud computing is discussed is that there are generally 3 flavors - SaaS, PaaS, and IaaS: Software, Platform, and Infrastructure as a Service, respectively. Using these definitions as jumping-off points, an internal cloud becomes centralized infrastructure as a service, which already exists in some form in the largest companies.
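
For the visual thinkers, here is a minimal Python sketch of who manages what at each layer. The split of responsibilities below is my own rough generalization, not anyone's official taxonomy:

# A rough sketch of the three service models as a lookup table.
# The layer assignments are illustrative assumptions, not a standard.
SERVICE_MODELS = {
    "SaaS": {
        "you_manage": ["your data", "user accounts"],
        "provider_manages": ["application", "runtime", "servers", "facility"],
    },
    "PaaS": {
        "you_manage": ["application code", "your data"],
        "provider_manages": ["runtime", "operating system", "servers", "facility"],
    },
    "IaaS": {
        "you_manage": ["operating system", "runtime", "applications", "data"],
        "provider_manages": ["virtualization", "servers", "power/cooling", "facility"],
    },
}

for model, layers in SERVICE_MODELS.items():
    print(f"{model}: you manage {', '.join(layers['you_manage'])}; "
          f"the provider manages {', '.join(layers['provider_manages'])}")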

There is much to be done in making the notion of cloud computing understandable by CIOs, CFOs, and CEOs, since they must decide how good it is for their business.

Tuesday, March 10, 2009

I like this stimulus plan better

http://www.microsoftstartupzone.com/Blogs/the_next_big_thing/Lists/Posts/Post.aspx?List=6bab7b08%2D81ca%2D4602%2Dbd97%2D4b7b2c893e88&ID=654

Thomas Friedman of the New York Times recently wrote “Start Up the Risk-Takers,” where he suggests the US government should stimulate the economy by funding startups, not by bailing out GM and Chrysler.

“You want to spend $20 billion of taxpayer money creating jobs? Fine. Call up the top 20 venture capital firms in America, which are short of cash today because their partners — university endowments and pension funds — are tapped out, and make them this offer: The US Treasury will give you each up to $1 billion to fund the best venture capital ideas that have come your way. If they go bust, we all lose. If any of them turns out to be the next Microsoft or Intel, taxpayers will give you 20 percent of the investors’ upside and keep 80 percent for themselves.”

“GM has become a giant wealth-destruction machine — possibly the biggest in history — and it is time that it and Chrysler were put into bankruptcy so they can truly start over under new management with new labor agreements and new visions. When it comes to helping companies, precious public money should focus on start-ups, not bailouts.”

Friedman’s main point is this: invest in the future (startups), not the past (bailouts). Startups are the best opportunity to create new jobs in high-growth industries, and to do it fast. According to the SBA, small businesses created 70% of all new jobs in 2007 and account for about 50% of all employment in the US economy. Small business is defined as 1-500 employees.

Fred Wilson of Union Square Ventures says “No Thanks”. Fred writes “The venture capital business, thankfully, does not need any more capital. It's got too much money in it, not too little. Just ask the limited partners who have been overfunding the venture capital business for the past 15-20 years what they think. You don't even need to ask them. They are taking money out of the sector because the returns have been weak.”

Government wants to create jobs, VCs want to create wealth – Do VC-backed companies create jobs? Yes, but their primary objective is to create wealth. VCs invest about $25 billion a year in over 3,500 companies. These companies grow fast and create jobs. But VCs primarily fund technology companies with fewer employees and explosive growth potential. Most VCs only fund 2%-5% of the companies they see, leaving 95% unfunded. These might be the companies that create lots more jobs.

Service-based companies create lots of jobs, but don’t get VC funding – What about all the viable companies that don’t meet the 10X return potential that most VCs demand? Would a VC have funded Wal-Mart, McDonald's, Subway, or Servpro when they were small startups? No way. Service-based companies don’t fit the VC model. But a successful service company might have 5,000 employees, compared to a successful technology company with 500 employees. The government should focus its stimulus money on programs that create jobs. New approaches are needed, but some existing programs are listed below.

Create 50,000 startups for $1B – How about funding for startup incubators similar to Y Combinator and TechStars? These startup mentorship programs are very successful in creating startups and driving innovation, but they only cover 30 to 50 companies a year, investing about $20K in each one. Why not start 50,000 companies a year? It would only cost $1B and could incubate the next Facebook, Google, or Microsoft…and millions of jobs. How many jobs will $1B invested in General Motors create? Government funding for VCs is probably not a good idea, but a small amount of funding for startup incubators might create the jobs that VC-backed companies won’t. $1B invested in 50,000 startups would conservatively create 250,000 jobs immediately. If just 2% of those startups became wildly successful, they would create 3 million to 5 million jobs.
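
The arithmetic behind that claim fits on one screen. Here it is as a quick Python sketch; the jobs-per-startup and jobs-per-breakout figures are the back-of-the-envelope assumptions implied by the numbers above, not hard data:

# Incubator math from the paragraph above -- all inputs are assumptions.
budget = 1_000_000_000            # $1B of stimulus money
per_startup = 20_000              # ~$20K per company, incubator-style
startups = budget // per_startup  # 50,000 companies

immediate_jobs = startups * 5     # ~5 people per startup -> 250,000 jobs
breakouts = int(startups * 0.02)  # 2% wildly successful -> 1,000 companies
breakout_jobs_low = breakouts * 3_000   # -> 3 million jobs
breakout_jobs_high = breakouts * 5_000  # -> 5 million jobs

print(startups, immediate_jobs, breakout_jobs_low, breakout_jobs_high)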

What can government do? Government should create incentives for investment. It is probably best not to make the investments directly. There are already some good programs and incentives in place that have been forgotten or underfunded for too long. Pouring money into these programs is certain to stimulate investment, inspire innovation, and create jobs. Here are a few examples of programs that stimulate investment from the private sector.

SBIC – Small Business Investment Company – A program managed by the SBA. SBICs are privately owned and managed investment funds that use their own capital, plus funds borrowed with an SBA guarantee, to make equity and debt investments in qualifying small businesses. The U.S. Small Business Administration does not invest directly in small businesses through the SBIC Program. This program encourages private investment in startups and small businesses.

SBIR – Small Business Innovation Research – Another SBA-managed program, the SBIR encourages small businesses to explore their technological potential and provides the incentive to profit from commercialization. SBIR targets the entrepreneurial sector because that is where most innovation and innovators thrive. By reserving a specific percentage of federal R&D funds for small business, SBIR protects small businesses and enables them to compete on the same level as larger businesses. SBIR funds the critical startup and development stages and encourages the commercialization of the technology, product, or service, which in turn stimulates the U.S. economy.

R&D Tax Credits – There is a 20% R&D tax credit for “incremental” R&D spending, over and above what you spent the previous year. There are lots of restrictions on which expenses qualify. Unfortunately, startups are limited to a 3% credit because they don’t have much of a prior-year spending base. This program could be significantly enhanced by loosening the restrictions, which would in turn stimulate research investment.

Seed Capital Tax Credit – Maine has a 40% Seed Capital Tax Credit for startups that can go as high as 60% for startups based in high-unemployment areas. This is a good example of targeted support for private investment to create startups, jobs, and innovation. More states should adopt Seed Capital investment tax credits.

There are probably lots of other little known programs that support investment in startups. We need to publicize them and invest more in them. Please leave a comment and a link to your favorite program.

Thursday, March 5, 2009

The Story (and Myth) of Metered Power

I was in Dallas, TX this week meeting with a number of companies from Real Estate Brokers to Data Center partners to enterprises. One thing I heard three times was that metered power isn't always metered power - it all has to do with where you put the meter.

At my company we put the meter on the panel that feeds the servers, switches and the gear you house with us. I thought that was the norm and industry standard. Boy was I wrong!

I found out in the course of discussions that some colocation providers put meters on the outside of the building or pod. What's the difference you ask? Big difference!

If you take the base colocation rent – let's say 100 kW of critical load (draw) – multiply it by 0.5, and add that 50 kW to the rent to cover cooling and infrastructure, you pay for 150 kW of potential draw, and that's what a provider will give you. That's great if the critical draw going to your equipment really is 100 kW. Makes sense.

If the meter is placed on the corner of the building or outside the pod, you are paying not only for your critical draw, but for lights, copiers, office-space A/C, and everything else NOT related to your gear. Plus 50%. Ouch.

Buyer beware, and be sure legal catches this. At $200 per kW, you could be paying an extra 100-150% per kW to run signage that's not yours, coffee machines you'll never drink a drop from, and offices that are kept at 64 degrees after hours. Not a good deal in this or any economic climate.
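
To make the difference concrete, here is a minimal Python sketch of the two metering schemes, using the post's 100 kW example. The $200 per kW rate and the 25 kW of 'house load' are hypothetical numbers for illustration:

# Panel metering vs. building metering, per the example above.
PRICE_PER_KW = 200.0   # $/kW per month -- hypothetical rate
GROSS_UP = 1.5         # critical load x 1.5 covers cooling/infrastructure

def panel_metered(critical_kw):
    # Meter on the panel feeding your gear: you pay for your draw only.
    return critical_kw * GROSS_UP * PRICE_PER_KW

def building_metered(critical_kw, house_load_kw):
    # Meter at the building or pod: lights, copiers, signage, and office
    # A/C ride on your bill before the gross-up is applied.
    return (critical_kw + house_load_kw) * GROSS_UP * PRICE_PER_KW

print(panel_metered(100))          # 100 * 1.5 * 200 = $30,000/month
print(building_metered(100, 25))   # 125 * 1.5 * 200 = $37,500/month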

Tuesday, March 3, 2009

Virtualizationstuff is now cross-posted

Just a heads up to my readers - I have cross-posted this blog at CRG West, and you can visit it here:

Virtualizationstuff at CRG West

I will remain opinionated, and call it as I see it, so fear not loyal readers...

Tuesday, February 3, 2009

Interesting Article on the Electricity and Efficiency of a Data Center

Having just met with the utility mentioned in this article last week, I thought it was an interesting read because it points out a major disconnect between greenness/efficiency and the SLAs which drive the data center business.

I think there is a market for this kind of approach; however, it is the segment of the market that does not require an SLA of any kind or that runs live-live replicated environments.


Published: Tuesday, Feb. 03, 2009 | Page 1B
A data center under construction at McClellan Park has won the nation's highest green-building rating for its groundbreaking, energy-saving, low-carbon-emitting "air-side economizers."

Engineers call it "free cooling" technology.

Anyone else would call it opening the windows.

"Outside air is an absolute must now," said Bob Sesse, chief architect at Advanced Data Centers of San Francisco, developer of the McClellan project. "It never made any sense not to open the window."

The windows in this case are 15-foot-high metal louvers that run the 300-foot length of the building, a former military radar repair shop. Inside, a parallel bank of fans will pull outside air through the filter-backed louvers, directing it down aisles of toasty computer servers.

The company figures it can get by without electric chilling as much as 75 percent of the year, thanks in part to Delta breezes that bring nighttime relief to the otherwise sizzling Sacramento region.

Data centers are among the biggest energy hogs on the grid. These well-secured facilities are the "disaster recovery" backups for banks, insurers, credit card companies, government agencies, the health care industry and other businesses that risk severe disruption when computer systems fail.

They house rows of high-powered computer servers, storage devices and network equipment – all humming and spewing heat around the clock, every day of the year.

Large "server farms" rival a jumbo-jet hangar in size and an oil refinery in power use. California has hundreds of them, including RagingWire Enterprise Solutions and Herakles Data in the Sacramento region.

Almost all are designed so that only a small amount of outside air enters, lest dust invade the high-tech equipment's inner circuitry and short out the works. Indoor air chillers run with abandon to make absolutely sure the vital processors don't overheat and crash.

Now, with energy prices rising, bottom lines dropping and computer equipment becoming more powerful – with hotter exhaust to prove it – data center developers realize the perfect computing environment is the enemy of the good.

"It's a return-on-investment question," said Clifton Lemon, vice president of marketing for Rumsey Engineers, an Oakland firm that specializes in green-building design.

"People are beginning to realize they can build data centers with the same performance, reliability and safety and save lots of money on electricity."

Manufacturers of data-center equipment are fast meeting the energy challenge, said KC Mares, a Bay Area energy-efficiency consultant.

"Just in the last several months, we are seeing a new generation of servers giving us dramatically increased performance while consuming no more power than the previous generation," Mares said.

The green factor also looms larger in the marketing of data centers.

"We have recently replaced our air-cooled chiller system with a more efficient water-cooled system and doubled our cooling capacity," Herakles touts on its Web site.

Not to be outdone, RagingWire boasts on its own site of a recent savings of 250,000 kilowatt-hours a month – enough to power about 1,700 homes in the Sacramento area – by improving its chilled water plant and cooling efficiency.

But data centers continue to run computer-room air conditioners during many hours in which the outside air is cool enough to do the job, according to engineers who research high-tech buildings for the federal Energy Department's Lawrence Berkeley National Laboratory.

"We wondered, 'Why are people doing this?' and what we found out is that the data industry had grown up this way back from the days of mainframes," said William Tschudi, principal researcher in the lab's high-tech energy unit.

The tape drives and paper punch cards in the giant mainframe computers of the 1960s and 70s were more sensitive to dirt than today's equipment. But while computing hardware grew more resilient to the elements, manufacturers held firm to their recommended ranges of operations on temperature, humidity and air quality.

"They had this myth built up that you had to have a closed system," Tschudi said.

Yet modern servers actually could handle a much broader range of environmental conditions, with protective coatings on circuit boards and hermetically sealed hard drives.

Microsoft Corp. broke a mental barrier last year when it tested a rack of servers for seven months in a tent outside one of its data centers in the rainy Seattle area. The equipment ran without fail, even when water dripped on the rack.

Similarly, recent experiments by the Berkeley lab engineers found that suspended particles in data centers drawing outside air for cooling were well within manufacturers' recommended ranges.

The "free cooling" system accounts for most of the 30 percent energy savings that Advanced Data Centers expects to gain over conventional data centers. The balance would come from efficiencies in electric fan systems, air chillers, lighting and battery backups. The project's first phase – a 70,000-square-foot center – is scheduled for completion this fall.

Last year, the company built a 45-megawatt substation to support long-term growth of up to 500,000 square feet across four buildings. As an incentive to follow through on its energy-saving plans, the Sacramento Municipal Utility District gave it a break on electricity rates, a discount worth $80,000 a year, said Mike Moreno, a key account manager with the utility.

Another bonus: SMUD, together with Pacific Gas and Electric Co. and California's other major utilities, recently gave Advanced Data a $150,000 "savings by design" award.

On top of that, the U.S. Green Building Council awarded the company its highest rating – platinum – under its Leadership in Energy and Environmental Design program.

If the center operates up to expectations, it will be the most energy-efficient data center known, Tschudi said.

"I think it's achievable."

Why High Density isn't Dense Enough

I have been involved in four discussions in as many days about High Density Hosting. When I ask what that means to them, I usually get a number north of 125 watts per square foot, up to 200 watts per square foot.

That is one way to look at it, but even at 200 watts per square foot, if you're loading up a multi-core box with dozens of VMs, you will be headed north of that. Typically, hosting companies balk at the space required to cool 200-plus watts per square foot. Power density is about power, NOT space.

So in one conversation, the company said they wanted to deploy ten 30 kW cabinets – a total draw of 300 kW. If I throw in a PUE of 1.5, I am at 450 kW of draw. If my 10,000 square foot pod doesn't have half a megawatt available in both utility and generator power, it can be the best space on earth, but you can't power it. No power = no CPU cycles, and no company.
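
Worked out as a sketch (the cabinet counts, pod size, and PUE are the numbers from that conversation; the rest is arithmetic):

# Ten 30 kW cabinets behind a PUE of 1.5, per the example above.
cabinets = 10
kw_per_cabinet = 30
pue = 1.5

critical_kw = cabinets * kw_per_cabinet  # 300 kW of IT load
total_kw = critical_kw * pue             # 450 kW at the utility/generator

pod_sq_ft = 10_000
watts_per_sq_ft = total_kw * 1_000 / pod_sq_ft  # 45 W/sq ft across the pod
print(critical_kw, total_kw, watts_per_sq_ft)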

Buy power, not space. Think about when the lights go out - you aren't thinking about what a nice neighborhood you live in or what a good investment it was, or how sweet the location is. You just want to nuke some popcorn and watch the Red Sox beat the Yankees.

Monday, February 2, 2009

Stirring the Pod (Container) a Little...

I just read a piece put out by the folks over at Tier 1 Research, part of the 451 Group. They point to a gap in understanding of a growing space - containers and their application and importance in the business of data centers.

Their biggest issues are with Scalability and Maintainability (See below). I don't get it. A snippet says:

What happens when a server reaches catastrophic failure? Do the containers themselves get treated as a computing pool? If so, at what point does the container itself get replaced due to hardware failures? There are just so many unknowns right now with containers that T1R is a bit skeptical.

Interesting questions; however, what happens when a server reaches catastrophic failure in a traditional data center? Why is it different?

Do containers themselves get treated as a computing pool? - Duh. If you have a container in ~500 square feet that is the equivalent of 4,000 square feet of traditional space based on CPU density alone, I would think a pool of CPU cycles is a safe assumption.

Replacing containers is easier than replacing data centers - you can't ship an entire traditional data center and just plug it in. Predicting the point of failure in a relatively new segment of the business presumes these will fail more often than a data center would. Why would they fail more? They are servers in a very dense, very efficient, controlled environment. My money is on them failing LESS, not more.

Scalability is a non-issue. Containers scale to over a dozen petabytes of storage per container. You can even stack them to save horizontal floor space or, in Dell's case, to separate environmentals from compute.

For maintenance - there are some issues, yet they do not point out a single one. Having vetted most of the containers on the market - those from Sun, Microsoft, HP, Dell, Rackable, and Verari - by speaking with the heads of all the product groups, the salespeople who talk to customers, and the facilities/install folks, I will tell you that the analysis that was done was PURELY SPECULATIVE and not based on fact or experiential knowledge.

If there is an Achilles heel in containers, it is that the ones that run water through them will have maintenance issues. Stuff grows in water. Verari uses a refrigerant-based solution, which is a step in a well-thought-out direction. But as for maintenance: if you can open a door, have serviced equipment in a traditional data center, and have worked on a server before, you can handle a container.

'There are too many unknowns right now'? What?

Tier 1, you are a research firm - do the research. There are few unknowns in my book right now, other than when the containers and the equipment they contain get End-of-Life'd. If I (a VP of a data center company) can call up Dell, Microsoft, Verari, Rackable, and Sun, get an audience, delve into the technology with them, talk to end customers, and actually figure out the good, the bad, and the ugly of containers, then surely you can too.

And I can assure you that you stand to benefit more than I do from containers: it is a new research area for which people pay you money. I do this for fun, and my research is done on my own time. The benefit to me is credibility and pure exploration of an area that is so compelling from an efficiency, economics, and common-sense perspective that there are FEW unknowns at this point - assuming you know the data center business. I know enough to be dangerous, and enough to know what is good for my business and my customers.

Further on in the piece, economics and credit markets are discussed. Containers are almost entirely OpEx when leased, depreciated over 3-5 years or over whatever useful life the servers have for the application they serve. In other words, you don't need access to CapEx, nor do you need to spend CapEx right now on these. Lease vs. own is an easy call right now, since containers are boxes of storage and CPU cycles and little else.
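
A rough sketch of that lease-vs-own math. The purchase price, term, and cost of capital below are hypothetical figures for illustration, not vendor pricing:

# Owning spreads capex over the useful life; leasing converts the same
# spend to opex plus a financing premium. All inputs are assumptions.
container_price = 2_000_000   # hypothetical all-in purchase price
term_months = 48              # ~4 years, roughly one server generation
annual_rate = 0.08            # hypothetical cost of capital

monthly_depreciation = container_price / term_months
monthly_lease = monthly_depreciation + container_price * annual_rate / 12
print(round(monthly_depreciation), round(monthly_lease))  # ~41667 vs ~55000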

My 2009 Data Center trends will be covered in depth in an upcoming post, and I will tell you that Containers are one of the trends. They won't replace data centers any time soon, but they will present a very green and very compelling option for us all to look into.

mark.macauley@gmail.com


T1R INSIGHT: 2009 DATACENTER GROWTH TRENDS AND MODULARIZATION VS CONTAINERIZATION by Jason Schafer, Aleetalyn Schenesky

T1R has an overall positive outlook for the datacenter space, although due to the current tight debt market and the subsequent slowing of datacenter builds, there are persisting questions regarding datacenter trends for 2009 as well as thoughts on creative ways datacenters can increase their supply and/or footprint without expending huge amounts of capex (new datacenter builds can cost upwards of $32.5m for 25,000 square feet). As far as major datacenter trends for '09 go, T1R believes the three major trends will continue to be modularization (not to be confused with containerization), IT outsourcing and that datacenter providers with significant cash flow from operations, such as Equinix, will expand their market share as cash-light competitors are forced to slow expansion. As far as creative means for datacenters to increase their supply and footprint, T1R believes that there will be continuing M&A activity in '09 as smaller datacenters will continue to suffer from the lack of available capital and larger and regional datacenters (many of which are now owned by equity firms) look to increase their supply as utilization creeps up.


Modularization

Although modular datacenters are really in their infancy, T1R believes that the trend toward using modular datacenters will continue to gain momentum over the course of 2009. Currently, there are a few approaches to this concept including Turbine Air Systems' modular cooling system, where the system itself is pre-built and requires little to no on-site 'design and build' work (and time); and APC's pod-based rack systems, where the cooling and power is at the row level and can be installed as datacenter computing needs grow.

The idea of a modular datacenter is that it takes the current infrastructure pieces (cooling, switchgear, etc.) that have been traditionally custom built for each facility and makes it more assembly-line and mass-produced. The obvious benefits to this are standardization and ease/speed of installation (and presumably fewer issues after a design is proven and tested). A current disadvantage to this approach is maintenance. For example, maintaining many row-based cooling systems would be more complex to perform and track than the current traditional method (since currently cooling is more or less room-based and not row-based). The size of the equipment being serviced is decreased but the number of pieces of equipment is increased. In most cases, this means that total maintenance time is increased, as the size of the equipment isn't proportional to maintenance time. T1R believes that as the modular datacenter design matures, however, the maintenance issue will become less of a factor, especially when compared with its installation advantages.


Containerization

While T1R believes that modularization has potential significant advantages, we are currently not bullish on containerization. Containerization, literally, is building an entire datacenter inside a commercial shipping container. This includes the servers, the cooling, the power, etc. in one container. (Dell has changed this slightly with its double-decker solution where it uses a second container to house the cooling and power infrastructure separately from the server container. Some containers contain no power or cooling at all, but rather are designed to be a bit more efficient due to enhanced airflow and are dependent on an external source for power. T1R believes that these economics make even less sense.)

With a containerized approach, the datacenter gets shipped and plugged in and it's up and running, more or less. In T1R's opinion, the main issues with containers are scalability and maintainability. A container is a very enclosed space and, in a lot of cases, not very 'engineer-friendly.' What happens when a server reaches catastrophic failure? Do the containers themselves get treated as a computing pool? If so, at what point does the container itself get replaced due to hardware failures? There are just so many unknowns right now with containers that T1R is a bit skeptical. In addition, containers contain approximately 2,000-3,000 servers in one package and are sold in a huge chunk, so they tend to favor those that would benefit the most from such a product – namely, server manufacturers. Given the aforementioned unknowns and scaling and maintenance issues, T1R does not believe container datacenters to be solutions for the majority of datacenter customers.


Expansion via distressed assets?

With the credit crunch going strong, how will datacenters continue to increase supply and footprints? If history is to be our guide, the logical conclusion here is 'Hey, let’s go buy up all those distressed assets at fire sale prices from failing industries à la 2001.' Sounds like a great plan; however, the current situation is significantly different from the high-tech meltdown that allowed companies like Switch and Data, 365 Main, Equinix, Digital Realty Trust and CRG West, among others, to acquire large datacenter assets from telcos relatively cheap.

In 2001, datacenter builds by telcos were based largely on speculative revenues and business models, therefore parting with those assets in order to recoup some of the capex spend in view of what the telcos perceived to be a failing industry, was a no brainer. However, those were newly built, and for the most part, high-quality assets.

So what's on the market today? Well, potentially there are datacenter assets from financial and insurance institutions. The difference now, as opposed to then, is that these current enterprise assets are much lower quality. Certainly upgrades can be made in terms of things like power density, but the real problem is size – they're simply not large enough to be worthwhile for most major datacenter providers to be interested. And, let's remember, even for the few higher-quality assets that may be out there, turnaround time to remove old equipment and upgrade is approximately 12 months, resulting in no meaningful supply add until at least 2010. T1R does believe, however, that these limited, higher quality assets will be acquired in 2009.


Expansion via M&A

How are the datacenter providers increasing their supply and footprints, then? Aside from small expansions such as the one by DataChambers mentioned on January 29, T1R believes that for 2009 this will largely be accomplished through M&A activity, a trend that has already begun to emerge over the course of the last year.

In the second half of 2008, several M&A deals of this nature closed, including Digital Realty Trust acquiring datacenter property in NJ and Manchester, UK; Seaport Capital acquiring American Internet Services (a San Diego colocation provider), which in turn acquired Complex Drive (another San Diego colocation provider) a month later and Managed Data Holdings' acquisition of Stargate (a datacenter outside of Chicago) to continue its datacenter rollup strategy for expansion. In January of 2009, the first M&A deal of the year was announced when Center 7 (a portfolio company of Canopy Ventures, an early-stage venture capital firm targeting information technology companies in the western US) acquired Tier 4, a datacenter provider in Utah.

Going forward, T1R expects similar deals with smaller datacenter providers, like Tier 4, to investigate being acquired to enable access to capital for expansion. We also think private equity as well as a few of the top datacenter providers are potential acquirers in 2009. For more details on M&A activity in the datacenter space, please refer to T1R's Internet Infrastructure: Mergers and Acquisitions Second Half Update report.

Wednesday, January 28, 2009

Microsoft and Google - Are they really ceasing data center expansion?

With the numerous reports of Google, Microsoft, and others halting data center builds you would think the data center business was caving in on itself like a dying star.

Town governments, as well as the state governments where the builds are halted, are groaning. Sure, they gave up tax breaks (their revenue), and the payroll tax that was supposed to come now won't, but I have to wonder why they are that concerned. Data centers have a lot of servers in them, not a lot of payroll, so at the end of the day they are out what they would have been out anyway: the revenue on the construction and sales tax.

So the governments are feeling what the companies are – a contraction in the economy, driven by lack of access to capital in the credit markets. So things contract.

I can’t help but wonder though, with green technology evolving quickly AND the emergence of containers to house servers and their inherent improvement in efficiency, are they really thinking and retooling for the long term?

Both are smart companies, and I learned a long time ago that smart companies will take a strategic look inside their four walls when things slow down to retool and improve for when things do expand again.

I was in a meeting today discussing PUE of containers with an organization considering a build of a new data center. If you can make a data center TWICE as efficient, for 1/5 the cost, and shift the capex to almost 100% opex - wouldn’t you do it?
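
Here is that question in numbers, sketched for 1 MW of IT load. The capex figures, energy price, and PUEs below are my own assumptions for illustration, not figures from that meeting:

# 'Twice as efficient for 1/5 the cost', sketched in Python.
it_load_kw = 1_000
price_per_kwh = 0.08   # hypothetical utility rate
hours_per_year = 8_760

options = {
    "traditional": {"capex": 10_000_000, "pue": 2.0},   # assumed baseline
    "container":   {"capex": 2_000_000,  "pue": 1.25},  # ~1/5 the capex
}

for name, dc in options.items():
    annual_kwh = it_load_kw * dc["pue"] * hours_per_year
    annual_power_cost = annual_kwh * price_per_kwh
    print(name, dc["capex"], round(annual_power_cost))
# traditional: $10M capex, ~$1.40M/year in power
# container:    $2M capex, ~$0.88M/year in power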

I think Microsoft and Google are simply rethinking how they want to deploy computing horsepower. Containers are a very modular, very fast, and very economical way to deploy it. I fully realize that those of us in the traditional data center realm are looking at containers and scratching our heads; however, I for one believe they are the future of data centers – which, at the end of the day, house servers with CPU and memory and give us resource pools, or clouds.

If that computing resource pool can be deployed at massive density, double efficiency, and at a fraction of the cost – who in their right mind wants to defend the position of paying twice as much for something half as good?

Me neither…

I am working on publishing a white paper that explores the cost and efficiency of containers vs traditional data centers and will post the link when it's done in a few weeks. If you care to share data - please do. I will change the names of the innocent if needed - I just want to make sure I put out real data.

Tuesday, January 27, 2009

Sun's new data center

I just read a story about Sun Microsystems' new data center in Broomfield, CO. There were some interesting highlights in the blurb.

Reduced Electrical Consumption: By 1 million kWh per month, enough to power 1,000 homes in Colorado

While this is a great stat, where did the savings come from? What is the PUE? Sun rolled out the Blackbox product last year - containers/pods are the next wave of computing as far as I am concerned - and I'll bet the container is still more efficient; the power a Blackbox saves on cooling alone would power 1,000 homes. Cooling can double a data center's electrical cost and draw, so unless the environment is optimized to support heat dissipation and cold air flow, you don't gain much.
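
For reference, PUE is just total facility power divided by IT equipment power, and the quoted stat passes a quick sanity check. A minimal sketch (the kW splits are my assumptions, not Sun's numbers):

# PUE = total facility power / IT equipment power; 1.0 is the ideal.
def pue(total_facility_kw, it_kw):
    return total_facility_kw / it_kw

print(pue(2_000, 1_000))   # 2.0  -- cooling/overhead doubling the draw
print(pue(1_250, 1_000))   # 1.25 -- a well-optimized container or site

# Sanity check on the quoted savings: 1M kWh/month across 1,000 homes
print(1_000_000 / 1_000)   # = 1,000 kWh per home per month -- plausible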

Reduced Raised Floor Datacenter Space: From 165,000 square feet to less than 700 square feet of raised floor datacenter space, representing a $4M cost avoidance.

I used to walk through a raised-floor data center every week in a previous job, and I always wondered why they were all the rage. The amount of cold air under the floor that didn't do anything was astounding. I know the electricians had a better environment to work in, but at the end of the day, no one ever asked me, 'What am I paying for my share of idle cold air?' Cooling is a necessary evil, and an expensive one. The key is to look not at cooling, but at how heat is removed.

Enhanced Scalability: Incorporated 7 MW of capacity that scales up to 40% higher without major construction

What the **** does that mean? Are they using high-octane diesel in their Caterpillars? Funny Car fuel? A four-barrel carburetor?

Superior Cooling: The world’s first and largest installation of Liebert advanced XD™ cooling system with dynamic cooling controls capable of supporting rack loads up to 30kW and a chiller system 24% more efficient than ASHRAE standards.

I love Lieberts - they work, they're reliable, and they mask poor design. This series of Lieberts appears to be more modular and more deployable into trouble/hot spots than their other offerings. That is great; however, they do not put the draw in the technical documentation. Efficiency is great, but if it takes 30% more power to generate 10% cooler air in the same footprint, I don't see the value.

Greener Architecture: Including a flywheel uninterruptible power supply (UPS) that eliminates all lead and chemical waste by removing the need for batteries, and a non-chemical water treatment system, saving water and reducing chemical pollution.

I wonder when someone will actually have the stones to say: green is great until it doesn't work or until it costs 1.5x what traditional (proven) technology can deliver. Case in point: I wonder what the carbon output of manufacturing a wind turbine is compared to the 0.3 megawatts it generates. The steel manufacturing (they don't use woodstoves or solar to melt metal), the wiring manufacturing (people dressed in hemp clothing with bamboo shovels don't dig up wiring deposits), the lubricants, the shipping fuel... Blindly going green because it's trendy may not be the answer. Yet. Did you cut down to one square of toilet paper to save a tree? Me neither.

Overall Excellence: Recognized with two Ace awards for Project of the Year from the Associated Contractors of Colorado, presented for excellence in design, execution, complexity and environmental application.

But not efficiency? How can you give an award for data center design without efficiency being the most heavily weighted category for judging it? It is like giving out an award for best dessert without tasting it.

I love that companies like Sun are pushing the envelope and I hope it continues. I just hope that common sense enters the equation and that an actual yardstick is used to measure what matters.