Lawyer Bait

The views expressed herein solely represent the author’s personal views and opinions, and not those of anyone else - person or organization.

Wednesday, February 24, 2010

Data Protection (Movement) in the Cloud

This heat map gives us a visual representation of cloud data privacy policies. It is VITAL to understand if you do any work with the Government; here's why:

The data cannot leave the United States, and if you use Amazon or other cloud offerings from global players, there is no way to ensure that data and workloads stay in the US.

So while your cloud sales rep may be local, look under the sheets and see where the data is, and where it could be, at any given point.
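If you want a concrete way to "look under the sheets," here is a minimal sketch of mine using today's AWS Python SDK (boto3) and a made-up bucket name. It simply asks AWS which region a storage bucket actually lives in:

import boto3

s3 = boto3.client("s3")

# get_bucket_location returns an empty LocationConstraint for the default us-east-1 region.
resp = s3.get_bucket_location(Bucket="my-gov-workload-bucket")  # hypothetical bucket name
region = resp.get("LocationConstraint") or "us-east-1"

if region.startswith("us-"):
    print(f"Bucket data currently lives in a US region: {region}")
else:
    print(f"WARNING: bucket data lives in {region}, outside the United States")

This only tells you where the data is right now, not where replicas or failover copies could end up, which is exactly why you still have to ask the hard questions.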

Thursday, February 11, 2010

Cisco Overlay Transport Virtualization (OTV) question

Does anyone know what the failover capabilities are with OTV?

Specifically, I want to know how long it takes for workloads to be moved and connections restored. Example:

Data Center #1 has a power issue, site switches to battery and gives me 8 minutes. Can I have another site (DC #2) up/failed over within the 8 minutes with no data loss provided that the systems are configured correctly?
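While I wait for an answer, here is the kind of back-of-envelope math I am doing. Every number below is my own assumption for illustration, not a vendor figure:

# Can N VMs be evacuated across the data center interconnect before the UPS dies?
vm_count = 40          # VMs that have to move (assumption)
avg_ram_gb = 8         # memory per VM that a live migration must copy (assumption)
link_gbps = 10         # usable bandwidth between DC #1 and DC #2 (assumption)
efficiency = 0.7       # fudge factor for protocol overhead and contention (assumption)
battery_minutes = 8    # runtime the UPS gives us

data_gb = vm_count * avg_ram_gb
seconds = (data_gb * 8) / (link_gbps * efficiency)   # GB -> gigabits, then divide by Gbps
verdict = "fits" if seconds <= battery_minutes * 60 else "does NOT fit"
print(f"Estimated evacuation time: {seconds / 60:.1f} minutes ({verdict} in {battery_minutes} minutes)")

With those assumptions the move takes about six minutes, which would squeak in under the eight-minute window, but the real answer depends on what OTV and vMotion actually do when the links and storage are under stress - hence the question.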

Tuesday, February 9, 2010

More details on Cisco OTV

http://jasonnash.wordpress.com/2010/02/09/cisco-announces-otv-the-private-cloud-just-got-more-fun/

I was going to get into another layer of detail, and Jason already did. I lifted this from his site (linked above):

What OTV does is that it allows you to connect two L2 domains that are separated by a L3 network. Basically, it’ll encapsulate Layer 2 traffic inside an IP packet and ship it across the network to be let loose on the other side. In this way you can make two logically separated data centers function as one large data center. The beauty of OTV is that it does away with a lot of the overly complicated methods we previously used for this sort of thing. It’s really, really simple. The only catch is that you need Nexus 7000s to do it today. How simple is it? Here is all the configuration you need on one switch in your OTV mesh:

otv advertise-vlan 100-150
otv external-interface Ethernet1/1
interface Overlay0
description otv-demo
otv site-vlan 100
otv group-address 239.1.1.1 data-group-range 232.192.1.2/32

That’s six lines, including a description line. Basically, you enable OTV and assign an external interface. The switch, like all good little switches, keeps a MAC table for switching frames but for those MACs on the other side of the L3 network it just keeps a pointer to the IP of the far end switch instead of an interface. It knows that when a frame destined for a MAC address on another switch arrives to encapsulate it in to an IP packet and forward it out. The switches all talk to each other and exchange MAC information so they know who is where. This communication of MAC information is handled via a multicast address. Very simple, very elegant. All done without the headaches of other tunneling or VPN technologies.
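Stepping outside Jason's quote for a second: to make that MAC table idea concrete, here is a toy sketch of my own (not Cisco's implementation) showing local MACs pointing at interfaces and remote MACs pointing at the IP address of the far-end OTV switch:

# Toy model of the forwarding behavior described above. Addresses are made up.
mac_table = {
    "00:1b:54:aa:aa:aa": {"type": "local",  "next_hop": "Ethernet2/1"},
    "00:1b:54:bb:bb:bb": {"type": "local",  "next_hop": "Ethernet2/2"},
    "00:1b:54:cc:cc:cc": {"type": "remote", "next_hop": "192.0.2.10"},  # IP of far-end OTV switch
}

def forward(dst_mac: str) -> str:
    entry = mac_table.get(dst_mac)
    if entry is None:
        return "unknown unicast: flood within the local site only"
    if entry["type"] == "local":
        return f"switch the frame out {entry['next_hop']}"
    # Remote MAC: wrap the whole Ethernet frame in an IP packet and send it across
    # the L3 network to the far-end switch, which unwraps it and delivers it locally.
    return f"encapsulate the frame in IP and forward it to {entry['next_hop']}"

print(forward("00:1b:54:cc:cc:cc"))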

Monday, February 8, 2010

Cisco's OTV - Overlay Transport Virtualization

I was taking a look at Cisco's OTV capability they are rolling out on the Nexus gear and my first impression was - wow!

The gist of it: OTV is a new feature of the NX-OS operating system that encapsulates Layer 2 Ethernet traffic within IP packets, allowing Ethernet traffic from a local area network (LAN) to be tunneled over an IP network to create a “logical data center” spanning several data centers in different locations. OTV will be supported on Cisco’s Nexus 7000 in April 2010, and existing Nexus customers can deploy OTV through a software upgrade.
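To picture what "Layer 2 inside an IP packet" looks like on the wire, here is a rough sketch using scapy. This is not the actual OTV header format - it just shows a complete Ethernet frame riding inside an outer IP packet between two edge devices, with made-up addresses:

from scapy.all import Ether, IP, ICMP, GRE

# The original frame as it would appear on the LAN in data center #1.
inner_frame = Ether(src="00:1b:54:aa:aa:aa", dst="00:1b:54:cc:cc:cc") / \
              IP(src="10.1.1.10", dst="10.1.1.20") / ICMP()

# Wrap it in an outer IP packet addressed to the edge switch in data center #2.
# Protocol 0x6558 ("transparent Ethernet bridging") tells the far end a frame follows.
outer_packet = IP(src="192.0.2.1", dst="192.0.2.10") / GRE(proto=0x6558) / inner_frame

outer_packet.show()   # prints the nested layers: IP / GRE / Ether / IP / ICMP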


Cisco says its overlay approach makes OTV easier to implement than using a dark fiber route or MultiProtocol Label Switching (MPLS) over IP to move workloads between facilities.

“Moving workloads between data centers has typically involved complex and time-consuming network design and configurations,” said Ben Matheson, senior director, global partner marketing, VMware. “VMware VMotion can now leverage Cisco OTV to easily and cost-effectively move data center workloads across long distances, providing customers with resource flexibility and workload portability that span across geographically dispersed data centers.

“This represents a significant advancement for virtualized environments by simplifying and accelerating long-distance workload migrations,” Matheson added.

My opinion on why this is important centers on Layer 2. Layer 2 is where peering happens. Peering allows companies to move data around without paying transit charges for it: bandwidth providers agree to pass traffic from one network to another via a cross connection between the two networks. Instead of buying a point-to-point OC-192 between data centers, a company can colocate IT gear in a data center at a peering point, buy a port on a peering exchange like Any2, and cross connect to other networks that can move traffic around at Gig and 10 Gig (and up) speeds. Those connections are made at Layer 2.

A pertinent example would go like this:

An online gaming company colocates 50 cabinets in One Wilshire or 900 N. Alameda. They buy a port on the Any2 Exchange and set up 5 cross connections to different networks that peer at One Wilshire and 900 N. Alameda - Level 3, AboveNet, Tata, Vodafone, and NTT, for example. As they expand their global footprint, they can move VMs - workloads and game play - from one data center to another using the cross connects, without having to buy large point-to-point pipes from one facility to another.

Another example: a US-based space agency I have done some work with has containers that house a cloud offering (OS) in them. One of their satellites takes 100 pictures of the rings of Saturn one morning, and they need to distribute those massive images to thousands of constituents worldwide. In the past they might have purchased multiple 10 Gig pipes from their Center to a handful of hubs they interconnect with - big money for big bandwidth, which they need. Using this OTV technology, they buy a fat pipe from their Center to 55 S. Market in San Jose (3 miles or so), buy a port on an exchange that peers there, and now they can move those photos, videos, etc. to their other hubs that use the same peering exchange, without paying for the bandwidth between 55 S. Market in San Jose, CA and, say, Chicago - 2,167 miles. That pays for the deployment of other containers wherever there is cheap, 100% green power; they can use peering to expand the footprint's network, and if a better spot becomes available, they move the container after offloading the workload to another container or two.

This is one of those game-changing technologies in how people deliver compute to customers. For large-scale deployments, especially those that must be green and use wind-generated or nuclear-generated power, this is a huge advantage. You now have the ability to drive physical and virtual movement of workloads based on criteria other than whether there is (or we have) a data center there.

I will be watching this solution closely.

Wednesday, February 3, 2010

The Green IT shell game

I was doing research for an RFP yesterday about carbon footprints as they relate to PUE, data center operations, and IT resources in general. What I realized was that IT still looks at itself in silos, not systems. Let me explain...

This morning I got my copy of eWeek and on the cover was a pointer to Green IT Solutions - The Real Deal - on page 16. I'll admit I didn't spend a lot of time on the article because it was just like 100 others I have read in the past few months that follow a simple formula:

Virtualization = Green

Um, not exactly...

Let me take you through a systems view of carbon in data center operations:

The PUE of most data centers is 2.0 or higher. This means that for every dollar spent powering servers, an additional dollar is spent on common facilities electricity to support those systems. In the container model, the PUE is 1.2, which means 80 cents of every dollar that used to go to common facilities/infrastructure is saved. For a 500 kW deployment at $0.10 per kWh (roughly 730 hours in a month), a data center with a PUE of 2.0 costs $73,000 per month for electricity. With a PUE of 1.2 the cost is $43,800. That is a savings of $29,200 per month, and all of the electricity is green.
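Here is the arithmetic behind those numbers, spelled out (730 hours per month assumed):

IT_LOAD_KW = 500        # IT load from the example above
RATE_PER_KWH = 0.10     # $ per kWh
HOURS_PER_MONTH = 730

def monthly_cost(pue: float) -> float:
    total_kw = IT_LOAD_KW * pue   # IT load plus facility overhead
    return total_kw * HOURS_PER_MONTH * RATE_PER_KWH

traditional = monthly_cost(2.0)   # $73,000
container = monthly_cost(1.2)     # $43,800
print(f"PUE 2.0: ${traditional:,.0f}/month   PUE 1.2: ${container:,.0f}/month   "
      f"savings: ${traditional - container:,.0f}/month")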

To produce a kWh of electricity from coal, about 2.3 lbs of CO2 are emitted (see http://cdiac.ornl.gov/pns/faq.html). A 500 kW load running for a month is roughly 365,000 kWh; multiplied by 2.3 lbs per kWh, that is 839,500 pounds, or about 419 tons, of CO2 per month. Since wind eliminates 98% of carbon emissions, the net carbon footprint drops to about 8.4 tons per month (see http://www.parliament.uk/documents/upload/postpn268.pdf)
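And the carbon math from the same 500 kW example, spelled out the same way:

LOAD_KW = 500
HOURS_PER_MONTH = 730
LBS_CO2_PER_KWH_COAL = 2.3    # from the CDIAC figure cited above
WIND_REDUCTION = 0.98         # wind eliminates ~98% of emissions

kwh_per_month = LOAD_KW * HOURS_PER_MONTH            # 365,000 kWh
coal_lbs = kwh_per_month * LBS_CO2_PER_KWH_COAL      # 839,500 lbs
coal_tons = coal_lbs / 2000                          # ~419.75 tons
wind_tons = coal_tons * (1 - WIND_REDUCTION)         # ~8.4 tons
print(f"Coal: {coal_tons:,.1f} tons CO2/month   Wind: {wind_tons:.1f} tons CO2/month")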

When I look at a "virtualization = green" claim, I see some major gotchas:

1. New equipment will likely need to be purchased. The manufacturing process is not carbon-light.

2. This means more mercury and other nasty stuff in the new equipment, on top of the old equipment. Recycling gets you a couple of brownie points.

3. If a new data center is constructed, leased, or expanded, there is the cost of manufacturing, transporting, and assembling all of the components. If it's still powered by coal, you gained nothing.

My point being: virtualization is NOT (or only kinda) green; it delivers a measurable but not significant reduction in carbon footprint. My personal stance is that you are far better off getting your utility to bring wind energy into your data center and cutting carbon by the boatload. Better yet, get wind-produced power, use an existing data center someplace cool, and open the windows when you can.

The other thing I found very amusing in my research was a data center with a LEED Platinum rating that was powered by multiple 50+ year old coal plants. It's like tinting mercury pink to make it 'safe', isn't it?