Six factors slowing adoption of converged I/O

So if converging the I/O infrastructure in data centres is all the rage, what’s taking IT shops so long to do it?

Six reasons:

• New technology attempting to replace proven and reliable implementations.

• New equipment requirements.

• New standards and proprietary techniques to consider.

• Organizational and operational changes.

• Infrastructure management and stability.

• Questionable benefits beyond the server and access switch layer.

Converged I/O – running LAN and storage data through the same wires and switches to reduce components and cost – is a years-long journey, though, not an endeavor to be rushed or taken lightly. IT shops have to weigh their situation carefully and know where they want to go, and how and when to get there, before embarking.

Generally, converged I/O comprises three key elements: 10Gbps Ethernet, Fibre Channel over Ethernet (FCoE) and Ethernet equipped with the lossless Data Centre Bridging (DCB) standards from the IEEE. FCoE, which tunnels Fibre Channel storage traffic through Ethernet, requires DCB to make Ethernet behave as if it had the resiliency of Fibre Channel – lossless data transmission.
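To make the tunnelling concrete, here is a minimal Python sketch of the encapsulation idea: a Fibre Channel frame rides as the payload of an Ethernet frame carrying the FCoE EtherType (0x8906). The field layout is simplified and the SOF/EOF delimiter values are placeholders, so treat this as an illustration of the framing, not a wire-accurate FC-BB-5 implementation.

```python
import struct

FCOE_ETHERTYPE = 0x8906  # IEEE-assigned EtherType for FCoE

def fcoe_encapsulate(fc_frame: bytes, src_mac: bytes, dst_mac: bytes) -> bytes:
    """Wrap a raw Fibre Channel frame in a simplified FCoE Ethernet frame.

    Layout loosely follows FC-BB-5: 14-byte Ethernet header, 14-byte FCoE
    header (version + reserved + SOF), the encapsulated FC frame, then an
    EOF byte plus reserved padding. SOF/EOF values here are placeholders.
    """
    eth_header = dst_mac + src_mac + struct.pack("!H", FCOE_ETHERTYPE)
    fcoe_header = bytes(13) + bytes([0x2E])  # version/reserved zeroed, placeholder SOF
    trailer = bytes([0x41]) + bytes(3)       # placeholder EOF, then reserved bytes
    return eth_header + fcoe_header + fc_frame + trailer

# A dummy FC frame: 24-byte FC header plus a small payload
wire_frame = fcoe_encapsulate(bytes(36), b"\xaa" * 6, b"\xbb" * 6)
print(len(wire_frame), "bytes on the wire")  # 14 + 14 + 36 + 4 = 68
```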

According to Dell’Oro Group, FCoE realized $94 million in revenue in the second quarter of 2011, on a shipment of 210,000 ports. The research firm expects 930,000 ports to ship this year, accounting for $422 million in revenue – or about 7% of 10G Ethernet revenue.
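Dell'Oro's figures imply a per-port price and an overall 10G Ethernet market size that are easy to sanity-check; the arithmetic below uses only the numbers cited above.

```python
# Back-of-the-envelope arithmetic from the Dell'Oro figures above
q2_revenue, q2_ports = 94e6, 210_000    # Q2 2011
yr_revenue, yr_ports = 422e6, 930_000   # 2011 full-year forecast

print(f"Q2 2011 average FCoE price: ${q2_revenue / q2_ports:,.0f} per port")  # ~$448
print(f"2011 forecast average:      ${yr_revenue / yr_ports:,.0f} per port")  # ~$454

# If $422M is about 7% of 10G Ethernet revenue, the implied 10G market is:
print(f"Implied 2011 10G Ethernet revenue: ${yr_revenue / 0.07 / 1e9:.1f}B")  # ~$6.0B
```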

But there are many standards and emerging standards to consider – as well as proprietary vendor schemes – when evaluating a converged I/O infrastructure throughout your data centre. Currently, those standards are in place for FCoE and DCB at the blade server and access switch level, and FCoE is largely a free technology feature of 10Gbps Ethernet switches and converged network adapters.

But standards and methods for extending converged I/O from the server rack and access switch – where it is now taking hold – through the core of the data centre network are still percolating. Indeed, some of these standards are competing to become the de facto technique for enabling multipath networking and multihop FCoE capabilities.

Xsigo is a maker of virtual and converged I/O infrastructure products – namely, its I/O Director and Server Fabric platforms. In 2010, privately held Xsigo more than tripled its revenue from the year before, a sign of healthy demand for converged data centre I/O gear.

“When servers ran one application per server and that application did not change, you’re typically only dealing with a few network connections per server, and pretty low utilization of those connections,” says Jon Toor, vice president of marketing for Xsigo. “When you virtualize a server you’re dealing with a lot more connections, a lot more workload and it creates a need for a different way of hooking things up.”

Xsigo, though, pitches its products as alternatives to deploying FCoE to achieve converged data centre I/O. Cisco, the market leader in FCoE switches, has been attempting to undermine Xsigo's strategy and cast doubt on the company's stability.

HP, which enjoys a 20% share of the FCoE blade switch market, says more than half of workloads will be virtualized by 2012, which puts additional strain on the access network.

“Customers are deploying six to eight Gigabit Ethernet connections because virtualization requires more bandwidth out of the servers,” says Kash Shaikh, director of marketing for HP Networking. “That amount of cabling blocks airflow. (With converged I/O) you can take that down to two 10G connections coming out of the server and into the first hop switch.”
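Shaikh's cabling arithmetic is easy to check; the sketch below takes the high end of his six-to-eight Gigabit Ethernet figure.

```python
# Per-server cabling before and after convergence, per Shaikh's example
gbe_links, gbe_speed = 8, 1       # high end of "six to eight" 1Gbps links
conv_links, conv_speed = 2, 10    # two converged 10G links

print(f"Before: {gbe_links} cables, {gbe_links * gbe_speed} Gbps aggregate")
print(f"After:  {conv_links} cables, {conv_links * conv_speed} Gbps aggregate")
print(f"Cabling cut {1 - conv_links / gbe_links:.0%}; bandwidth up "
      f"{(conv_links * conv_speed) / (gbe_links * gbe_speed):.1f}x")
```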

In addition to server virtualization, other converged I/O drivers are cloud deployments, infrastructure flexibility, an increasing amount of server-to-server traffic, and consolidation of I/O density with virtual machines, says Shaun Walsh, vice president of marketing with Emulex. But the march to converged I/O will be gated by the amortization cycles of IT shops and the question of who and what will manage the new infrastructure.

“They have existing infrastructures that they need to amortize over time to get the full value from them,” Walsh says, which is usually three to five years.

Management of the infrastructure, from both an operational and organizational perspective, will have to be considered carefully as well, he says. LAN and SAN teams now manage segregated data and storage networks, likely with different operating methods.

“The biggest challenge for organizations is not physical deployment, it’s the policy and management deployment,” Walsh says. “Sit down with the teams, make sure that they have a meeting of the minds on what the purpose is, why we’re doing it and who’s going to manage what segments of it.”

There can be some hesitation when it comes to combining traffic from currently isolated networks.

“What are the security implications of adding Fibre Channel to an IP-driven environment?” Walsh asks. “That’s always one of the big concerns storage administrators have expressed to us.”

Operationally, IT shops should have no less management capability than they had before converging, Walsh notes. They should have the same, or at least a familiar, set of tools to work with. It is, after all, Ethernet.

“If there’s any Achilles’ heel it would be the management tools,” Walsh says. “The only real risk I see to adoption is that the management tools mature to the same level that IT managers have today. But I don’t see anything that’s going to disrupt this.”

Nonetheless, the savings seem compelling: Customers are saving 30% to 50% in capital expenditures, 50% to 60% in blades and cooling, and 70% to 80% in cabling, Walsh says. And HP says two FCoE-enabled blades can replace up to 217 separate piece parts – Ethernet network interface cards, Fibre Channel host bus adapters and the like – required with rack-mount servers.
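Applied to a notional deployment, Walsh's percentage ranges translate as follows; the dollar baselines below are invented purely for illustration, and only the percentage ranges come from Walsh.

```python
# Walsh's quoted savings ranges applied to an invented cost baseline
baseline = {"capital equipment": 500_000, "blades and cooling": 200_000, "cabling": 50_000}
ranges = {"capital equipment": (0.30, 0.50), "blades and cooling": (0.50, 0.60), "cabling": (0.70, 0.80)}

for item, cost in baseline.items():
    lo, hi = ranges[item]
    print(f"{item}: ${cost * lo:,.0f} to ${cost * hi:,.0f} saved on ${cost:,}")
```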

But that still may not be enough to sway the masses.

“The one thing ROI doesn’t measure is stability,” Walsh says. “IT guys put stability very, very high on their list of things. They’ve got to have another reason to move to take that stability risk. That’s why these other external factors drive it more than the core ROI factors.”

Cisco says stability is intrinsic to FCoE, and that's why about one-third of the company's Nexus 5000 data centre switches are deployed with active FCoE licenses. The Nexus 5000 racked up 50% of the FCoE ports shipped in the second quarter, according to Dell'Oro, and Cisco also has more than 7,000 customers for its Unified Computing System (UCS), to which FCoE is integral.

“Operationally, it is consistent with Fibre Channel,” says Omar Sultan, senior manager of data centre architecture at Cisco. “And at the end of the day it’s Fibre Channel. It’s just a different transport. For the installed Fibre Channel base, it’s not a huge leap. The things they’re used to working with continue to work.”

Perhaps, but for some Fibre Channel users it could be a huge leap, according to Fibre Channel market leader Brocade. Every customer is different and some may find the cost savings elusive.

“The economics are not always there today,” says Doug Ingraham, vice president of product management at Brocade. “Buying separate Fibre Channel and Ethernet connections can be less expensive than the 10G Ethernet connections we use for FCoE and Data Centre Bridging. But that varies by customer.”

Ingraham acknowledges that the savings benefits of FCoE will be realized over time at the server and access switch layer. Deeper into the data centre network, toward the core, however, the benefits may not accrue.

“If you’re going past the top-of-rack or the first hop of the network then you start getting into, will it really make sense to converge your data and storage traffic across the same network?” Ingraham asks. “Because you’re starting to pump lots of traffic across aggregated nodes that are oftentimes more expensive on a price/port than if you were to just keep your data and storage networks separate, regardless of technology.”

HP concurs that FCoE toward the core of the data centre network does not make sense.

“At this time, we believe the maximum benefits are at the access layer because this is where the cabling is,” Shaikh says. “As you go deeper into the network there is not as much cabling – the switch ports reduce because you continue to aggregate. Some of the proprietary implementations of core convergence still require an Ethernet switch and a SAN director. So I really question some of the cost savings there.”

And keeping data and storage networks separate beyond the access layer actually eases management, Ingraham claims, and may reduce capital equipment costs by requiring fewer aggregation points between the two networks. It may ease organizational and operational stress as well.

“The thing with FCoE and converged I/O overall is, it’s not just technology,” Ingraham says. “There’s a lot of other things that customers have to be cognizant of: There’s organizational structures – different networking and storage teams, how do they bring those together; how do they change their operating policies and procedures – now you have one switch doing both. Who owns that now? Who has the rights to management? Who has the rights to make changes? These are a lot of times more important questions than can we do this technically.”

Cisco says FCoE is actually responding to those organizational shifts.

“The changes were happening in the first place,” says Arnab Basu, UCS product manager at Cisco. “The organizations were having to break silos and integrate at an unprecedented level, even without FCoE. FCoE just enables or helps in that transition.”

“From a day-to-day perspective, they pretty much did what they did before using the tools they did before,” Sultan says, adding that the network people most times end up owning the FCoE switch.

The Cisco officials say they have not heard of any customers experiencing significant hurdles or challenges in implementing FCoE from an organizational, operational or technical perspective. Brocade, meanwhile, is seeing virtually no demand for FCoE or converged I/O from its installed Fibre Channel base, Ingraham says.

“It’s really the economics,” he says. “It’s new technology, and the storage networking side of the business is particularly risk averse due to the criticality of those networks and the effects when they go down. New technology adoption there has to be proven.”

But the point is not FCoE, Cisco says; the point is the benefits of converged I/O.

“We don’t need to drive FCoE adoption specifically as much as we’re trying to get our customers to see the value of converged infrastructure because we think that will pay off for them in the long run – regardless of how they end up doing it,” says Cisco’s Sultan.
