Changing of the guard

Networking has become an inhibitor in the data centre. Whereas server and storage technologies have progressed steadily in the last decade, networking has remained largely the same. But the growing discrepancy has begun to tell, putting networking under the spotlight and triggering an industry response.

Last November, Cisco Systems launched its broad-reaching Application Centric Infrastructure. VMware and others are promoting network virtualisation that places a network overlay above the physical network to simplify server communications. A third approach, software-defined networking (SDN) using the OpenFlow open standard, is being pursued by the likes of HP.  

‘What you are seeing is a wave of catching up,’ says Marten Terpstra, director of product marketing at Plexxi.

Traditionally, applications have resided on dedicated ‘bare metal’ servers, but server utilisation has been low – typically 10 to 20 per cent. Virtualisation, using hypervisor software, splits a server’s processing into time slots to support tens of virtual machines, each with its own application and operating system, boosting utilisation to 70 per cent.

Virtualisation has been adopted to boost server performance, but its use has placed new demands on data centre networking.

The introduction of server virtualisation has given rise to virtual machine mobility and the automation of workloads, and this is where networking becomes the laggard. Setting up server links requires work orders to be filled out and the involvement of IT networking staff, and can take weeks, whereas virtual machines can be configured on servers in minutes.

‘It is holding you back from getting all the benefits of server virtualisation,’ says Houman Modarres, senior director product marketing at Nuage Networks. ‘Connectivity is configuration-driven, which means it is operationally complex, to the point that you want to do it less often.’

Responding to demand
In turn, newer data centre applications – cloud, high-performance computing, desktop virtualisation, and virtual machine migration – are changing the nature of the traffic flow in the data centre. Traffic has traditionally flowed up and down tiered switching, comprising top-of-rack, aggregation, and core switches. The newer applications generate significant traffic between servers and between virtual machines, requiring flatter, less tiered architectures to accommodate the greater horizontal ‘east-west’ traffic.

‘These workloads are not predictable; as an administrator, you can’t figure out how much east-west traffic there is,’ says Arpit Joshipura, vice president, product management and marketing at Dell Networking. ‘What you want is an agile network to respond to these workloads in real time and without human intervention.’  

It is not only virtualised servers that generate east-west traffic. The large Web 2.0 companies – Facebook, Google and the like with their hyperscale data centres – use dedicated servers that generate considerable horizontal traffic.

Ultimately, what customers want is for a virtual machine to be able to talk to other virtual machines, whether they are on the same server, on different servers in a data centre, or on servers in different data centres.

Segregating and managing the increasing number of workloads in the data centre using traditional layer 2 networking mechanisms such as virtual LANs (VLANs) has created management and scale issues. VLANs are widely used but are limited to 4,096 per domain. The need to manage and provision multiple customers’ workloads and virtual machines across many domains has become burdensome.  
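The ceiling comes from the 12-bit VLAN ID field in the IEEE 802.1Q tag, whereas overlay protocols such as VXLAN widen the segment identifier to 24 bits. The arithmetic, as a quick illustrative sketch in Python:

    # The 802.1Q tag carries a 12-bit VLAN ID; VXLAN's segment ID (the VNI)
    # is 24 bits, which is where the scale difference comes from.
    VLAN_ID_BITS = 12
    VXLAN_VNI_BITS = 24

    print(2 ** VLAN_ID_BITS)    # 4096 VLANs per domain
    print(2 ** VXLAN_VNI_BITS)  # 16777216 (~16 million) overlay segments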

Embracing new approaches
The industry has been looking elsewhere to tackle the communications challenges, and this has led to interest in SDN. Initially associated with OpenFlow, SDN has come to be viewed more as an architectural framework than a technology.

Switches and routers in traditional networks communicate among themselves to determine the required links. SDN does away with such distributed control, favouring a centralised view and a decoupling of the control and data planes. The result is a software component – comprising one or more programs – that sits outside the network and controls a portion of it. ‘This software entity controls how the network behaves, how it is provisioned, and how it forwards traffic from one part of the network to another,’ says Terpstra.
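Reduced to a minimal sketch, the split looks like this: switches hold only match-action forwarding rules, while a central controller computes and installs them. The names below are illustrative, not any vendor’s actual API:

    # Minimal sketch of SDN's control/data plane split. A central controller
    # computes forwarding rules; switches just match packets against them.
    class Switch:
        def __init__(self):
            self.flow_table = []  # (match_fields, action) pairs: data plane state

        def install_rule(self, match, action):
            self.flow_table.append((match, action))

        def forward(self, packet):
            for match, action in self.flow_table:
                if all(packet.get(k) == v for k, v in match.items()):
                    return action
            return "send_to_controller"  # unknown flow: ask the control plane

    class Controller:
        """Central control plane: holds the network-wide view."""
        def __init__(self, switches):
            self.switches = switches

        def provision_path(self, dst_ip, ports_by_switch):
            # Push one rule per hop; no switch-to-switch protocol involved.
            for name, port in ports_by_switch.items():
                self.switches[name].install_rule({"dst_ip": dst_ip},
                                                 f"output:{port}")

    switches = {"tor1": Switch(), "core1": Switch()}
    ctrl = Controller(switches)
    ctrl.provision_path("10.0.0.5", {"tor1": 2, "core1": 7})
    print(switches["tor1"].forward({"dst_ip": "10.0.0.5"}))  # output:2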

Network virtualisation is one approach that has emerged to address the data centre’s networking challenges. It embraces SDN’s software entity concept, using a controller to oversee the network. The controller knows the connectivity, and effects change using an overlay network on top of the physical network.

‘You provide an abstraction layer and you do networking on top of that layer rather than do things to touch the hardware natively,’ says Chris King, vice president, product marketing, networking and security business unit at VMware. ‘The implication of that is that you have to faithfully reproduce the entire network in that abstraction, so that the applications riding on top of it are unaware they are not touching the hardware.’  

Network virtualisation simplifies and speeds up network provisioning. Once the physical network is set up, the overlay takes care of connecting the data centre resources, enables automation and reduces provisioning times. The abstraction layer, by decoupling from the underlying physical network, promises a further benefit. ‘It means I refresh my hardware when I need to refresh my hardware, not when I need new features,’ says King. ‘That is truly disruptive.’

VMware with its NSX network virtualisation and security platform is working with several switch vendors including Arista Networks, Brocade, Cumulus, Dell, HP, and Juniper. Another proponent of network virtualisation is Nuage Networks, Alcatel-Lucent’s spin-in company, with its Virtualised Services Platform software.

Combining networks seamlessly
Cisco Systems has adopted a different philosophy for tackling network automation and scale. Its Application Centric Infrastructure (ACI) is being launched in two stages. The first, a standalone phase, equips its Nexus switches with an interface to enable control using such tools as OpenFlow and OpenStack. OpenStack is an open-source cloud computing platform that controls servers, storage, and networking resources.

The second phase implements the full vision of ACI. Here the platforms will implement Cisco’s fabric mode once a software upgrade is issued. Cisco has developed a custom ASIC to implement what it calls a hierarchical policy model, with the policy layer sitting above the control plane. For this, Cisco has eschewed SDN’s centralised control, implementing ACI’s networking as a single system based on a distributed control plane that resides on all the platforms.
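Cisco has not detailed its object model here, but the flavour of a policy layer can be sketched hypothetically: endpoints are grouped, and contracts between groups, rather than per-device configuration, determine who may talk to whom. All names below are illustrative, not Cisco’s actual model:

    # Illustrative-only sketch of a hierarchical policy model: groups of
    # endpoints, plus contracts defining allowed communication between groups.
    policy = {
        "groups": {
            "web": ["vm-web-1", "vm-web-2"],
            "app": ["vm-app-1"],
            "db":  ["vm-db-1"],
        },
        # contracts: (consumer group, provider group, allowed port)
        "contracts": [
            ("web", "app", 8080),
            ("app", "db", 5432),
        ],
    }

    def allowed(src_vm, dst_vm, port):
        """Enforcement answers from group-level policy, not per-VM ACLs."""
        group_of = {vm: g for g, vms in policy["groups"].items() for vm in vms}
        return (group_of.get(src_vm), group_of.get(dst_vm), port) in set(policy["contracts"])

    print(allowed("vm-web-1", "vm-app-1", 8080))  # True
    print(allowed("vm-web-1", "vm-db-1", 5432))   # False: web may not reach db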

‘Our approach with ACI is how can we bring the overlay and the physical network together to work in a seamless way,’ says Greg Page, technical solutions architect, data centre and virtualisation, EMEAR at Cisco. ‘How can we conjoin them to get the benefits of overlay – speed and flexibility of deployment – but also get the assurance of the underlay network: performance, speed, packets-per-second processing, security, and scale.’  

These underlay-network benefits result from using custom silicon, says Page. An ASIC delivers a tenfold packet-processing performance advantage over software running on a server’s general-purpose processor, and a twentyfold advantage in power efficiency. Cisco’s ASIC performs the packet processing to enforce policy that determines which applications and which virtual machines can talk to each other, and how.

Having visibility into the physical network brings operational benefits, says Page: ‘If I’ve deployed an overlay and I am losing packets, I don’t know where the problem is because I can’t correlate between the physical and the virtual.’  

A third approach being pursued to address networking’s shortfalls is SDN with the OpenFlow protocol used to control the data plane. Cisco’s standalone mode supports this, as do other vendors’ platforms. HP, which also supports network virtualisation, is a proponent of SDN and OpenFlow.

Supporting three approaches
Dell claims its solution supports network virtualisation, SDN and legacy/vendor-specific systems, and that all three will play a role in the data centre. ‘We are the only ones making sure that all three camps can be migrated to SDN,’ says Joshipura. ‘Customers should not be required to choose an approach.’

‘The market is not going to go from a fully distributed routing control plane that has been built out to internet scale for the past decade to purely a controller model,’ adds Martin McNealis, senior director, EOS, cloud services and technical support at Arista Networks. ‘So where the market has got to is a hybrid model: for certain applications or virtualised instances you want a programmable controller, but for a broad set of traffic and users, you may want to use existing mechanisms.’

Meanwhile, all the equipment makers continue to advance the underlying physical network. Brocade, whose platforms support VMware’s NSX, points out that the overlay approach does not necessarily address every requirement in the best way. Take managing multiple workloads: ‘Is that best addressed by a quite complex overlay technology or a relatively simple underlay technique that is built into the infrastructure you have invested in?’ says Nick Williams, senior product manager, EMEA, data centre IP at Brocade. The company’s virtual fabric uses protocols and hardware that extend the VLAN limit to up to 16 million connections, giving its customers the choice of using its network implementation or network virtualisation.

Another vendor, Plexxi, has developed a switch architecture that adds optical networking to its switches to complement layer 2 and layer 3 networking. ‘You want to have a network that understands what the overlay network looks like,’ says Terpstra, describing Plexxi’s approach as an ‘underlay that is overlay-aware’.

Plexxi says applications have certain patterns: some require low latency, others high bandwidth. Plexxi’s system extracts information from OpenStack or from the network virtualisation platform, if one is used. Using information about the applications and where they reside, it can calculate the likely points in the network where high bandwidth and low latency are needed. ‘We can look at what is the network you need and we will adjust how the network is constructed; where those wavelengths go from one switch to another based on need,’ says Terpstra.
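Plexxi does not disclose its algorithms, but the idea can be sketched hypothetically: aggregate expected traffic per switch pair from where applications are placed, then give the heaviest pairs direct optical capacity. Everything below is an illustrative assumption, not Plexxi’s actual method:

    # Hypothetical sketch of 'overlay-aware' capacity planning: rank switch
    # pairs by expected traffic and assign direct wavelengths to the busiest.
    from collections import Counter
    from itertools import combinations

    # application -> [(switch hosting it, expected Gbit/s exchanged with peers)]
    placements = {
        "hadoop": [("sw1", 40), ("sw2", 40), ("sw3", 10)],
        "web":    [("sw1", 5), ("sw4", 5)],
    }

    demand = Counter()
    for app, nodes in placements.items():
        for (a, ra), (b, rb) in combinations(nodes, 2):
            demand[tuple(sorted((a, b)))] += min(ra, rb)

    WAVELENGTHS = 2  # direct optical links available to assign
    for pair, gbps in demand.most_common(WAVELENGTHS):
        print(f"assign wavelength {pair[0]}<->{pair[1]} (~{gbps} Gbit/s expected)")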

The fact that the industry is responding to the networking challenge is no bad thing, argues Brad Casemore, research director, data centre networks at market research firm IDC. ‘What we are seeing is a real variation of needs and requirements across the customer spectrum,’ he says.

The hyperscale data centre operators run networks to confer competitive advantage. This has led them to explore commodity ‘white-box’ switches and SDN, and to a degree network virtualisation. ‘All want a flexible, adaptable networking model that closely conjoins to the application workloads,’ says Casemore. ‘These hyperscale players have the impetus and the skills to do things differently.’

In contrast, cloud service providers need isolated workloads for their multiple tenants, using virtualised servers. They want networking that supports their business models, and network virtualisation technology is a natural fit.

Enterprises form a much broader spectrum, but can be split between large players and the rest. Large enterprises, such as financial services players, have IT requirements spread across multiple data centres; as such, they share the characteristics of the hyperscale internet players. But the remaining enterprises, for which IT is a critical support rather than the primary business, have IT staff with more limited skill sets. And it is this broader enterprise segment that Cisco believes it can largely retain with its ACI model, says Casemore.

The industry consensus is that while the market will shape which of the main approaches will succeed in the coming years, the concept of SDN and the controller is here to stay. Another such development, not strictly related to SDN, is the separation of hardware and software, dubbed network disaggregation.

Will these two trends change the relative roles of hardware and software? Opinion is mixed. Cisco has demonstrated it will use merchant switch-silicon from Broadcom where it makes sense from a cost-performance standpoint, while using its own silicon to add competitive advantage. Yet Plexxi, despite its optical networking scheme, believes software will become the key differentiator.

‘Hardware is just a tool,’ says Terpstra. ‘Our differentiator is not so much the hardware, but our maths algorithms’ – the algorithms that take application data from the management layer and calculate the best optical-layer configuration.



Network virtualisation

Network virtualisation is still a new technology, with deployments only beginning in the last year. ‘Network virtualisation today is where server virtualisation was three or four years ago,’ says Nuage’s Modarres.

‘It [network virtualisation] maintains the same virtual environment that exists in servers so that it can be projected across the network to other servers,’ says Nick Ilyadis, CTO infrastructure and networking group at Broadcom.

Broadcom makes the StrataXGS Trident II Ethernet switch chip family, which has hardware support for network virtualisation and has been adopted by several switch vendors, including Cisco.

Several elements are used for network virtualisation: the management or policy software, the controller, overlay protocols, and the hypervisor’s virtual switch. The virtual switch connects virtual machines to the network, while the controller maintains the configuration state of the network.

‘Layer 2 and Layer 3 connectivity, and Layer 4 through 7 services associated with the workloads; all that is maintained in the controller,’ says VMware’s King. The controller also implements the changes when connectivity is required. Meanwhile, the management engine sets policy: the rules associated with workloads, which the controller enacts.

An overlay establishes the connectivity between hypervisors. Three overlay protocols are commonly used to create tunnels across the network: Virtual Extensible LAN (VXLAN), Network Virtualization using Generic Routing Encapsulation (NVGRE), and Stateless Transport Tunnelling (STT). In addition, some vendors also have proprietary overlay schemes.
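VXLAN, for instance, prepends an eight-byte header carrying a 24-bit VXLAN Network Identifier (VNI) and tunnels the original frame in UDP (destination port 4789). A sketch of the header layout, following RFC 7348:

    # Building the 8-byte VXLAN header (RFC 7348): a flags byte with the
    # I-bit set to mark a valid VNI, reserved fields, and the 24-bit VNI.
    # The encapsulated frame then travels in UDP (destination port 4789).
    import struct

    def vxlan_header(vni: int) -> bytes:
        assert 0 <= vni < 2 ** 24, "VNI is a 24-bit identifier"
        flags = 0x08  # I flag: VNI field is valid
        # 1 byte flags, 3 reserved bytes, 3-byte VNI, 1 reserved byte
        return struct.pack("!B3s3sB", flags, b"\x00" * 3,
                           vni.to_bytes(3, "big"), 0)

    hdr = vxlan_header(5001)
    print(hdr.hex())  # 0800000000138900
    print(len(hdr))   # 8 bytes, prepended to the tenant's Ethernet frame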

‘It [network virtualisation] is agnostic to the underlying physical network,’ says IDC’s Casemore. ‘The only thing it uses that networking for is simple IP forwarding.’ That works fine when the IT environment is virtualised. But to accommodate workloads on dedicated servers, a virtual tunnel end point (VTEP) is needed. The VTEP is a gateway between the physical and virtual worlds, and can translate between the different overlay protocols.
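In essence, a VTEP needs a table mapping a segment and a tenant MAC address to the IP address of the remote VTEP behind which that machine sits. A minimal sketch, with illustrative addresses:

    # Hypothetical sketch of a VTEP's forwarding state: map (segment, MAC)
    # to the remote VTEP's IP, then encapsulate towards that IP.
    vtep_table = {
        # (VNI, tenant MAC)         : IP of the VTEP behind which it lives
        (5001, "52:54:00:aa:bb:01"): "192.0.2.11",
        (5001, "52:54:00:aa:bb:02"): "192.0.2.12",
    }

    def next_hop(vni, dst_mac):
        """Where should this overlay frame be tunnelled?"""
        remote = vtep_table.get((vni, dst_mac))
        return remote if remote else "flood"  # unknown MAC: replicate in the VNI

    print(next_hop(5001, "52:54:00:aa:bb:01"))  # 192.0.2.11
    print(next_hop(5001, "52:54:00:ff:ff:ff"))  # flood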

‘Network virtualisation gives the data centre operator a very easy handle by which to manage the traffic,’ says Ilyadis. The overlay separates customer traffic and uses identifiers that define how traffic is treated. ‘You don’t have to provide traffic management, policing and service assurance by looking at individual MAC addresses or IP packets,’ says Ilyadis.

The biggest benefit of network virtualisation is that it simplifies the creation of virtual machines and their connectivity, enabling workloads to be expanded easily. ‘All the attributes of that network now exist, as opposed to having to recreate them manually,’ says Ilyadis. ‘It sounds easy but when you have thousands of customers, automating a particular session buys you a lot of leverage.’
