It’s getting ever harder to remember what the world was like before Google and Facebook. Not only do these companies, which many people still think of as simply websites, play big roles in our daily lives; they also shape the physical structure of the Internet itself. And in doing so, they present both opportunities and threats to the rest of the networking industry.
Spending on telecommunications networks has long been dominated by the need to carry information over long distances. Such systems are high performance, but sold in low volumes, explained Daryl Inniss, practice leader for components at telecom research company Ovum. However, today ‘Internet content providers’ (ICPs) are spending ever more on shorter-range data centre technology, packed with connections.
‘They are pushing to get size, power consumption, cost and density in a timely manner,’ Inniss explained. ‘Faceplate density is very important, to be able to put as many transceivers or optical interconnects as possible in the front plate of a piece of equipment.’
The Internet companies’ direct influence first became notable around 2008, when they began deviating from the normal telecoms model of only buying gear from system providers. They wanted cheap and quick access to 8 Gb/s Fibre Channel transceivers, Inniss recalled. ‘Companies like Google would go directly to transceiver vendors like Finisar and JDSU, as opposed to going through the middleman,’ he said. ‘They’re looking for greater control over the ecosystem.’
These efforts at ‘disintermediation’ by Internet companies remain focused on transceivers today. They source components across all data rates, with most revenue concentrated at 10 Gb/s and 40 Gb/s, according to Ovum. Far from being simply a message to system vendors to move faster, this direct purchasing is a concrete and significant industry trend.
‘On a quarterly basis we’re talking about Internet content providers buying around $250 million worth of components directly,’ said Inniss. ‘That’s about 12 per cent of the total optical component market. This is not a game – this is a real model of how the market is behaving today and will likely continue for some time.’
Beyond the data centre
In the last 2-3 years, Internet companies’ capital expenditure has also become significant in the far larger systems market, said Matt Walker, principal analyst covering intelligent networks at Ovum. ‘Components were followed by data centre interconnect applications of optical transmission products, in some cases using customised product designs,’ Walker noted. ‘2012 was probably the watershed year, when ICP capex grew to $41 billion, from $32 billion in 2011.’
This rising expenditure is narrowing the gap in capital intensity – the amount of capital expenditure per dollar of revenue – between Internet firms and telecoms companies. The latter spend 17-18 per cent of their revenues on capital equipment on average globally, whereas the former were not very capital-intensive until recently. Across the sector, Internet companies’ capital intensity reached almost 6 per cent in 2014.
‘Google and Apple are the biggest spenders, together accounting for 36 per cent of total ICP capex,’ said Walker. ‘Google and Facebook spend around 15 per cent of their revenues on capex. The steady rise in capital intensity since 2012 makes clear that the group’s capex rise is not just because of their faster growth in revenue terms, but also because of a shift towards business models that rely more heavily on network infrastructure investments.’
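To make the metric concrete, here is a minimal sketch of how capital intensity is calculated. The revenue and capex figures are hypothetical placeholders chosen only to line up with the percentages quoted above; they are not Ovum data.

```python
# Illustrative only: capital intensity = capex / revenue.
# The dollar figures below are hypothetical placeholders, not Ovum data;
# only the ~15% and 17-18% benchmarks come from the article.

def capital_intensity(capex: float, revenue: float) -> float:
    """Return capital expenditure as a share of revenue."""
    return capex / revenue

# Hypothetical ICP: $66bn revenue, $10bn capex -> roughly 15 per cent,
# in line with the figure quoted for Google and Facebook.
print(f"ICP example:   {capital_intensity(10e9, 66e9):.1%}")

# Hypothetical telco: $100bn revenue, $17.5bn capex -> 17.5 per cent,
# matching the 17-18 per cent average cited for telecoms companies.
print(f"Telco example: {capital_intensity(17.5e9, 100e9):.1%}")
```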
While these expanding investments are reaching beyond data centres, they generally continue the theme, funding the wide area networks that connect those facilities. However, Google is also pioneering efforts to provide last-mile access, with fibre roll-outs in select US cities. ‘Facebook and others are exploring other avenues to improve connectivity, especially in the developing world, such as Facebook’s Internet.org work, which looks at unconventional options like drones and balloons,’ added Walker.
Deals to ‘like’?
Supporting the upsurge in demand at the component level creates challenges along with opportunities, Inniss explained. He cited the example of Applied Optoelectronics (AOI), which was shipping transceivers to Google for a WDM-PON product supporting the search giant’s gigabit fibre-to-the-home plans. Speculation suggested significant revenues would follow, but the orders did not emerge as planned. AOI consequently had to announce that the expected business had not materialised, and its share price declined.
‘The volume demanded can be substantial from just one customer – perhaps a million transceivers in a year,’ Inniss said. ‘The component vendor may need to expand their capacity. They have to ask themselves whether they’re the only vendor, are they going to win this contract? Can they trust this customer? Some vendors go gung-ho, and they’re very successful. There are others where the market slows down and it’s not clear whether the Internet companies did not intend to buy the volumes that they initially projected, or something changed in the market. But in either case, the end result is that the component vendor is stuck in a difficult position.’
Another way that Internet firms wield their power to satisfy their technical needs is by trying to influence product formats. Inniss highlighted Google’s attempt to ‘push for a 100 Gb/s solution in a hurry’ in 2010, by proposing a multi-source agreement (MSA) based on 10 x 10 Gb/s channels. ‘Finisar said 4 x 25 Gb/s is the way to go, even though it’s hard and expensive,’ he recalled. ‘They were right. The market is already looking to 400 Gb/s, which they definitely need 25 Gb/s channels to get to. All the fighting was lost time.’ Five years later, Inniss added, only one vendor has made money out of the format: Santur, now owned by NeoPhotonics. So by trying to support Internet content providers in the short term, component vendors risk sacrificing long-term business.
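The lane arithmetic behind that argument is straightforward; the back-of-the-envelope sketch below is purely illustrative rather than drawn from either MSA, and simply shows how many parallel channels each per-lane rate implies at 100 Gb/s and 400 Gb/s.

```python
# Back-of-the-envelope lane counts for a given aggregate interface rate.
# Real modules also differ in modulation, FEC and reach; this only shows
# why 25 Gb/s lanes scale more gracefully towards 400 Gb/s than 10 Gb/s lanes.

def lanes_needed(aggregate_gbps: int, lane_gbps: int) -> int:
    """Number of parallel lanes needed to reach an aggregate data rate."""
    return -(-aggregate_gbps // lane_gbps)  # ceiling division

for aggregate in (100, 400):
    for lane in (10, 25):
        print(f"{aggregate} Gb/s with {lane} Gb/s lanes: "
              f"{lanes_needed(aggregate, lane)} lanes")
```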
Such efforts reflect the fact that networking systems still don’t optimally support the shorter 500m-2km distance range preferred by companies like Google. ‘The IEEE standard goes to 10km, and that solution is too expensive,’ said Inniss. ‘The 10 x 10 MSA really is not suitable either.’ In 2014, four new MSAs emerged to better cover the shorter distance range (see Fibre Systems Autumn 2014, page 28). Which MSA to get involved with is an important strategic decision for component vendors. ‘The situation is still a bit unstable, so some decide that they have to play along with them all,’ Inniss explained. ‘Others pick and choose and go for what they think is going to be the big volume.’
Although Internet firms are starting to participate in IEEE standards development work, their control over what happens in their vast data centres means they don’t need to. ‘These companies can drive a unique solution given that they are spending such a large amount,’ Inniss emphasised. ‘For example, Microsoft is known to favour singlemode-type solutions within the data centre and they’ve been vocal about that for quite some time. That’s not necessarily the position everyone else is taking.’
Opening a six-pack
Another reason that approaches have not yet been fully standardised is rising data rates, Inniss added. ‘As the data rate changes, the infrastructure within the data centre may have to change. For example, as you go faster, the distance that’s supported by a multimode fibre tends to decrease. You don’t want to have to rip out the fibre infrastructure every time the data rate increases. Different companies have different philosophies on how they’re supporting these changes.’
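As a rough illustration of the reach trade-off Inniss describes, the sketch below tabulates nominal reach figures commonly cited for IEEE short-reach interfaces over OM3 and OM4 multimode fibre. The values are indicative assumptions for illustration and should be checked against the relevant standards.

```python
# Nominal multimode reach shrinks as the per-lane data rate rises.
# Values are commonly cited figures for IEEE short-reach (-SR) interfaces;
# treat them as indicative rather than as a substitute for the standards.

REACH_M = {
    # (interface, per-lane Gb/s): {fibre type: nominal reach in metres}
    ("10GBASE-SR", 10):   {"OM3": 300, "OM4": 400},
    ("100GBASE-SR4", 25): {"OM3": 70,  "OM4": 100},
}

for (interface, lane_rate), reach in REACH_M.items():
    spans = ", ".join(f"{fibre}: {metres} m" for fibre, metres in reach.items())
    print(f"{interface} ({lane_rate} Gb/s per lane) -> {spans}")
```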
For example, Facebook is supporting an open source approach called the Open Compute Project, to which it contributed a ‘6-pack’ switch that some see as a competitor to Cisco designs. ‘The Open Compute Project is a move to have a common platform, hardware, electronics, software to simplify data centre equipment,’ said Inniss. ‘It’s like when AT&T would say “We’re going to need this technology to move telecommunications forward”. They would start a project, companies would come, and help set specifications and standards. Facebook is creating an environment where a number of suppliers can put layers into the platform. They can mix and match options so they can end up with low cost, highly modular solutions. They hope that lots of service providers and vendors participate.’
In demanding shorter-distance technology, Internet companies are drawing suppliers’ attention away from conventional telecom companies, which Ovum calls communications service providers, or CSPs. The two groups frequently compete in providing cloud-based services, yet they must often work together in ways more typical of the broader technology industry. With Internet companies’ revenues growing at a compound annual growth rate (CAGR) of 10.6 per cent from 2008 to 2013, and set to moderate only slightly, Walker underlined the telcos’ need to co-operate with them.
‘ICPs like Microsoft and Apple are important partners to CSPs in various forms – Microsoft’s Office 365, for instance, is developing a network of partnerships with CSPs to push penetration,’ said Walker. ‘The tech industry has always had complex relationships among the big players. The CSP market, until recently, has been more stable and a step removed from this complexity. The CSPs now have to deal with this more directly, and one adaptation is to get closer to their sometimes-rivals in the ICP camp through partnerships.’
This shift in the balance of power within the optical industry towards Internet companies also means that telecom-focused vendors are slowly becoming an ‘endangered species’, according to Ovum. ‘Increasingly we see vendors like Ciena, Alcatel-Lucent/Nuage Networks, Infinera and BTI designing solutions explicitly focused on the growing Internet content provider segment,’ said Walker. ‘We expect to see more signs of this shift this year.’