
Breaking the glass ceiling

The optical fibres that crisscross our globe are often described as having ‘infinite capacity’, but that’s not strictly correct. While optical fibres do represent an extremely efficient communications channel – a single optical fibre can carry more information than all of the wireless spectrum combined – their capacity is not truly unlimited.

The physical limits are not relevant to an individual consumer, but they are extremely pertinent in the core of the network, which carries the aggregated data from millions of users simultaneously. While operators and their equipment suppliers strive to maintain the illusion of ‘infinite capacity’ with uninterrupted service to their customers, in fact they are working hard behind the scenes to keep pace with spiralling data traffic growth.

Fibre capacity advanced by leaps and bounds during the 1990s and early 2000s, first by speeding up the baud rate – mainly thanks to better electronic circuitry – and then through the dramatic increases enabled by wavelength-division multiplexing. By populating the fibres with more and more wavelengths, each supporting another communications channel, operators were able to boost the overall transmission capacity of their networks by several orders of magnitude.

But now the advances have slowed. Once the current crop of research innovations has reached the market, advances in optical hardware could simply stop. That’s because there are fundamental physical laws restricting the amount of information that can be carried on a single optical fibre. In fact, it’s possible to put a number on how much capacity an optical fibre can support in theory, says Professor Andrew Ellis, from Aston University in the UK. ‘It’s something of the order of 100 terabits in total, if you use all of the tricks,’ he stated.

The leading-edge products from optical systems suppliers are only a factor of two or three away from those theoretical limits, and laboratory experiments are getting even closer. In March, for example, Nokia announced an optical transport system supporting 70Tb/s across both the C and L wavelength bands that will be ready to ship in 2017. Meanwhile, operators like BT are already installing multiple terabits on their hottest routes, according to Andrew Lord, head of optical research at BT.

Capacity on BT’s core network is growing at about 65 per cent annually, he notes, although globally the increase in network traffic is closer to 35 per cent. But even at 35 per cent annual growth, operators will require petabit capacities on their most congested routes in just over a decade, he says. Other carriers have different growth rates, but whichever numbers you pick, they all say fundamentally the same thing – substantially more capacity is needed surprisingly soon. And that’s a big problem. ‘In two years’ time they’ll be at the limit of current products; in about 10 years’ time they’ll be at the theoretical limits, assuming the demand keeps doubling every two years,’ said Professor Ellis. 

The impending ‘capacity crunch’ was the subject of a meeting at the Royal Society in London last year, co-organised by Ellis, where researchers met to discuss the urgency of the issue. Running out of capacity on a fibre is not a catastrophe – operators always have the option to install new fibres, but that approach doesn’t scale well in terms of cost. To keep down the cost per bit, both for the operator and the end customer, the cost of additional capacity also needs to reduce year on year, and that could require some fairly spectacular innovations.

‘It’s not a doomsday scenario, it’s more of a squeeze than a crunch, and when you get a squeeze, you’ve got to choose what to do with your budget,’ said Professor Ellis. ‘We can face the consequences of a scarce resource and have limitations on what we can do with that bandwidth and pay more. Or we can squeeze, squeeze, squeeze [more capacity from the fibre] and postpone the problem. The question is, what do people want to do?’

Old light, new tricks

In 1948, Claude Shannon published a seminal paper that set out the theory of reliable communication over noisy channels. He computed the theoretical maximum capacity of a communication channel for a given signal-to-noise ratio. At rates below the Shannon limit, a digital signal can be transmitted with an arbitrarily low error rate; above it, reliable transmission is impossible.
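For reference, the Shannon–Hartley theorem expresses that maximum error-free capacity C in terms of the channel bandwidth B and the signal-to-noise ratio S/N:

```latex
C = B \log_2\!\left(1 + \frac{S}{N}\right)
```

Because the dependence on signal-to-noise ratio is only logarithmic, doubling the capacity within a fixed bandwidth requires roughly squaring the signal-to-noise ratio – which is why simply turning up the launch power runs into trouble so quickly.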

Since the amount of noise is fixed, the transmission capacity in an optical fibre can be increased by raising the signal power, until the point where the light becomes so intense that nonlinear effects start to take over. ‘The first thing that happens is the refractive index increases slightly,’ Ellis explained. ‘This means light from one user starts changing the refractive index for another user. In other words, you’ve got crosstalk.’
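The intensity dependence Ellis describes is the optical Kerr effect: to a good approximation the refractive index grows linearly with the optical intensity I, with a coefficient that is tiny for silica but never exactly zero:

```latex
n(I) = n_0 + n_2 I, \qquad n_2 \sim 10^{-20}\ \mathrm{m^2/W}\ \text{for silica}
```

Over thousands of kilometres, with tens of channels confined to a core a few microns across, that small intensity-dependent term accumulates into the crosstalk he mentions.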

Although the nonlinear component of the refractive index is very small, it can be the dominant effect in optical fibres carrying 80 or more wavelengths. ‘Current [optical] systems are running at just about the same intensity as the sunlight just past Mercury. It’s not surprising that the material reacts to that amount of light,’ commented Ellis.

One of the ‘tricks’ to which Professor Ellis referred earlier would be to increase the nonlinear limit. ‘These nonlinearities are not random, their effect is known and can be calculated, and then compensated for,’ he explained. One such technique is called multi-channel digital back-propagation, and it requires the receiver to capture and jointly process the interacting channels – a super-channel receiver, if you will.
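As a rough sketch of the idea – not the Unloc implementation, and single-channel rather than multi-channel – digital back-propagation can be written as an inverse split-step solver for the fibre’s nonlinear Schrödinger equation. All names, parameters and sign conventions below are illustrative assumptions:

```python
import numpy as np

def back_propagate(rx_field, length_km, steps, beta2, gamma, fs):
    """Single-channel digital back-propagation sketch (illustrative only).

    rx_field : complex baseband samples of the received signal
    beta2    : group-velocity dispersion (s^2/km); gamma : nonlinearity (1/(W*km))
    fs       : sampling rate (Hz). Signs follow one common convention for the
               forward model; flip them if your convention differs.
    """
    dz = length_km / steps
    omega = 2 * np.pi * np.fft.fftfreq(len(rx_field), d=1 / fs)
    # Dispersion undone in the frequency domain, half a step at a time
    half_step = np.exp(-0.25j * beta2 * omega**2 * dz)
    for _ in range(steps):
        rx_field = np.fft.ifft(np.fft.fft(rx_field) * half_step)    # undo half the dispersion
        rx_field *= np.exp(-1j * gamma * np.abs(rx_field)**2 * dz)  # undo the nonlinear phase
        rx_field = np.fft.ifft(np.fft.fft(rx_field) * half_step)    # undo the other half
    return rx_field
```

Multi-channel back-propagation applies the same loop to the jointly captured super-channel, which is where the processing cost climbs steeply.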

Indeed, researchers from University College London (UCL) have already demonstrated this approach in the laboratory, using it to almost double the distance that a seven-carrier super-channel could be transmitted error-free, achieving a distance of 5,890km. The work was carried out as part of Unloc, a £4.8 million collaborative project between Aston University, UCL and industry partners, funded by the UK Engineering and Physical Sciences Research Council (EPSRC), that aims to find new approaches to ‘unlock’ the capacity of future optical communications systems.

‘Once we figure out how the channels interact with each other, we can devise new processing techniques that replicate this journey, but in the digital domain. The virtual digital journey is then carried out on a computer using some exciting digital processing techniques,’ explained Robert Maher, research associate at UCL’s Department of Electronic and Electrical Engineering.

Although the concept is straightforward, the processing is complicated. Professor Polina Bayvel’s team at UCL are trying to calculate how much energy the processor would require to implement this. ‘It’s not disastrous, but it is a factor of four or five higher than if you weren’t doing the nonlinearity compensation,’ according to Ellis.

Obviously there’s a trade-off in computational complexity, as he points out: ‘I don’t want to calculate the impact of the entire internet on the signal – there’s 100 terabits of it. That would get me perfect compensation of the nonlinearities, but it’s far too complicated. But I can calculate some [of the nearest neighbour channels] and get rid of most of the nonlinearities.’ Like trying to hear a conversation at a party, ejecting the loudest people in the room will improve reception, leaving just a background murmur.

To transpose this technique into the real world will require real-time processing on dedicated silicon, rather than on a desktop computer. ‘If you were willing to invest the money for making an application-specific integrated circuit, certainly you could do [the calculations] in real time,’ agreed Ellis. By doing this we should be able to double the capacity on the fibre, he adds.

This nonlinear capacity limit can be stretched even further by combining digital signal processing – this time using pre-compensation at the transmitter – with a frequency-stabilised light source. Researchers from the Photonic Systems Group at the University of California San Diego, led by Professor Stojan Radic, reasoned that they could not compensate for all of the nonlinear effects because of the frequency instability of the optical carriers. Using a frequency comb light source should therefore make all of the nonlinearities totally reversible. If the fundamental laser frequency changes, all of the ‘teeth’ in the comb move in step, so their relative frequency does not change and the nonlinear interaction between different channels is unaffected. 
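The reasoning can be put in one line: the comb lines sit at fixed multiples of the comb spacing f_r above a common seed frequency f_0, so a drift in f_0 moves every carrier together and leaves the channel separations – and hence the inter-channel mixing products – unchanged:

```latex
f_k = f_0 + k\, f_r \;\Rightarrow\; f_j - f_k = (j-k)\, f_r \quad \text{(independent of any drift in } f_0\text{)}
```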

The researchers went on to prove this in the laboratory. The experiment, in which they successfully recovered all the data from frequency-locked carriers transmitted over 12,000km, was reported in the June 2015 issue of the journal Science. ‘After increasing the power of the optical signals we sent by 20-fold, we could still restore the original information when we used frequency combs at the outset,’ said UC San Diego electrical engineering Ph.D. student Eduardo Temprana, the first author on the paper. Although they only used three and five channels in the experiment, the technique can in principle be extended to many more channels.

Looking longer term, other candidate techniques for compensating nonlinear impairments are being investigated. One of them, which could potentially improve fibre capacity by a factor of three – or 50 per cent more than the compensation techniques previously mentioned – is advanced pulse shaping at the transmitter. Nyquist pulse shaping is already used in state-of-the-art optical systems to squeeze channels as close together as possible in frequency without overlapping. Other types of pulse have unusual propagation properties. An example is the soliton – a bell-shaped pulse in which the dispersion perfectly balances the intensity-induced nonlinearity, so that the pulse maintains its shape as it travels over very long distances.
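In textbook form, the fundamental soliton of the lossless nonlinear Schrödinger equation makes that balance explicit: a sech-shaped envelope of peak power P₀ and width T₀ propagates unchanged apart from a uniform phase rotation, provided the power matches the fibre’s dispersion and nonlinearity:

```latex
A(z,T) = \sqrt{P_0}\,\operatorname{sech}\!\left(\frac{T}{T_0}\right) e^{\,i \gamma P_0 z / 2},
\qquad \gamma P_0 T_0^{2} = |\beta_2|
```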

‘The principle is that there are waveforms that propagate without distortion – or with very easy-to-compensate-for distortion – and that is one of the things we’re now looking at,’ explained Ellis. Although they’re much more complicated waveforms, they propagate in a very simple fashion. Using the nonlinear Fourier transform, a simple calculation at the receiver can get rid of the nonlinear effects, to extract the information. ‘It should be simpler to implement but it’s very, very new, so we haven’t yet worked out the most efficient way of doing that nonlinear Fourier transform,’ said Ellis. There have been no large-scale experiments to date, he notes.

It’s all done with mirrors

A third approach for capacity improvements – the most promising of the approaches according to Professor Ellis – is ‘optical phase conjugation’, which literally flips the signal over in frequency, so the distortions accumulating in the first half of the light’s journey along the fibre are undone as the light travels along the second half of the route.

The method is analogous to Newton’s prisms, where the first of two identical prisms spreads white light out into a rainbow, and the second inverted prism collects the colours back together again. Since researchers don’t know how to create an ‘inverted fibre’ to perform the reverse process, they use a special ‘mirror’, the optical phase conjugator, instead. ‘We’re mirroring in frequency, we make the reds blue and we make the blues red. What that means is the effects in the second half of the fibre are the opposite of the effects in the first half,’ he explained.

In reality the device is an optical waveguide or fibre with a very high nonlinear refractive index, plus a high-power laser that defines the point in frequency where the mirroring should occur. This phase conjugate mirror is placed right in the middle of the link – the two halves must be symmetrical or the method doesn’t work effectively. The second half of the link then compensates for the predictable impairments across all of the wavelengths. ‘There’s a little notch of the spectrum when you look at the detail that you can’t use, but apart from one per cent of the spectrum that is wasted, we can compensate for everything on all of the wavelengths,’ said Ellis.
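In a typical four-wave-mixing implementation (the details vary between devices), the strong pump at frequency ω_p mixes with a signal at ω_s inside the nonlinear waveguide to generate a phase-conjugated copy – the ‘idler’ – mirrored about the pump frequency:

```latex
\omega_i = 2\omega_p - \omega_s, \qquad E_i \propto E_p^{2} E_s^{*}
```

It is the complex conjugation of the field, E_s*, that makes the distortions accumulated in the second half of the link undo those from the first.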

Having shown that this was theoretically possible, Ellis and his colleagues from the Unloc project went on to demonstrate an optical phase conjugator device working in the laboratory. The study, published in the March 2016 issue of the Journal of Lightwave Technology, showed for the first time that an optical phase conjugator placed in a 2,000km fibre link could successfully deal with nonlinearities in an optical system with data rates ranging from 800Gb/s to 4Tb/s. This achievement has since been replicated by researchers from NTT in Japan, who were able to send data rates of up to 16Tb/s over a distance of 3,840km using an improved version of the optical phase conjugator device. Their work was reported in the July 2016 issue of Optics Express.

To apply this technique in the real world would require changes to the way line systems are designed, however. In practice the ‘mirrors’ would be dropped into the link at intervals, perhaps at the locations of today’s amplifier stations. The technique works better with several mirrors spaced over shorter spans than with a single mirror at the centre of the link, because the signal doesn’t disperse as much on each section. Raman amplification would also be preferable to erbium-doped fibre amplifiers, which break the symmetry of the transmission link that is vital for the optical phase conjugator to operate effectively.

Unlike digital back-propagation, which requires substantial processing power, leading to significant – and possibly prohibitive – increases in energy consumption, a single optical phase conjugation device can handle large bandwidths and multiple channels with ease, according to Ellis. It would provide an order of magnitude increase in the signal-to-noise ratio rather than factors of two or four, and it can be combined with digital signal processing-based methods, he claims.

Independent spatial paths

Only once scientists and engineers have used up all the tricks in the optical toolbox to increase the capacity of standard optical fibres will serious attention turn to the fibres themselves. ‘Operators will use the existing fibre as long as they can,’ said Peter Winzer, head of optical transmission systems and networks research at Nokia Bell Labs.

Fibres have already been developed that are capable of withstanding higher optical launch powers before nonlinearities kick in. These so-called ‘large effective area fibres’ are most likely to be used for new submarine cables, where the higher cost of the specialty fibres is offset by the lower overall system cost in the harsh deep sea environment. The best candidates for more substantial increases in capacity, however, are dramatically different fibre types that integrate parallel transmission paths via a technique called space division multiplexing (SDM).

SDM doesn’t do anything that would invalidate the Shannon limit. ‘We are obviously still bound by the Shannon limit,’ said Winzer. ‘But by introducing multiple spatial paths, the Shannon limit of a single path no longer applies. Now you have n times the Shannon limit. So if you have three parallel paths, you have three times the Shannon limit in terms of capacity. That’s how space division multiplexing can overcome the Shannon limit on a single strand of fibre.’
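Put as a sum, the aggregate limit simply scales with the number of parallel paths; for n identical spatial channels, each of bandwidth B and signal-to-noise ratio S/N:

```latex
C_{\text{total}} = \sum_{k=1}^{n} B \log_2\!\left(1 + \frac{S}{N}\right) = n\, C_{\text{single}}
```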

Spatial multiplexing is already happening today, although at shorter reaches. ‘Within the data centre we already have parallel singlemode for short reach. You have PSM4 [transceivers], that’s a spatial multiplexed solution,’ Winzer pointed out. In metro and long-haul networks, spatial multiplexing will start out as highly integrated systems deployed over independent singlemode fibre strands that are already in the ground, he believes. Integrating the components will generate important cost-per-bit savings, even if the fibres stay the same.

To exploit SDM to the greatest extent, however, new fibres will be needed. These fibres can contain multiple light-guiding cores running along their length. An alternative is to make a single larger core that guides light in several distinct patterns, called modes. These two designs can be combined by placing several cores closer together so that they become coupled, which gives the fibre designers more control over the modal dispersion properties of the fibre.

Light beams in the different modes or coupled cores will interact as they travel down the fibre, but they can be isolated from each other at the receiver by applying multiple-input, multiple-output (MIMO) processing – a technique that is widely used in radio systems, but is at the very early stages of development in optical systems. In November last year, Winzer’s team at Nokia Bell Labs made a significant step towards commercialising the technology by decoding coherent optical signals in real time after they had travelled over a coupled-core three-core fibre, using a field-programmable gate array (FPGA) to perform 6x6 MIMO processing.
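As a toy illustration of what the MIMO step achieves – real receivers use adaptive time-domain equalisers rather than this idealised, frequency-flat sketch, and every name and value below is an assumption – the mixed streams from a coupled three-core fibre can be separated by inverting an estimate of the channel matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
n_cores, n_symbols = 3, 1000

# QPSK symbols launched into each of the three coupled cores
tx = (rng.choice([-1.0, 1.0], (n_cores, n_symbols))
      + 1j * rng.choice([-1.0, 1.0], (n_cores, n_symbols))) / np.sqrt(2)

# Random complex mixing matrix standing in for the coupled-core channel,
# plus a little additive receiver noise
H = rng.standard_normal((n_cores, n_cores)) + 1j * rng.standard_normal((n_cores, n_cores))
rx = H @ tx + 0.01 * (rng.standard_normal(tx.shape) + 1j * rng.standard_normal(tx.shape))

# 3x3 zero-forcing MIMO equaliser: apply the pseudo-inverse of the channel
tx_hat = np.linalg.pinv(H) @ rx

print(np.mean(np.abs(tx - tx_hat) ** 2))  # small residual error: the streams are recovered
```

A real coupled-core link also mixes the two polarisations of each core, which is why the Bell Labs demonstration used a 6x6 rather than a 3x3 equaliser.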

Researchers are also investigating hollow-core fibres, in which the microstructure of the fibre itself guides the light, rather than total internal reflection. A hollow core – or several cores – is surrounded by a micro-structured cladding region comprising numerous tiny air holes. Since most of the light travels inside the hollow core – in air – the nonlinear effects are virtually eliminated. In addition, light travels faster in air than it does in silica, which means such fibres could also reduce latency by about 30 per cent compared to standard solid-silica optical fibres.
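The latency figure follows from the group index: taking roughly 1.47 for solid-silica fibre and very nearly 1 for air, the propagation delay over the same route length falls by about

```latex
1 - \frac{n_{\text{air}}}{n_{\text{silica}}} \approx 1 - \frac{1.00}{1.47} \approx 0.32
```

in line with the 30 per cent quoted above.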

Unfortunately, hollow-core fibre is fiendishly difficult to make because it contains thin struts, sometimes just a few nanometres wide, between the air holes. The longest uniform length of hollow-core fibre produced so far, by scientists at the Optoelectronics Research Centre at Southampton University in the UK, measured just 11km. Furthermore, losses are still relatively high due to surface scattering at the air interface, which also pushes the lowest loss window out to longer wavelengths at around 2000nm.

For now, higher capacities over new fibre types seem a long way off. To be successfully introduced, they will require entirely new ecosystems to support them, from technologies and components to installation tools and test equipment. ‘This development effort could take a decade, which is why researchers are getting started now,’ said Winzer. It is hard to imagine that standard singlemode fibre, the workhorse of optical networks for the past several decades, could easily be replaced. But it’s good to know that when the time comes, scientists and engineers should be ready with solutions that can postpone the capacity crunch for a bit longer.

Further reading:

Philosophical Transactions of the Royal Society: Discussion meeting issue ‘Communication networks beyond the capacity crunch’ organised and edited by Andrew Ellis, David Payne and David Saad 



In some industries, such as energy and electricity, the term ‘capacity crunch’ implies the impending exhaustion of available resources. This exact situation is not mirrored in the optical networking industry, where the problem can be sidestepped – albeit without making any improvements to the overall cost per bit – by simply laying new cables.

However, there could turn out to be a natural limit on the growth in the amount of data carried by the internet – the energy it consumes. Already information technologies consume around two per cent of the world’s energy resources, and that fraction could expand to as much as 20 per cent by 2030 as energy used by the internet outpaces the growth in global energy consumption.

Indeed, as industry finds new ways to enhance the energy efficiency of the internet, the increased efficiency boosts demand, leading to more resources being used – a phenomenon known as the Jevons paradox. Over the past decade – the period corresponding to widespread adoption of broadband technologies in industrialised societies – the amount of data handled by internet exchanges has grown exponentially.

Telecom regulator Ofcom reports that average household broadband data consumption in the UK has increased by a factor of five in just a few years, from 17GB in 2011 to 82GB in 2015. Annual global IP traffic will pass the zettabyte threshold by the end of 2016 (a zettabyte is the seventh power of 1,000, or 10²¹ bytes), and is forecast to keep growing at a compound annual growth rate of 22 per cent over the next five years, according to Cisco’s Visual Networking Index.

Although new applications may emerge that encourage consumers to enjoy even more bandwidth (such as virtual reality experiences that require 4K UHD resolution for each eye), even this type of growth could have natural limits. There is a finite, albeit growing, number of people on the planet and a finite number of hours in the day for them to interact with technology.

However, all bets are off when the Internet of Things takes hold, according to a discussion paper published this summer by researchers from Lancaster University in the UK. There are already more connected objects – such as smart meters, wearable devices, sensors for automation and tracking devices for logistics management – than there are people on the planet. Estimates vary widely, but the number of connected ‘things’ could reach more than 26 billion by 2020 according to Cisco.

That’s a problem, says Dr Mike Hazas, senior lecturer at Lancaster University’s School of Computing and Communications. ‘The nature of internet use is changing and forms of growth, such as the Internet of Things, are more disconnected from human activity and time-use. Communication with these devices occurs without observation, interaction and potentially without limit,’ he pointed out.

Commenting on the energy used by the internet, the researchers wrote in their paper: ‘It is intriguing to examine the claim that the energy used by the internet will continue to grow until the availability of energy itself becomes problematic. That is, unless some other kind of checks or limits to growth are imposed first. This is a rather radical, fascinating and, in so far as it is plausible, troubling claim.’

Should the available energy start to limit the use of communications networks, the situation would be transformed from the technological and business challenge that the industry faces today, to an environmental and ethical problem. Should society head off the capacity crunch by restricting access to resources, in other words, to bandwidth? Or will the technological difficulties in increasing capacity in a cost-effective way help to control demand by pushing up prices anyway? At this point however, nobody knows.

Mike Hazas, Janine Morley, Oliver Bates, Adrian Friday: ‘Are there limits to growth in data traffic?: on time use, data generation and speed’, in Proceedings of the 2nd Workshop on Computing within Limits (LIMITS ‘16), Irvine, CA, USA, 2016. ACM http://limits2016.org/papers/a14-hazas.pdf
