A group of researchers from Universitat Politècnica de Catalunya in Barcelona, alongside Huawei, have re-tooled an artificial intelligence technique used for chess and self-driving cars to improve efficiency in optical transport networks (OTNs).
OTNs need rules for dividing the large volumes of traffic they manage, and producing rules capable of making those split-second decisions can become very complex. The new approach to the problem combines two machine learning techniques. The first, reinforcement learning, creates a virtual ‘agent’ that learns through trial and error the particulars of a system in order to optimise how resources are managed. The second, deep learning, uses neural networks to draw more abstract conclusions from each round of trial and error.
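For readers curious how such an agent fits together, the sketch below is a minimal, purely illustrative Python example: a toy environment routes each incoming traffic demand onto one of a few candidate paths, and a small neural network scores the options. The environment, state encoding, reward and all names (ToyOTNEnv, q_network) are hypothetical assumptions for illustration, not the researchers' actual system.

```python
# Purely illustrative sketch: a toy OTN-style environment plus a tiny
# Q-network. ToyOTNEnv, the state encoding and the reward are hypothetical
# assumptions, not the researchers' actual system.
import numpy as np

rng = np.random.default_rng(0)

class ToyOTNEnv:
    """Route each incoming traffic demand onto one of n_paths candidate
    paths; the reward favours choices that still have spare capacity."""
    def __init__(self, n_paths=4, capacity=10.0):
        self.n_paths = n_paths
        self.capacity = capacity

    def reset(self):
        self.load = np.zeros(self.n_paths)    # current load on each path
        self.demand = rng.uniform(1.0, 3.0)   # size of the pending demand
        return self._state()

    def _state(self):
        # State: normalised per-path loads plus the pending demand size.
        return np.concatenate([self.load / self.capacity, [self.demand / 3.0]])

    def step(self, action):
        self.load[action] += self.demand
        # +1 if the chosen path still fits the demand, -1 if it overflows.
        reward = 1.0 if self.load[action] <= self.capacity else -1.0
        done = bool(self.load.min() > self.capacity)  # all paths saturated
        self.demand = rng.uniform(1.0, 3.0)
        return self._state(), reward, done

def q_network(params, state):
    """Two-layer neural network mapping a state to one Q-value per path."""
    w1, b1, w2, b2 = params
    hidden = np.tanh(state @ w1 + b1)
    return hidden @ w2 + b2
```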
So far, the most advanced deep reinforcement learning algorithms have been able to optimise some resource allocation in OTNs, but they become stuck when they run into novel scenarios. The researchers worked to overcome this by varying how data is presented to the agent. After training on the OTNs over 5,000 rounds of simulations, the team found that the deep reinforcement learning agent directed traffic with 30 per cent greater efficiency than the current state-of-the-art algorithm.
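Continuing the same illustrative sketch, the ‘5,000 rounds of simulations’ would correspond to a training loop like the one below, which reuses ToyOTNEnv and q_network from the previous snippet. The epsilon-greedy exploration, the semi-gradient Q-learning update and every hyperparameter are assumptions chosen for clarity, not the team's method.

```python
# Continuation of the sketch above: a training loop in the spirit of
# "5,000 rounds of simulations". Epsilon-greedy exploration and a manual
# semi-gradient Q-learning update; every hyperparameter is an assumption.
n_state, n_hidden, n_actions = 5, 16, 4   # matches ToyOTNEnv(n_paths=4)
params = [rng.normal(0, 0.1, (n_state, n_hidden)), np.zeros(n_hidden),
          rng.normal(0, 0.1, (n_hidden, n_actions)), np.zeros(n_actions)]
gamma, lr, epsilon = 0.95, 0.01, 0.1

env = ToyOTNEnv()
for episode in range(5000):
    state, done = env.reset(), False
    while not done:
        # Explore occasionally; otherwise take the highest-scoring path.
        q_values = q_network(params, state)
        action = int(rng.integers(n_actions)) if rng.random() < epsilon \
            else int(q_values.argmax())
        next_state, reward, done = env.step(action)

        # Temporal-difference target and error for the chosen action.
        target = reward if done else \
            reward + gamma * q_network(params, next_state).max()
        w1, b1, w2, b2 = params
        hidden = np.tanh(state @ w1 + b1)
        delta = float((hidden @ w2 + b2)[action] - target)

        # Manual backpropagation through the two-layer network.
        grad_hidden = delta * w2[:, action] * (1.0 - hidden ** 2)
        w2[:, action] -= lr * delta * hidden
        b2[action] -= lr * delta
        w1 -= lr * np.outer(state, grad_hidden)
        b1 -= lr * grad_hidden
        state = next_state
```

In a toy like this, the lever the article describes, varying how data is presented to the agent, would correspond to changing the _state() encoding while leaving the learning loop untouched.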
One thing that surprised researcher Albert Cabellos-Aparicio and his team was how easily the new approach was able to learn about the networks after starting out with a blank slate. ‘This means that without prior knowledge, a deep reinforcement learning agent can learn how to optimise a network autonomously,’ he said. ‘This results in optimisation strategies that outperform expert algorithms.’
With the enormous scale some optical transport networks have already reached, Cabellos-Aparicio believes that even small advances in efficiency can yield large returns in reduced latency and operational costs. The group plans to apply these strategies in combination with graph networks.
Cabellos-Aparicio and the team will share their findings at the upcoming OFC conference on Monday, 4 March.