NTT invents distributed machine learning for the edge

Japanese heavyweight NTT has come up with a way to carry out coordinated machine learning on multiple edge servers.

It is similar to a blockchain or an artificial neural network, in that it is essentially a consensus algorithm where a group of distributed servers – each one crunching data independently – share what they have ‘learned’ with one another to produce a single model.

In practice, that means NTT's machine-learning application running on server A could be trained on one set of data – say, images – while server B trains on another set, yet both arrive at a single conclusion about how to interpret that data.

The algorithm is asynchronous, so it doesn’t rely on all the servers running the machine learning module at the same time, or being in constant contact with one another. This could have some interesting uses for telcos.
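NTT has not published implementation details, but the asynchronous consensus idea described above can be sketched with gossip-style parameter averaging, one common way distributed servers converge on a shared model without a central coordinator or synchronized rounds. The `EdgeServer` class and single-number "model" below are illustrative assumptions, not NTT's method:

```python
import random

class EdgeServer:
    """Hypothetical edge server holding model parameters (here, one number)."""

    def __init__(self, name, weight):
        self.name = name
        self.weight = weight  # stands in for a full set of model parameters

    def gossip(self, peer):
        # Pairwise averaging: only two servers need to be in contact at a
        # time, so the fleet never has to run in lockstep.
        avg = (self.weight + peer.weight) / 2
        self.weight = avg
        peer.weight = avg

random.seed(0)
# Three servers that have each "learned" different parameters from local data.
servers = [EdgeServer(f"server-{i}", w) for i, w in enumerate([0.0, 4.0, 8.0])]

# Repeated random pairwise exchanges; no server ever sees all the others.
for _ in range(50):
    a, b = random.sample(servers, 2)
    a.gossip(b)

# Pairwise averaging preserves the fleet-wide mean, so all servers drift
# toward the same model (here, the mean of the starting weights, 4.0).
print([round(s.weight, 3) for s in servers])
```

In a real system each server would interleave local training steps on its own data with these exchanges; the sketch isolates only the consensus step.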

“In recent machine learning, especially deep learning, data are aggregated in a single place and a model is trained in the single place,” explained NTT, in a statement. “However, in the IoT era, where everything is connected to networks, aggregating vast amounts of data on the cloud is complicated.”

In addition, “more and more people are demanding that data be held on a local server/device due to privacy issues. Legal regulations have also been enacted to guarantee data privacy, including the EU’s General Data Protection Regulation (GDPR),” NTT said.

There is also, of course, the benefit of lower latency that comes with deploying a machine learning module at the edge of the network, closer to the end user. NTT said it will continue to research what it has dubbed 'edge-consensus learning' with the aim of making it commercially available.
