Telcos voice concerns about taking core networks to the cloud

Jonathan Brandon

November 4, 2014

Under threat from companies not traditionally viewed as competitors, telcos are racing to both offer cloud services to end-user customers and tap into the cloud to run core network functions in a bid to make their value propositions more competitive and infrastructure more dynamic. But cloud experts from Orange, Swisscom and AT&T have each voiced concerns around just how capable the sector may be at managing such a bold transformation with existing tools.

Speaking at the OpenStack Summit in Paris on Tuesday, experts from major European and US telecoms firms, all of which are experimenting with using OpenStack to stand up core network services or cloud services for end-users (or both), agreed that existing telecoms industry regulation can pose challenges when introducing NFV and cloud architectures to underpin them.

Toby Ford, AVP of cloud technology, strategy and planning at AT&T, which has made some progress in recent months with its NFV experiments and in rolling out its cloud stack, said the extent to which telcos in the US can virtualise networks is limited in part by stringent regulations.

“The actual facilities we use have a lot of legal rules we have to apply, so we’re doing a lot of work to work around those as we go forward [with NFV],” he said.

This issue, particularly with respect to security, isn’t altogether dissimilar from what other highly regulated verticals face. The key question is how a company demonstrates that a virtual appliance is as reliable as physical hardware in terms of resilience, uptime, security and so forth.
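To put numbers on that question: “as reliable as physical hardware” usually comes down to an availability target, with carrier-grade kit often held to “five nines”. The short sketch below simply translates a few common availability benchmarks into allowed downtime per year; the percentages are illustrative industry shorthand, not figures cited by the panel.

```python
# Illustrative only: annual downtime permitted by common availability targets.
# These benchmarks are industry shorthand, not figures cited at the Summit.

MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

def allowed_downtime_minutes(availability):
    """Minutes of downtime per year permitted at a given availability level."""
    return MINUTES_PER_YEAR * (1 - availability)

for label, target in [("three nines (99.9%)", 0.999),
                      ("four nines (99.99%)", 0.9999),
                      ("five nines (99.999%)", 0.99999)]:
    print(f"{label}: ~{allowed_downtime_minutes(target):.1f} minutes/year")
```

The jump from 99.9 to 99.999 per cent is the difference between roughly nine hours and five minutes of outage a year, which is why proving parity with dedicated hardware is not a paperwork exercise.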

Markus Brunner, head of standardisation in the strategy and innovation department at Swisscom, said guaranteeing quality of service, which in some contexts is legislated, is also a key challenge.

“It’s really about guarantees. We have a set of services which require guarantees – legally require certain guarantees,” he said.

He said virtualised network functions, of which there are precious few on the market today, need to come with performance guarantees, which can be challenging given how telcos are used to managing their networks themselves or tapping vendors to manage them on their behalf.

“If we buy VNFs from a third party and put it on our cloud system, who is responsible if something doesn’t go well from a performance or security perspective?”

“The other issue is security. Telcos seem to be fairly secure at the moment, at least to a certain degree… and that’s another thing that is quite different [in a virtualised environment],” Brunner added.
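Brunner’s point about guarantees is, in practice, a measurement and attribution problem: if a purchased VNF ships with a contracted performance bound, the operator needs a repeatable way to verify it on its own cloud. The sketch below is a hypothetical illustration of such a check; the latency figure and the `collect_latency_samples` placeholder are invented for the example and do not describe any vendor’s actual interface.

```python
import statistics

# Hypothetical contracted bound, invented for illustration only.
GUARANTEED_P95_LATENCY_MS = 5.0

def collect_latency_samples():
    """Placeholder for probing the VNF's data plane, e.g. with timed test packets."""
    raise NotImplementedError

def guarantee_met(latency_samples_ms):
    """True if the observed 95th-percentile latency stays within the contracted bound."""
    p95 = statistics.quantiles(latency_samples_ms, n=20)[18]  # 95th percentile
    return p95 <= GUARANTEED_P95_LATENCY_MS
```

Even with such a check in place, Brunner’s question of who is accountable when it fails, the VNF vendor or the cloud operator, remains a contractual rather than a technical one.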

Some also voiced doubts about the performance of virtualised network functions, and how well they scale.

Xiaolong Kong, R&D unit manager at Orange Labs, who helps coordinate the telco’s cloud research, said that virtualising telecoms applications today often leads to performance issues and scalability limitations.

He said NFV requires that the platform underneath be as “carrier-grade” as the applications running on top.

Ford added that OpenStack specifically, the underlying cloud platform supporting AT&T’s NFV experiments, still leaves much to be desired.

“Since we have a lot of sites with OpenStack, we really need help so that we can upgrade, and get rolling upgrades and continuous integration to happen for these sites. We anticipate deploying OpenStack in a large number of locations – not exactly a huge setup per location, but a lot of them, and that really requires that we get the lifecycle management down solid to not only deploy quickly but maintain it over time.”

Challenges around lifecycle management and platform upgrades were recurring themes among users at the Summit in Paris, and are considered key limitations of vanilla OpenStack for the moment. But the issue seems particularly acute for telcos, which don’t typically deploy core infrastructure platforms anywhere near as fast as new OpenStack releases, each bringing significant code changes, arrive every six months.
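The pattern Ford describes, upgrading one site at a time and verifying it before moving on so a bad release never touches the whole estate, can be sketched in a few lines. The helpers below (`upgrade_site`, `site_is_healthy`) are hypothetical placeholders for an operator’s own deployment and monitoring tooling, not OpenStack APIs.

```python
# Minimal sketch of a rolling upgrade across many OpenStack sites.
# upgrade_site() and site_is_healthy() stand in for an operator's real
# deployment and monitoring tooling; they are not OpenStack APIs.

SITES = ["site-001", "site-002", "site-003"]  # in reality, a large number of locations

def upgrade_site(site):
    """Apply the new OpenStack release to a single site (placeholder)."""
    raise NotImplementedError

def site_is_healthy(site):
    """Verify the site's control plane and workloads after the upgrade (placeholder)."""
    raise NotImplementedError

def rolling_upgrade(sites):
    for site in sites:
        upgrade_site(site)
        if not site_is_healthy(site):
            # Halt so the failure stays contained to one location.
            raise RuntimeError(f"{site} failed its post-upgrade health check; halting rollout")
        print(f"{site} upgraded and healthy")
```

The hard part, as the panel suggested, is not the loop itself but making each site’s upgrade fast and repeatable on a six-month release cadence.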

About the Author

Jonathan Brandon

Jonathan Brandon is editor of Business Cloud News where he covers anything and everything cloud. Follow him on Twitter at @jonathanbrandon.
