Sharpening the edges

Global internet traffic will grow by almost 50 per cent each year between now and 2015, according to Informa Telecoms & Media, an appetite fed by online storage, peer-to-peer traffic and video consumption in the mobile space. The trick now is keeping the data deluge away from the core.

James Middleton

April 7, 2011

As mobile operators move to address shortcomings in their radio access networks, which are being relentlessly battered by the demands of smartphones and tablets, network congestion is moving ever closer to the core, where it becomes increasingly expensive to manage. With more rich applications than ever before, there is a shift taking place in the type of network traffic seen, causing operators to rethink their backhaul needs as they move to an all IP architecture and next generation infrastructure. According to the industry pundits spoken to for this feature, operators worldwide are rethinking their network topology, especially with regard to the design and implementation of the last mile.

For many service providers addressing this issue, the only way to ensure quality of experience is to densify the network by making more cell sites available. As anyone who attended February’s Mobile World Congress can attest, we are at the dawn of the age of the heterogeneous network, or ‘hetnet’, in which meeting future capacity demands depends on a clearly defined small cell strategy.

telecoms.com looked at femtocell technology recently and came to the conclusion that small cells are getting bigger and vice versa—a phenomenon which raises some operational challenges. As Ben McCahill, director of mobile strategy at Tellabs, puts it, the emergence of small cells and the shrinking size of cell sites mean that operators are managing tens of thousands of node Bs on individual networks, a task which must be carried out regardless of vendor or technology type. “In fact, operators are now so focused on increasing network density we are moving to a model where our customers [the operators] say, ‘I want a base station delivered here and up and running in four hours or I’m not paying.’

“They want a fast, scalable and efficient service. They just need to increase density and don’t want excuses about technology type,” he says. “Every time you roll a truck it costs money, so you’ve got to get it right on time, first time, every time,” he adds.

With network operators in a transition phase, hybrid networks of 2G, 3G and even 4G infrastructure, connected by different types of transport and additional data backhaul, are the norm. It’s an operational nightmare, which is why IP is heralded as the saving grace and operators are moving to all IP networks faster than expected. But until the last 2G node is removed from the network—some time in the distant future—there will still be a complex mix of TDM, ATM, Ethernet and IP. In the meantime, prices for leased lines and Ethernet continue to fall, so the economics continue to improve; yet the amount of data going through the network also continues to increase, as does the level of resilience required.

The consensus is that the best strategy is to deal with the tidal wave of data moving towards the core before it even gets there, by keeping it as close to the network edge as possible.

Richard Lord, CTO of Altobridge, explains: “Typically, data from users’ handsets is encapsulated in protocols then encrypted and sent over the backhaul. So we take part of the core network functionality and push it out to the cell site. The encryption terminates at the remote site, then we strip off all the cellular protocols to expose the raw IP data, so now we can start implementing optimisation or even caching that same data in remote locations.” For Altobridge, which plays exclusively in emerging markets, this technique works well as a method of keeping operators’ costs down, as users in remote communities are often looking at the same content—perhaps an online public noticeboard—and caching this cuts down on transport costs.
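Altobridge does not publish its implementation, but the principle Lord describes (terminate the tunnel at the cell site, expose raw IP, then serve repeat requests from a local store) can be pictured as a simple content cache. Below is a minimal sketch in Python; the class and method names are hypothetical, for illustration only:

```python
import hashlib
import time

class EdgeCache:
    """Toy cell-site content cache: once the backhaul tunnel is
    terminated and raw IP/HTTP is exposed, repeated requests for the
    same object are served locally instead of crossing the expensive
    satellite or leased-line backhaul."""

    def __init__(self, ttl_seconds=300):
        self.store = {}          # key -> (payload, fetched_at)
        self.ttl = ttl_seconds   # how long a cached object stays fresh

    def _key(self, url):
        return hashlib.sha256(url.encode()).hexdigest()

    def get(self, url, fetch_from_core):
        """Return content for url, using the backhaul only on a miss."""
        key = self._key(url)
        entry = self.store.get(key)
        if entry and time.time() - entry[1] < self.ttl:
            return entry[0]                      # hit: no backhaul traffic
        payload = fetch_from_core(url)           # miss: one trip over backhaul
        self.store[key] = (payload, time.time())
        return payload
```

For a community noticeboard viewed by hundreds of subscribers, only the first request in each TTL window crosses the backhaul; the rest are answered at the cell site.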

Tellabs’ McCahill talks about the shift in operator mindset as a slow transition from network engineering to applications engineering that has become a common theme across the industry.

“Everyone’s talking about quality of experience for the user, not just in terms of availability but also response time for individual applications, and context awareness of applications right out into the RAN,” he says. “You don’t want to bring the data all the way into the core then realise you should have performed some action on it further out in the access network.”

David Noguer Bau, Juniper Networks’ head of carrier Ethernet, has noticed a similar theme in customer requests, which he puts down to a changing focus from technical issues to business issues. “It comes down to the core network and application complexity,” says Noguer Bau. “Networks were designed so that all the traffic went to the data centre. But that introduces costs, and very little traffic actually needs to go to the data centre. So we sit next to the Mobile Packet Gateway, where the traffic is aggregated and turned into IP, and we look at that traffic and decide whether it can go straight to the internet or if it has to go to the data centre. We can also analyse the data to see if there’s a way to optimise it there and then.”

One by-product of this densification is the creation of more aggregation layers in the network, which will need to be fed by backhaul capacity. “It comes down to a question of how many aggregation layers you have,” says McCahill from Tellabs. “You could have one, two or maybe even three layers. But it all depends on the pricing of leased lines in a particular market.” And this is where the backhaul players earn their bread and butter.
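The decision Noguer Bau describes, taken at the point where traffic first becomes visible as IP, is essentially a per-flow policy check. A toy version in Python; the service categories and return values are invented for illustration, not Juniper’s actual logic:

```python
from dataclasses import dataclass

@dataclass
class Flow:
    service: str   # e.g. "web", "video", "operator_service"

def route_flow(flow: Flow) -> str:
    """Decide, next to the packet gateway, whether a flow can break
    out locally to the internet or must be hauled on to the operator's
    data centre (and whether to optimise it on the spot)."""
    if flow.service == "operator_service":
        return "data_centre"                        # billing, IMS, walled garden
    if flow.service == "video":
        return "local_breakout_after_optimisation"  # optimise it there and then
    return "local_breakout"                         # plain internet traffic

print(route_flow(Flow("web")))   # -> local_breakout: never touches the data centre
```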

According to McCahill, in the US it’s more cost effective to backhaul node Bs all the way back to the data centres. Yet in other markets it’s better to get off high-cost leased lines as soon as possible, by terminating them at an aggregation point and using something far cheaper, like Ethernet, to take the data back into the core network. “From an economics point of view you terminate the expensive last mile as soon as possible. And from an operational point of view you can end up with a far more resilient network if you use a mesh topology, then if there is a fibre cut you can route around the problem and therefore your uptime is improved,” says McCahill.

It’s not just the growing number of small cells that requires ever more bridges to the core. Packet data today tends to be uncorrelated and bursty, which makes it very difficult to dimension a single link for it: traffic requirements are all over the place. The consensus, then, is for a dynamic rather than a dedicated system, to avoid ending up with lots of whitespace, which is wasted resource.
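The case for a dynamic system is plain statistical multiplexing: bursty sources rarely peak at the same moment, so a shared pipe can be dimensioned well below the sum of the per-cell peaks. A back-of-envelope simulation makes the point (all traffic figures are invented for illustration):

```python
import random

random.seed(1)
CELLS, SAMPLES = 20, 10_000
PEAK_MBPS, DUTY = 100, 0.15     # each cell bursts to 100Mbps 15% of the time

def cell_demand():
    return PEAK_MBPS if random.random() < DUTY else 5   # 5Mbps background

# Dedicated links: each link must carry its own cell's peak.
dedicated_total = CELLS * PEAK_MBPS

# Shared link: dimension for the 99.9th percentile of aggregate demand.
aggregates = sorted(sum(cell_demand() for _ in range(CELLS))
                    for _ in range(SAMPLES))
shared = aggregates[int(0.999 * SAMPLES)]

print(f"dedicated: {dedicated_total}Mbps, shared: {shared}Mbps")
# The shared pipe typically comes out well under half the dedicated total.
```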

Carrier grade Ethernet is rapidly growing in popularity. Yes, networks are moving to all IP architectures, but that doesn’t mean you have to build an entire network using IP routers. Indeed, you only need them in certain places. Out at the edge the most cost-effective and simple transport is Ethernet. In the eyes of Mervyn Kelly, EMEA marketing director at Ciena, it’s the unsung hero.

“Use IP at the centre of the network but don’t take it to the edge, as it’s too expensive,” he says. “You can build a network and deliver high quality service with Ethernet as a simpler way of building mobile backhaul. It’s cheap and reliable and can be an enabler for the rest of the network. Base station deployments are large and complicated, so you want the backhaul to be as simple as possible. Ethernet can be installed by an unskilled installer and is self-configuring, which is of the utmost importance if you’re rolling out thousands of sites.”

Make no mistake, the prefix ‘carrier grade’ is just as important. As Kelly says: “This is where we’ve added carrier grade attributes to Ethernet—all sorts of features to make it service-centric and manageable. We’re moving from SDH, which was deployed many years ago, to packet optical and WDM, as well as from 10Gbps to 40Gbps and 100Gbps, all of which is a mostly carrier Ethernet-based transition.”
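One concrete example of those carrier grade attributes is connectivity fault management (IEEE 802.1ag), in which endpoints exchange periodic continuity check messages (CCMs) and declare a peer lost after roughly 3.5 missed intervals. A simplified sketch of that detection logic, not a full implementation:

```python
import time

CCM_INTERVAL = 1.0      # seconds between continuity check messages
LOSS_FACTOR = 3.5       # 802.1ag declares a peer down after ~3.5 intervals

class MepMonitor:
    """Simplified 802.1ag-style continuity monitor for one remote
    maintenance endpoint (MEP) on an Ethernet backhaul link."""

    def __init__(self):
        self.last_ccm = time.monotonic()

    def on_ccm_received(self):
        self.last_ccm = time.monotonic()   # heartbeat from the remote MEP

    def remote_mep_up(self):
        age = time.monotonic() - self.last_ccm
        return age < LOSS_FACTOR * CCM_INTERVAL

mon = MepMonitor()
print(mon.remote_mep_up())   # True while CCMs keep arriving on schedule
```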

In terms of transport, a few years ago the desire was to connect all cell sites with fibre. But demand has outstripped the time available to make such upgrades and, while in a developed country it may still be more cost effective to get traffic onto fibre as soon as possible, given market economics an operator is not going to lay 1,000 miles of fibre to a small town in Congo. Which is why operators will use whatever they can get their hands on, as long as it is reliable, packet-optimised, scalable, and capable of very high bandwidth—perhaps up to 1Gbps per base station.

There are plenty of options, and carriers are likely to use a good mix of all of them, depending on the market. We take a look at some of the front-running technologies below, but as Ryanair has taught the world, it’s all about operational efficiency, not just capex and the latest technology. Bandwidth is increasing, yes, but it’s the operational issues that are worth focusing on as the single biggest element in the budget. There’s a lot of network and traffic engineering going on, but as we move towards application engineering and look at how we treat applications in the network, we’ve only just begun.

Backhaul: The options

Point to multipoint (P2MP) microwave

“Operators are currently using point to point architecture in the last mile. But there’s nothing dynamic about that. It’s very inefficient, plus every other part of the network is dynamic; the radio access and core can redistribute capacity as needed. But when they make that last connection they forget all that and go for a dedicated link. We argue that they need to change this thinking and go for a dynamic P2MP option.”

Lance Hiley, VP market strategy, CBNL
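The dynamic allocation Hiley argues for can be pictured as a scheduler dividing one P2MP sector’s capacity among its endpoints in proportion to instantaneous demand, rather than nailing up fixed per-site links. A toy allocator; the capacities and demands are illustrative only:

```python
def allocate(sector_capacity_mbps, demands):
    """Share one P2MP sector's capacity in proportion to instantaneous
    demand, never giving a site more than it asked for."""
    total = sum(demands.values())
    if total <= sector_capacity_mbps:
        return dict(demands)                 # light load: everyone gets their ask
    scale = sector_capacity_mbps / total     # heavy load: scale back pro rata
    return {site: round(d * scale, 1) for site, d in demands.items()}

# A busy site borrows the headroom a dedicated-link design would waste.
print(allocate(300, {"site_a": 40, "site_b": 220, "site_c": 120}))
```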

Wifi

“Fibre and copper are scarce, especially in urban environments. And you have to go a kilometre or two to get to the fibre point of presence. There’s talk of using LTE as backhaul, but the problem there once again is spectrum availability. With microwave, if you have it deployed on corners with line of sight it’s OK, but in the real world it’s probably in the middle of a block and lacking line of sight. Also, street furniture moves. Ten degrees is a narrow beam for wifi, but you’re looking at half a degree for microwave. This is why operators are talking to us about wifi. With engineering we can get wifi mesh up to 100Mbps. Then we can run it in junk licence spectrum like 3.5GHz or 1.3GHz or whatever the country has available.”

Steven Glapa, senior director of field marketing, Ruckus

Free space optics

“Traditional infrastructure such as TDM, E1 and microwave is not efficient enough as backhaul requirements go up. You can’t keep stacking more radios on top of each other as, eventually, the model doesn’t work anymore. Our solution is to increase capacity without using additional spectrum, by using the optical rather than the RF portion of the spectrum. There’s also no interference. Consider it a hybrid between fibre and microwave. Primary regions for deployment are urban areas, as this is where capacity is needed; it’s of limited range so not really suitable for rural areas. However, it’s cheaper in terms of cost per bit when you hit Gigabit requirements.”

Andrew Grieve, CEO, fSONA

40GHz microwave

“We leverage two frequency bands within 40.5-43.5GHz, which was auctioned in the UK in 2007 to MBNL, UK Broadband and MLL, and which is more favourable for a European climate. The places where we deploy are probably already fed by large capacity fibre. But fibre is expensive and traditional P2P microwave is expensive, so we use P2MP to light up multiple endpoints and provide enough scalable bandwidth—up to 10Gbps. The spectrum we use really appeals to the regulators, which know they have spectrum in this space but couldn’t do anything with it as they didn’t have any technology available. A licence for this spectrum is 100,000 times cheaper than a nationwide 3G licence.”

Shayan Sanyal, chief marketing officer, Bluwan

Satellite

“We focus on emerging markets, and in particular the rural areas not connected by any other technology. Satellite bandwidth is expensive, so we’re implementing data optimisation, compression and DPI right out at the edge. We take local guys who install satellite systems and train them up on our systems, with the benefit that they know the local customs. There’s a land grab for rural areas, and revenues there are enough to pay for the equipment pretty quickly. If you’re the first operator into a remote community, you capture all the high-ARPU spend first: the business guys, or those with some income. The ARPU in some of these remote sites is almost twice the national ARPU.”

Richard Lord, CTO, Altobridge

About the Author

James Middleton

James Middleton is managing editor of telecoms.com | Follow him @telecomsjames
