Regardless of the network, the consequences of a signaling storm can be devastating for service continuity and quality: the very "structure" that is supposed to support your service goes mad and attacks vital network functions, disabling nodes with huge volumes of messages and consuming resources such as bandwidth, spectrum, and CPU processing capacity. Troubleshooting these events is rarely trivial, and network recovery times tend to be long. For cellular networks, as for any other network that carries a service, this is an evident risk, and the industry has tended to present it in a very polarized way, normally splitting the problem between the RAN and the core, sometimes even declaring that an increase in signaling is good when it happens in the core and bad when it happens in the RAN. This simplification makes the problem easier to approach, but it is not accurate. Neither is it accurate to centralize the blame on just one cause: you have probably heard that the main reason for signaling storm events is the massive adoption of smartphones. This is largely true, but it leaves out several other causes that make the problem worse. So the intention of this blog entry is to widen the perspective on signaling storm events and to present them in a different way.
Not just two sides
Signaling storms can have many different causes, and cellular networks are exposed to the risk in different ways depending on a myriad of factors, such as the current state of technology deployment or future service-offering strategies. Remarkably, the current and future status of the network, both in terms of technology and of service, plays a key role in determining the network's level of exposure to the problem. Below I'll mention just a few of the many sides:
Level of smartphone penetration.
The predominant smartphone OS (even more important than the previous factor).
The deployment status of new-generation RAN technology and the pace at which the MNO implements it.
The application of advanced features in the core and RAN (or the lack of them).
The modifications users apply to their devices.
The level of popularity of certain apps.
Lack of optimization of the RAN.
Obsolescence of core equipment in terms of CPU power and buffer capacity.
The presence of malicious software and mobile viruses.
The level of implementation of quality models and differentiated service.
Early mesh implementations of Diameter failing to scale to meet business needs.
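Several of the factors above ultimately show up as the same symptom: an abnormal spike in the rate of signaling messages hitting a node. As a minimal illustration of what a first-line detection could look like, here is a hedged sketch of a sliding-window rate monitor. All names, window sizes, and thresholds here are illustrative assumptions, not part of any real vendor or MNO tooling:

```python
# Hypothetical sketch: flag a possible signaling storm when the number of
# signaling messages seen inside a sliding time window exceeds a threshold.
# Class name, window size, and limit are all illustrative assumptions.
from collections import deque

class SignalingRateMonitor:
    def __init__(self, window_seconds=10, max_messages_per_window=1000):
        self.window = window_seconds
        self.limit = max_messages_per_window
        self.events = deque()  # timestamps of observed signaling messages

    def observe(self, timestamp):
        """Record one signaling message; return True if the rate in the
        current window exceeds the configured limit."""
        self.events.append(timestamp)
        # Evict timestamps that have fallen out of the sliding window
        while self.events and self.events[0] <= timestamp - self.window:
            self.events.popleft()
        return len(self.events) > self.limit

# Toy usage: a tiny limit so the alarm trips quickly
monitor = SignalingRateMonitor(window_seconds=10, max_messages_per_window=5)
alarms = [monitor.observe(t) for t in [0, 1, 2, 3, 4, 5, 6]]
```

In the toy run above, the alarm starts firing once the sixth message arrives within the 10-second window. A real deployment would of course track rates per interface and per procedure type, but the principle, measure the rate and compare it against a known-good baseline, is the same starting point regardless of where in the RAN or core the storm originates.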
So, I’m presenting an Emergency Pocket Guide
Given the current state of declining ARPU for cellular operators, and the pressure to monetize the shift in mobile-phone usage, it is only natural to be critical about which preventions or measures must be taken to protect your network from the storm. As presented earlier, a two-sided approach is not wise, and the first step, as almost every emergency pocket guide advises, is to find out your position relative to the threat: your level of exposure to the problem. But make no mistake; the threat is real, and networks are inevitably becoming more exposed to it. From there to believing that one piece of equipment in the core or one feature in the RAN will solve all your problems, however, is a long road. So, in the spirit of the emergency pocket guides used to prepare a home against emergencies, I present my version to prepare your network against the threat of the signaling storm.