ASR9K ISM bundled configuration

budilasmono
Level 1

Hi all,

I have a problem in a customer network. The requirements are:

1. Serve private-IP subscribers with more than 40 Gbps of traffic.

2. I only have one ASR9010 router with 4 ISM modules.

3. Each ISM module handles 14 Gbps of NAT translation for my private subscribers.

What is the recommended configuration for this?

Can we bundle the service cgn across all 4 active ISM modules?

Can we create one big subscriber private IP range on ServiceApp1 and one big public pool on ServiceApp2 that will be served by ISM modules 1, 2, 3, and 4?

We don't want to configure a separate pool/ServiceApp per ISM module; can that be avoided?

Another question: if some of the subscribers behind ServiceApp1 already have public IPs, can we exclude those public subscriber IPs so they are not translated by the ISM/ServiceApp1?

Please help.

Thanks a lot,

Budi L

6 Replies

budilasmono
Level 1

Hi All,

We want to ask about an ISM scalability issue. The situation is:

1. Each ISM handles 14 Gbps of NAT translation.

2. We want to install 6 ISM modules to handle 80 Gbps of NAT traffic from subscribers.

3. We only have one big bundle interface on the ASR router toward the subscribers.

The diagram:

subscriber --- (gateway router) --- (ASR NAT router) --- internet

Each link carries 80 Gbps of traffic.

The gateway router sends all 0.0.0.0/0 traffic to the ASR NAT router.

The gateway router has a bundle-ether interface (8 x TenGigE links) to the ASR NAT router.

The gateway router has no way to sort/classify which customer IP goes to which interface toward the internet, because it only sends 0.0.0.0/0 to the ASR NAT router.

What is the solution, so that the ASR NAT router can:

1. Utilize all the ISM preferred-active modules for all subscribers.

2. Have only one big inside-VRF assigned to the bundle-ether (8 x TenGE links), with this one inside-VRF applied to all 6 ISM modules.

3. Use the same inside-VRF name for each of the service cgn instances assigned to each of the 6 ISM modules.

4. Or use a different inside-VRF name for each of the 6 ISM service cgn instances, where the different inside-VRFs share the same private IP range coming in on the bundle-ether but use different public map pools (the gateway router only sends 0.0.0.0/0 to the ASR NAT router and cannot steer a given subscriber pool to a specific interface using a route-map/set next-hop).

5. Or can the ISM modules be bundled into one service cgn, with the NAT processing spread across all 6 modules? Traffic would arrive from customers via the gateway's default route, without an ACL to steer each customer source pool to a specific interface/VRF tied to a specific ISM; instead the 6 ISMs would act as one big bundled NAT processor.

Please help.

Thanks,

Budi L

Nicolas Fevrier
Cisco Employee

Hi Budi,

First, you'll have to define serviceApp interfaces for inside and outside of each ISM card (2 x 6 in your case).

Second, you cannot configure a single map pool range for all cards; you'll have to split your range into 6 different sub-ranges and assign them to your cards.
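
To make that concrete, here is a rough sketch of what one card's configuration could look like. All names, ServiceApp numbers, addresses and the pool sub-range are made up for illustration, and the exact syntax should be checked against the CGv6 configuration guide for your XR release:

! hypothetical inside VRF for card 1 and a shared outside VRF
vrf insidevrf1
 address-family ipv4 unicast
!
vrf outsidevrf
 address-family ipv4 unicast
!
! inside / outside ServiceApp pair for ISM card 1
interface ServiceApp1
 vrf insidevrf1
 ipv4 address 10.255.1.1 255.255.255.252
 service cgn cgn1 service-type nat44
!
interface ServiceApp2
 vrf outsidevrf
 ipv4 address 10.255.101.1 255.255.255.252
 service cgn cgn1 service-type nat44
!
! one CGN instance pinned to the ISM in slot 0/1/CPU0, owning card 1's
! sub-range of the public pool (each of the 6 cards gets its own sub-range)
service cgn cgn1
 service-location preferred-active 0/1/CPU0
 service-type nat44 nat1
  inside-vrf insidevrf1
   map outside-vrf outsidevrf address-pool 100.64.1.0/24
!
! default route in the inside VRF so traffic steered there reaches the ISM
router static
 vrf insidevrf1
  address-family ipv4 unicast
   0.0.0.0/0 ServiceApp1

Card 2 would repeat the same pattern with insidevrf2, ServiceApp3/ServiceApp4, its own service-location and, say, 100.64.2.0/24, and so on for the remaining cards.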

----> You will be able to load-balance the inside-to-outside traffic with some tricks:

As you may know, there is an RFC recommendation stating that a given inside (source) address should always be paired with the same outside address (not "mandatory" but recommended).

Since the natural load-balancing of the router is based on various parameters including the destination address, it is quite likely that traffic from different flows of the same inside address will be sent to different ISM cards and consequently translated with addresses from different pools, which would violate the recommendation described above.

To avoid this situation (which wouldn't be dramatic, but some applications "may" be affected), we recommend making the assignment of traffic to the different cards predictable and based on the source address. To achieve this, you'll have to rely on access-list based forwarding (ABF) applied in the ingress direction of your bundle-ethernet interface.

The big job now consists of identifying the source addresses and splitting them into different blocks. Based on these blocks you will define different next-hop addresses.

Please note that this ABF approach can be extended and used for redundancy. For instance, you can define multiple next-hops (the IP addresses of your inside ServiceApp interfaces): if the first is not reachable, the traffic from that source is sent to the address of another ISM card.
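
As a purely illustrative sketch (source blocks, VRF names and addresses are made up and follow the example above; the next-hops are the inside ServiceApp addresses as Nicolas describes, and the vrf keyword in the next-hop assumes your release supports VRF-aware ABF), the policy on the subscriber-facing bundle could look roughly like this:

! steer each block of subscriber sources to the inside ServiceApp of one card,
! with a backup next-hop pointing to another card for redundancy
ipv4 access-list ABF-CGN-SPLIT
 10 permit ipv4 10.0.0.0 0.0.255.255 any nexthop1 vrf insidevrf1 ipv4 10.255.1.1 nexthop2 vrf insidevrf2 ipv4 10.255.2.1
 20 permit ipv4 10.1.0.0 0.0.255.255 any nexthop1 vrf insidevrf2 ipv4 10.255.2.1 nexthop2 vrf insidevrf3 ipv4 10.255.3.1
 ! ...one entry per source block, spread over the 6 cards...
 100 permit ipv4 any any
!
! apply it ingress on the subscriber-facing bundle
interface Bundle-Ether1
 ipv4 access-group ABF-CGN-SPLIT ingress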

For the outside-to-inside traffic, since you have to define a unique address pool for each ISM, it is easy to define static routes pointing to the proper card for a given pool.
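
Using the same made-up addressing as above, that return-path routing could be sketched as one static route per card's public sub-range, pointing to that card's outside ServiceApp:

! return traffic: route each card's public sub-range to that card's outside ServiceApp
router static
 vrf outsidevrf
  address-family ipv4 unicast
   100.64.1.0/24 ServiceApp2
   100.64.2.0/24 ServiceApp4
   ! ...one entry per card / pool sub-range...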

Hope this helps,

N.

Hi Nicolas,

So the bundle-ether can stay in the global VRF, and we use ABF to set next-hops into the 6 inside-VRFs toward the 6 ISMs.

I'm concerned about the processing load on the router. We only have the RSP440; which component performs the ABF, the ISM or the RSP440?

Can the router perform ABF for 6 ISMs while handling 80 Gbps of inbound traffic from subscribers?

Thanks,

Budi L

Hi Budi,

You are right: you use different inside-VRFs, and the ABF next-hops will point to the ServiceApp interfaces in these inside-VRFs.

ABF (like every feature applied on an interface) has the same impact as an ACL.

The performance impact of ABF is taken by the Typhoon NPU ASICs of the line card, not the RSP440 or the ISM board.

It's usually minimal and not a matter of concern.

Moreover, the impact is the same regardless of the number of entries in your ACL.

The only important limitation to consider with ABF is that it only works on IPv4 traffic and not MPLS, so if the traffic is labeled when received on the interface, packets are not matched (and in that case some other tricks may be needed).

HTH,

N.

Hi Nicolas,

Let's say there will be 6-12 ACL/ABF statements, one for each of the 6 inside-VRFs assigned to the ISMs.

Does that have a throughput impact on the MOD-80TR line card? Does processor resource usage increase significantly, or does traffic throughput decrease?

Thanks a lot,

Budi L

Budi, the ISM-100 needs 2 pairs of ServiceApps for maximum throughput.

One pair of ServiceApps can process no more than 1.5 Mpps in each direction.

So you need 12 VRFs (24 ServiceApps) to achieve 90 Gbps full-duplex on 6 ISM cards with a 700-byte packet size: 90 Gbps at 700 bytes per packet is roughly 16 Mpps per direction, which at 1.5 Mpps per ServiceApp pair requires about 11-12 pairs.

Also, for failover reasons, the ABF access list should be created with multiple next-hops pointing to VRFs belonging to different ISM-100 cards.
