ISM-100 to VSM-500 Migration and Redundancy Management Guide with Key Performance Differences

Nitin Pabbi
Cisco Employee

Contents

  1. ISM-100 to VSM-500 migration guide
  2. VSM-500 redundancy management configuration inputs
  3. ISM-100 to VSM-500 key performance and scalability points

Graceful insertion of VSM card on ASR9000 routers

Part-A: VSM card readiness

 

  1. Pre-Installation Information
    1. The VSM-500 card is supported on the ASR9904, ASR9006, ASR9010, ASR9912, and ASR9922 chassis
    2. The card is supported from the IOS-XR 5.1.1 release
    3. VSM-500 is supported with the RSP440, RP2, and RSP880
    4. VSM-500 supports CGN and TMS-Arbor services

 

  2. Prerequisite files for VSM operations:

 

VSM Services Infra PIE                 asr9k-services-infra.pie
CGv6 Services PIE                      asr9k-services-px.pie
FPD PIE                                asr9k-fpd-px.pie
VSM OVA Package                        asr9k-vsm-cgv6-<version>.ova
Mandatory SMU(s) for target release    asr9k-CSCxxxxx-px.pie

 

 

Part-B: Installation with service configuration

 

  1. Make sure all the above-mentioned PIEs are installed.

Check with show install summary

  2. Make sure the FPD is correctly upgraded

Check with sh hw-module fpd location all

  3. Install the mandatory SMUs

Note: To identify the bare-minimum mandatory SMUs needed for VSM in key releases using CSM, please refer to the link below.

https://supportforums.cisco.com/document/12554686/smu-list-through-cisco-software-manager-csm-recommended-smu-list-using-csm

 

  4. Install and activate the CGv6 OVA for CGN services:

 

  5. Once the VSM card is in the IOS-XR RUN state, perform the upgrade activity.

 

Note: On pre-5.3.3 IOS XR code, the CoPP TFTP configuration change below was required. From IOS-XR 5.3.3 onward, no TFTP configuration change is required.

 

RP/0/RSP0/CPU0:ESC_CGN(config)#show run

Building configuration...

control-plane

 management-plane

  out-of-band

   interface all

    allow TFTP

end

 

RP/0/RSP0/CPU0:ESC_CGN(config)#commit

 

  6. Install the VSM OVA package with CGv6 configuration:

 

Step 1: Copy the cgn.ova file to the RSP (e.g., to disk0:)

Step 2: Enable virtual service.

 

RP/0/RSP0/CPU0:Esc_CGN(config)#virtual-service enable

RP/0/RSP0/CPU0:Esc_CGN(config)#commit

 

Step 3: Install CGN OVA using CLI from execution mode:

 

RP/0/RSP0/CPU0:Esc_CGN# virtual-service install name cgn123 package disk0:/asr9k-vsm-cgv6-<version>.ova node 0/1/CPU0

Please note: Installation will take about 6-8 minutes.

 

Step 4: Check installation progress using the show virtual-service list CLI. When the status shows "Installed", installation is complete.

 

RP/0/RSP0/CPU0:Esc_CGN#sh virtual-service list

 

Mon May 17 18:01:38.310 UTC

 

Virtual Service List:

 

Name                     Status             Package Name

_________________________________________________________

 

cgn123                   Installing         asr9k-vsm-cgv6-<version>.ova

 

RP/0/RSP0/CPU0:Esc_CGN#sh virtual-service list

Mon May 17 18:10:17.291 UTC

Virtual Service List:

Name                     Status             Package Name

_________________________________________________________

 

cgn123                   Installed           asr9k-vsm-cgv6-<version>.ova

 

Activate CGv6 VM

 

Step 1: Configure CGv6 VM, 10-GE interfaces and Activate

 

RP/0/RSP0/CPU0:Esc_CGN#conf t

Mon May 17 18:10:25.525 UTC

 

RP/0/RSP0/CPU0:Esc_CGN(config)#virtual-service cgn123

RP/0/RSP0/CPU0:Esc_CGN(config-virt-service)#vnic interface tenGigE 0/1/1/0

RP/0/RSP0/CPU0:Esc_CGN(config-virt-service)#vnic interface tenGigE 0/1/1/1

………

RP/0/RSP0/CPU0:Esc_CGN(config-virt-service)#vnic interface tenGigE 0/1/1/10

RP/0/RSP0/CPU0:Esc_CGN(config-virt-service)#vnic interface tenGigE 0/1/1/11

 

RP/0/RSP0/CPU0:Esc_CGN(config-virt-service)#commit

Mon May 17 18:11:34.285 UTC

RP/0/RSP0/CPU0:Esc_CGN(config-virt-service)#activate

RP/0/RSP0/CPU0:Esc_CGN(config-virt-service)#commit

 

Step 2: Un-shut the VNIC interfaces and assign descriptions

Bring up the 10-GE interfaces on the VSM. This must be done before configuring any ServiceInfra/ServiceApp interfaces.

 -------------------------------------------------------

interface TenGigE0/1/1/0
 description VSM VNIC 1
 no shut
interface TenGigE0/1/1/1
 description VSM VNIC 2
 no shut
.......
interface TenGigE0/1/1/11
 description VSM VNIC 12
 no shut

 (Note: Adding an interface description is a best practice. It helps bring the VNIC interfaces back to the Admin Up state [post installation/configuration activity] if the VSM card is reloaded for any operational reason.)

 

Step 3: Check the VM status via the show virtual-service list CLI until it shows "Activated".

 

RP/0/RSP0/CPU0: Esc_CGN#sh virtual-service list

Mon May 2 18:12:23.863 UTC

Virtual Service List:

Name                     Status             Package Name

_________________________________________________________

 

cgn123                   Activated           asr9k-vsm-cgv6-<version>.ova

 

Please note: After activating the VM, it takes about 5 minutes for the CGv6 application processes to come up.

 

Step 4: Configure ServiceInfra interface.

 

RP/0/RSP0/CPU0:Esc_CGN#conf t

RP/0/RSP0/CPU0:Esc_CGN(config)# interface ServiceInfra 1

RP/0/RSP0/CPU0:Esc_CGN(config-int)# ipv4 address <IP> <mask>

RP/0/RSP0/CPU0:Esc_CGN(config-int)# service-location 0/1/CPU0

RP/0/RSP0/CPU0:Esc_CGN(config-int)# commit

 

  • After the ServiceInfra interface is configured, the CGN HA agent starts sending hello packets to the CGv6 application processes, retrying every 150 seconds.
  • Once a response to the hello packets is received, the configuration is sent to the CGv6 application processes.
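As a filled-in illustration of Step 4 (the 10.89.89.0/24 infra subnet is an example value, not a recommendation; the same shape appears in the reply thread below):

```
interface ServiceInfra 1
 ipv4 address 10.89.89.1 255.255.255.0
 service-location 0/1/CPU0
```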

 

Step 5: Configure the Ingress (inside) interfaces

interface TenGigE1/0/0/0
 - Add it to the inside VRF
 vrf ivrf1
 - Assign an IP address to it
 ipv4 address <IP> <mask>

interface TenGigE1/0/0/1
 vrf ivrf2
 ipv4 address <IP> <mask>

 

Step 6: Configure the Egress (outside) interfaces

interface TenGigE0/2/0/0
 - Add it to the outside VRF
 vrf ovrf1
 - Assign an IP address to it
 ipv4 address <IP> <mask>

interface TenGigE0/2/0/1
 vrf ovrf2
 ipv4 address <IP> <mask>

 

Step 7: Configure CGN and NAT44 service parameters

Define the CGN instance

service cgn cgn1
 - Define the service location
 service-location preferred-active 0/1/CPU0
 - Define the NAT44 service type and instance (one per VSM card)
 service-type nat44 nat1
  - Define the per-user port limit for the NAT44 instance
  portlimit 250
  - Define the first inside VRF
  inside-vrf ivrf1
   - Define the public IP address pool
   map outside-vrf ovrf1 address-pool 9.0.0.0/24
  - Define the second inside VRF
  inside-vrf ivrf2
   - Define the public IP address pool
   map outside-vrf ovrf2 address-pool 9.1.0.0/24

 

Step 8: Configure the Inside and Outside ServiceApp interfaces

 

Define the inside ServiceApp interfaces

interface ServiceApp 1
 - Define the VRF it belongs to (optional)
 vrf ivrf1
 - Assign an IPv4 address and netmask
 ipv4 address <IP> <mask>
 - Define the CGN instance and service type it belongs to
 service cgn cgn1 service-type nat44

interface ServiceApp 3
 vrf ivrf2
 ipv4 address <IP> <mask>
 service cgn cgn1 service-type nat44

Define the outside ServiceApp interfaces

interface ServiceApp 2
 vrf ovrf1
 ipv4 address <IP> <mask>
 service cgn cgn1 service-type nat44

interface ServiceApp 4
 vrf ovrf2
 ipv4 address <IP> <mask>
 service cgn cgn1 service-type nat44

 


 

Step 9: Define static routes for inside-to-outside traffic

router static
 - Specify the inside VRF
 vrf ivrf1
  - Define the address family
  address-family ipv4 unicast
   - Redirect all traffic to the inside ServiceApp
   0.0.0.0/0 ServiceApp 1

router static
 vrf ivrf2
  address-family ipv4 unicast
   0.0.0.0/0 ServiceApp 3

 

Step 10: Define static routes for outside-to-inside traffic

router static
 vrf ovrf1
  address-family ipv4 unicast
   - Route the public IP pool to ServiceApp 2

router static
 vrf ovrf2
  address-family ipv4 unicast
   - Route the public IP pool to ServiceApp 4
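Tying this to the pools defined under service cgn cgn1 earlier (9.0.0.0/24 via ovrf1 and 9.1.0.0/24 via ovrf2), the outside-to-inside routes would look roughly like this sketch:

```
router static
 vrf ovrf1
  address-family ipv4 unicast
   9.0.0.0/24 ServiceApp 2
 vrf ovrf2
  address-family ipv4 unicast
   9.1.0.0/24 ServiceApp 4
```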

 

Refer to the link below for the service configuration:

http://www.cisco.com/c/en/us/td/docs/routers/asr9000/software/asr9k_r5-3/cg-nat/configuration/guide/b_cgnat_cg53xasr9k/b_cgnat_cg53xasr9k_chapter_0100.html

 

  1. Perform health checks / service verification of the VSM using the commands below:
    1. show virtual-service list
    2. show services interfaces serviceinfra <>
    3. show interfaces serviceinfra <> accounting
    4. sh int tenGigE 0/<>/<>/* | i line protocol
    5. show services cgn vsm ha trace all location <>

 

Part-C: Traffic restoration on VSM

 

  1. Restore traffic.
  2. Verify the services once traffic is completely restored.
  3. Once traffic is flowing correctly on all LCs, divert the test IP pool traffic to the VSM card and verify the services:
    1. show interfaces serviceinfra <> accounting
    2. show services interfaces
    3. sh int tenGigE 0/<>/<>/* | i packets out
    4. show cgn nat44 nat1 statistics
    5. sh cgn nat44 <> inside-vrf <> counters

 

Redundancy options on the VSM card

 

  • The HA requirements for CGv6 on VSM are the same as for CGv6 on ISM; there are similarities as well as differences in the HA architecture and implementation across the two platforms.
  • Both implementations use SVI as the virtual interface architecture, so both can use the service redundancy support that the infrastructure provides and can divert traffic during failover using it. However, CGv6 on ISM runs on a native (non-virtualized) platform whereas on VSM it runs on a virtualized platform, so there are differences in the HA triggers as well as in how those triggers are handled.

 

 

Note 1: On the VSM-500 card, the service-cgv6-ha location 0/1/CPU0 puntpath-test command is not needed to enable reload for redundancy; that command applies only to the ISM-100 card.

Note 2: Data/punt path failures are detected even if the CLI is not enabled, but the card will not be reloaded if a problem is encountered between the CGv6 App and the CGv6 HA Agent (or vice versa).

           

 

VSM on ASR9000 supports 1:1 warm standby redundancy.

 

  1. Warm-standby
  • Translation state is not synchronized between active and standby; all connections are re-established on failover.
  • Pros: simple to configure; a single map pool is used.
  • Cons: only 1:1; one card stays unused in the standby state.
  • Convergence takes ~18 sec [data-path HA packets take 5 x 3 = 15 sec for detection of the failure and action, plus ~3 sec for rebuilding the translation DB]

 

  2. An alternative with ABF is available
  • ABF is used to divert ingress traffic (private / subscriber-side IP pools) towards the public / core / network side.
  • Pros: offers more options, such as n:1 redundancy; converges very quickly.
  • Cons: the CGN App on the VSM can't be monitored. To work around this, use ICMP IP SLA probes to track the infra/service state, or enable the "datapath-test" knob for problem detection and action at the CGv6 App.
  • Convergence times:
    1. ~1 sec if the VSM infra [XR operations] is impacted.
    2. ~16 sec if datapath-test is enabled for failure detection of the CGv6 App.
    3. ~4-6 sec if ICMP IP SLA with next-hop tracking is enabled between the ServiceApp and the outgoing interface IP address.
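A rough sketch of the ICMP IP SLA option with next-hop tracking (the operation number, track name, frequency, and tracked address are illustrative; verify the exact syntax against your IOS-XR release before use):

```
ipsla
 operation 10
  type icmp echo
   destination address 100.68.51.6
   frequency 5
 schedule operation 10
  start-time now
  life forever
!
track CGN-NH
 type rtr 10 reachability
!
router static
 address-family ipv4 unicast
  151.0.0.0/24 ServiceApp2 track CGN-NH
```

If the probe fails, the tracked static route is withdrawn and a floating static route with a higher administrative distance (such as the ServiceApp6 routes in the ABF example below) takes over.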

 

1:1 Warm-Standby Redundancy

 

  • Configuration

 

RP/0/RSP0/CPU0:Esc_CGN(config)#

service cgn HA_CGN

service-location preferred-active 0/1/CPU0 preferred-standby 0/2/CPU0

 

RP/0/RP0/CPU0:Esc_CGN#show services redundancy

Service type     Name                    Pref. Active        Pref. Standby     

--------------------------------------------------------------------------------

ServiceInfra     ServiceInfra1           0/1/CPU0 Active   

ServiceInfra     ServiceInfra2           0/2/CPU0 Active   

ServiceCgn       HA_CGN               0/2/CPU0 Standby    0/1/CPU0 Active   

 

To enable reload on data-path test failure for redundancy, configure the command below on the CGN router for the VSM-500 card:

RP/0/RSP1/CPU0:Esc_CGN#configure

RP/0/RSP1/CPU0:Esc_CGN(config)#service-cgv6-ha location 0/x/CPU0 datapath-test

RP/0/RSP1/CPU0:Esc_CGN(config)#commit

 

 

CGN n:1 Redundancy with ABF (NAT44)

 

  • Configuration

 

Note: Configure the datapath-test knob to detect failures between the CGv6 App and the CGv6 HA Agent on the VSM-500 card.

 

service cgn ABF-cgn-HA1

 service-location preferred-active 0/0/CPU0

 service-type nat44 nat44-ABF

  inside-vrf Inside-1

   map address-pool 10.0.0.0/24

!

service cgn ABF-cgn-HA2

 service-location preferred-active 0/1/CPU0

 service-type nat44 nat44-ABF2

  inside-vrf Inside-2

   map address-pool 10.0.1.0/24

!

service cgn ABF-cgn-HA-backup

 service-location preferred-active 0/2/CPU0

 service-type nat44 nat44-ABF-backup

  inside-vrf iBackUp-1

   map address-pool 10.0.0.0/24

 inside-vrf iBackUp-2

   map address-pool 10.0.1.0/24

!

 

ipv4 access-list CGN-ABF-HA

 10 permit ipv4 9.1.0.0/24 any nexthop1 vrf Inside-1 ipv4 100.68.51.6 nexthop2 vrf iBackUp-1 ipv4 100.68.33.6

 20 permit ipv4 9.2.1.0/24 any nexthop1 vrf Inside-2 ipv4 100.68.151.6 nexthop2 vrf iBackUp-2 ipv4 100.68.33.6

 100 permit ipv4 any any

!

router static

 address-family ipv4 unicast

  110.1.0.0/16 100.1.1.2 description Ixia-i2o-Default

  151.0.0.0/24 ServiceApp2 1 description Static-Ixia-o2i-ABF

  151.0.1.0/24 ServiceApp4 1 description Static-Ixia-o2i-ABF

  151.0.0.0/24 ServiceApp6 2 description Floating-static-Ixia-o2i-ABF

  151.0.1.0/24 ServiceApp6 2 description Floating-static-Ixia-o2i-ABF

 

Performance / scalability comparison between the ISM and VSM cards

 

 

Per Blade Limits                   ISM                                  VSM
---------------------------------  -----------------------------------  --------------------------------------------
NAT44 instances supported          1 per card                           1 per card
End-point Dependent Filtering      Not Supported                        Supported
TCP Sequence Check                 Not Supported                        Supported
CLI: map ip many-to-one            Supported                            Not Supported
Number of service infra            1                                    1
Number of service app              244 (per system)                     256 per card
IP pool supported                  /16 to /30 (max 65535 addresses)     /16 to /27 (max 65535 addresses)
Max Static Port forwarding         6K                                   6K
Max number of NAT users            1M                                   4M
FIB scale                          512k                                 1M
Configuration CLIs                 Same                                 Same
Uses SVI                           Yes                                  Yes
Network Processor                  No, handled by a dedicated process   Yes (Typhoon)
Egress FIB Lookup                  Within CGv6 App                      On Typhoon NPU
ServiceApp placement               Associated with Niantic port/VQI     Associated with NP ports / Niantic ports
# of CGv6 instances                8                                    48
Stateless protocols (in CGN card)  6rd, MAP-T/E                         6rd, MAP-T/E
Inline support                     Yes for SL protocols                 With Typhoon & Tomahawk LC for MAP-T/E & 6rd

 

 

 

25 Replies

mkhalil10
Spotlight

Hi Nitin

I have prepared my configuration

My concern is: where do I apply the ABF, and how many outside VRFs do I have to configure?

 

vrf OUTSIDE

address-family ipv4 unicast

 

vrf INSIDE-1

address-family ipv4 unicast

 

vrf INSIDE-1-BACKUP

address-family ipv4 unicast

 

hw-module service cgn location 0/1/CPU0

hw-module service cgn location 0/2/CPU0

 

ipv4 access-list ISM_ABF

10 permit ipv4 192.168.199.0/24 any nexthop1 vrf INSIDE-1 ipv4 9.9.9.2 nexthop2 vrf INSIDE-1-BACKUP ipv4 19.19.19.2

30 permit ipv4 any any

 

interface GigabitEthernet0/0/0/9

vrf INSIDE-1

ipv4 address 192.168.199.1 255.255.255.0

ipv4 access-group ISM_ABF ingress hardware-count

 

interface GigabitEthernet0/0/0/10

vrf INSIDE-1-BACKUP

ipv4 address 192.168.199.1 255.255.255.0

shutdown

 

interface ServiceApp1

vrf INSIDE-1

ipv4 address 9.9.9.1 255.255.255.252

service cgn cgn1 service-type nat44

 

interface ServiceApp2

ipv4 address 10.10.10.1 255.255.255.252

service cgn cgn2 service-type nat44

 

interface ServiceApp3

vrf INSIDE-1-BACKUP

ipv4 address 19.19.19.1 255.255.255.252

service cgn cgn2 service-type nat44

 

interface ServiceApp4

ipv4 address 20.20.20.1 255.255.255.252

service cgn cgn2 service-type nat44

 

interface ServiceInfra1

ipv4 address 10.89.89.1 255.255.255.0

service-location 0/1/CPU0

 

interface ServiceInfra2

ipv4 address 10.93.93.1 255.255.255.0

service-location 0/2/CPU0

 

router static

address-family ipv4 unicast

  0.0.0.0/0 172.66.66.65

  85.159.218.160/27 ServiceApp4

  85.159.218.192/27 ServiceApp2

 

vrf INSIDE-1

  address-family ipv4 unicast

   0.0.0.0/0 ServiceApp1

 

vrf INSIDE-1-BACKUP

  address-family ipv4 unicast

   0.0.0.0/0 ServiceApp3

 

service cgn cgn1

service-location preferred-active 0/1/CPU0

service-type nat44 nat1

  portlimit 4096

  alg ActiveFTP

  inside-vrf INSIDE-1

   map outsideServiceApp ServiceApp2 address-pool 85.159.218.192/27

 

  protocol udp

   session initial timeout 240

   session active timeout 600

 

  protocol tcp

   session initial timeout 240

   session active timeout 600

 

  protocol icmp

   timeout 60

 

  refresh-direction Outbound

 

service cgn cgn2

service-location preferred-active 0/2/CPU0

service-type nat44 nat1

  portlimit 4096

  alg ActiveFTP

 

  inside-vrf INSIDE-1-BACKUP

   map outsideServiceApp ServiceApp4 address-pool 85.159.218.160/27

 

  protocol udp

   session initial timeout 240

   session active timeout 600

 

  protocol tcp

   session initial timeout 240

   session active timeout 600

 

  protocol icmp

   timeout 60

 

  refresh-direction Outbound

Hey mkhalil,

My concern is: where do I apply the ABF, and how many outside VRFs do I have to configure?

>> A few pointers:

  • The inside VRF must be non-default
  • The outside VRF is optional; you can use the default or a user-created VRF

You don't need the below commands for VSM:

hw-module service cgn location 0/1/CPU0
hw-module service cgn location 0/2/CPU0

You need to apply the ABF at the default table [that's the way it works].

All you need is the mapping:

ipv4 access-list <ACL name>

10 permit ipv4 <access network pool> any nexthop1 vrf <Inside VRF1 map to VSM1> ipv4 <address> nexthop2 vrf <Inside VRF2 map to VSM2> ipv4 <address>

This maps the internal traffic to a VRF for NAT.

Once NAT is done, the traffic is redirected to the MAPPED outside pool.

In your case is :-

inside-vrf INSIDE-1

   map outsideServiceApp ServiceApp2 address-pool 85.159.218.192/27

here I find the command is wrong; it seems you drafted the config on paper but didn't apply it, otherwise you would have got an error.

You need to fix the syntax, and here it is:

map outside-vrf outside address-pool <>
=== Mapping to the VRF “outside” on the public side
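For illustration, with the pool and VRF names from this thread (a sketch, not verified on a live router): the first form maps the pool to a named outside VRF on the public side, while the second maps it to the default VRF.

```
inside-vrf INSIDE-1
 map outside-vrf OUTSIDE address-pool 85.159.218.192/27
!
inside-vrf INSIDE-1
 map address-pool 85.159.218.192/27
```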

Hope these clues help you.

Thanks

Nitin Pabbi

 

Thanks Nitin for your support

I will try to get to a final answer with you

I can remove the outside VRF and just use the outside service app under the service cgn, right?

Why can't I use the hw-module command? It's not mandatory?

The inside interface can be part of the global routing table, so there is no need for the ingress interface to be part of the inside VRF?

Below is my final modified configuration:


vrf INSIDE-1
 address-family ipv4 unicast

vrf INSIDE-1-BACKUP
 address-family ipv4 unicast

hw-module service cgn location 0/1/CPU0
hw-module service cgn location 0/2/CPU0

ipv4 access-list ISM_ABF
 10 permit ipv4 192.168.199.0/24 any nexthop1 vrf INSIDE-1 ipv4 9.9.9.2 nexthop2 vrf INSIDE-1-BACKUP ipv4 19.19.19.2
 30 permit ipv4 any any

interface GigabitEthernet0/0/0/9
 description Inside_Traffic
 ipv4 address 192.168.199.1 255.255.255.0
 ipv4 access-group ISM_ABF ingress

interface ServiceApp1
 vrf INSIDE-1
 ipv4 address 9.9.9.1 255.255.255.252
 service cgn cgn1 service-type nat44

interface ServiceApp2
 ipv4 address 10.10.10.1 255.255.255.252
 service cgn cgn2 service-type nat44

interface ServiceApp3
 vrf INSIDE-1-BACKUP
 ipv4 address 19.19.19.1 255.255.255.252
 service cgn cgn2 service-type nat44

interface ServiceApp4
 ipv4 address 20.20.20.1 255.255.255.252
 service cgn cgn2 service-type nat44

interface ServiceInfra1
 ipv4 address 10.89.89.1 255.255.255.0
 service-location 0/1/CPU0

interface ServiceInfra2
 ipv4 address 10.93.93.1 255.255.255.0
 service-location 0/2/CPU0

router static
 address-family ipv4 unicast
  0.0.0.0/0 172.66.66.65
  85.159.218.160/27 ServiceApp4
  85.159.218.192/27 ServiceApp2

 vrf INSIDE-1
  address-family ipv4 unicast
   0.0.0.0/0 ServiceApp1

 vrf INSIDE-1-BACKUP
  address-family ipv4 unicast
   0.0.0.0/0 ServiceApp3

service cgn cgn1
 service-location preferred-active 0/1/CPU0
 service-type nat44 nat1
  portlimit 4096
  alg ActiveFTP
  inside-vrf INSIDE-1
   map outsideServiceApp ServiceApp2 address-pool 85.159.218.192/27

  protocol udp
   session initial timeout 240
   session active timeout 600

  protocol tcp
   session initial timeout 240
   session active timeout 600

  protocol icmp
   timeout 60

  refresh-direction Outbound

service cgn cgn2
 service-location preferred-active 0/2/CPU0
 service-type nat44 nat1
  portlimit 4096
  alg ActiveFTP

  inside-vrf INSIDE-1-BACKUP
   map outsideServiceApp ServiceApp4 address-pool 85.159.218.160/27

  protocol udp
   session initial timeout 240
   session active timeout 600

  protocol tcp
   session initial timeout 240
   session active timeout 600

  protocol icmp
   timeout 60

  refresh-direction Outbound

Thanks a lot in advance

BR,

Mohammad

Hey Mohammad,

I can remove the outside VRF and just use the outside service app under the service cgn, right?

   map address-pool <>
>> Mapping to the default VRF on the public side

The above procedure lets you configure the outside pool without using an OUTSIDE VRF.

Why can't I use the hw-module command? It's not mandatory?

>> This CLI is for the ISM card, to initialize CGN services. On the VSM card, installing the CGN OVA enables the CGN services, so this extra CLI is not needed.


The inside interface can be part from the global routing table , so there is no need for the ingress interface to be part of the inside VRF?

>>> Yes, the inside interface can be part of the default table.

The config structure looks fine. Please try this on a test router before deploying it in production.

Thanks

Nitin Pabbi

Regarding the ABF redundancy: I tested the configuration below yesterday. Please find my comments, and I'd appreciate your help:

vrf INSIDE-1
 address-family ipv4 unicast

vrf INSIDE-1-BACKUP
 address-family ipv4 unicast

hw-module service cgn location 0/1/CPU0
hw-module service cgn location 0/2/CPU0

ipv4 access-list ISM_ABF
 10 permit ipv4 192.168.199.0/24 any nexthop1 vrf INSIDE-1 ipv4 9.9.9.2 nexthop2 vrf INSIDE-1-BACKUP ipv4 19.19.19.2
 20 permit ipv4 any any

interface GigabitEthernet0/0/0/9
 description Inside_Traffic
 ipv4 address 192.168.199.1 255.255.255.0
 ipv4 access-group ISM_ABF ingress

interface ServiceApp1
 vrf INSIDE-1
 ipv4 address 9.9.9.1 255.255.255.252
 service cgn cgn1 service-type nat44

interface ServiceApp2
 ipv4 address 10.10.10.1 255.255.255.252
 service cgn cgn1 service-type nat44

interface ServiceApp3
 vrf INSIDE-1-BACKUP
 ipv4 address 19.19.19.1 255.255.255.252
 service cgn cgn2 service-type nat44

interface ServiceApp4
 ipv4 address 20.20.20.1 255.255.255.252
 service cgn cgn2 service-type nat44

interface ServiceInfra1
 ipv4 address 10.89.89.1 255.255.255.0
 service-location 0/1/CPU0

interface ServiceInfra2
 ipv4 address 10.93.93.1 255.255.255.0
 service-location 0/2/CPU0

router static
 address-family ipv4 unicast
  0.0.0.0/0 172.66.66.65
  85.159.218.160/27 ServiceApp4
  85.159.218.192/27 ServiceApp2

 vrf INSIDE-1
  address-family ipv4 unicast
   0.0.0.0/0 ServiceApp1

 vrf INSIDE-1-BACKUP
  address-family ipv4 unicast
   0.0.0.0/0 ServiceApp3

service cgn cgn1
 service-location preferred-active 0/1/CPU0
 service-type nat44 nat1
  portlimit 4096
  alg ActiveFTP
  inside-vrf INSIDE-1
   map outsideServiceApp ServiceApp2 address-pool 85.159.218.192/27

  protocol udp
   session initial timeout 240
   session active timeout 600

  protocol tcp
   session initial timeout 240
   session active timeout 600

  protocol icmp
   timeout 60

  refresh-direction Outbound

service cgn cgn2
 service-location preferred-active 0/2/CPU0
 service-type nat44 nat2
  portlimit 4096
  alg ActiveFTP

  inside-vrf INSIDE-1-BACKUP
   map outsideServiceApp ServiceApp4 address-pool 85.159.218.160/27

  protocol udp
   session initial timeout 240
   session active timeout 600

  protocol tcp
   session initial timeout 240
   session active timeout 600

  protocol icmp
   timeout 60

  refresh-direction Outbound

The G0/0/0/9 interface is where the host is connected (incoming traffic). I removed the VRF binding from it and kept it in the global routing table, but it did not work.
When I bind the interface to VRF INSIDE-1, it works with ABF applied.
When I bind the interface to VRF INSIDE-1-BACKUP with ABF applied, it does not work; when I remove the ABF, it works.
As well, when the ABF is applied I can access the Internet, but there is no output in show cgn nat44 nat1 statistics or show cgn nat44 nat2 statistics.
What is wrong with the configuration I have done? By the way, I removed the VRF OUTSIDE because I am using the outside ServiceApp interface and the pool mapping under the service cgn configuration, as you can see above.

Hi Nitin

Hope this finds you well

I tried to apply the ABF on the G0/0/0/9 interface and it did not work, and I tried to leak the traffic from the global routing table into the inside VRF; it did not work:

router static

address-family ipv4 unicast

vrf INSIDE-1

address-family ipv4 unicast

192.168.199.0/24 Gig0/0/0/9

and it did not work. I have opened a case with Cisco, but so far I've been told it's working; it is not, and I am not aware of what mistakes I am making.

SR : 680746749 

BR,

Mohammad

Hi Nitin

I have managed to get the setup working as below:

router static

address-family ipv4 unicast

vrf INSIDE-1

address-family ipv4 unicast

192.168.199.0/24 vrf default Gig0/0/0/9
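Putting the working pieces together, the inside VRF then carries both the default route into the ServiceApp and the leak route back to the global-table ingress interface (consolidated from the configs in this thread):

```
router static
 vrf INSIDE-1
  address-family ipv4 unicast
   0.0.0.0/0 ServiceApp1
   192.168.199.0/24 vrf default Gig0/0/0/9
```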

Thanks for your support

BR,

Mohammad

Good to hear that, Mohammad. Great work.

Thanks for the update

Nitin Pabbi

Thanks Nitin

I need your help with something please. I have in the HQ two ASR9Ks, each equipped with a VSM; they are operating as HSRP active/standby (so, if one of these nodes fails, the traffic is redirected to the other node by means of HSRP).

Now, the two ISM cards installed on the ASR9Ks in our case are located in a different geographic location (i.e., it's considered a DR site).

What should I do in order to redirect all the traffic handled by the VSM modules @ HQ to the DR site in case of site failure?

Is what's called extra-chassis redundancy what I am looking for?

Thanks again

BR,

Mohammad 

Hey Mohammad,

What should I do in order to redirect all the traffic handled by the VSM modules @ HQ to the DR site in case of site failure?

>> This is somewhat tricky; you need to check the scale before designing failover redundancy from VSM to ISM, because the VSM has more throughput capacity than the ISM.

To achieve this objective you need to work with routing. You need to be cautious with ingress traffic redirection, i.e. from the access network towards the CGN node.

For egress traffic, eBGP sessions and their attributes will help you redirect the traffic.

If you have a scalable internal infrastructure that can re-route the traffic between distantly connected nodes, then you can achieve this objective.

Is what's called extra-chassis redundancy what I am looking for?

>>> There are 2 types of redundancy for CGN on VSM:

a) Warm redundancy

b) Redundancy using ABF. With ABF you can meet the traffic failover requirement for locally or distantly connected nodes.

For traffic redundancy between distantly connected nodes we use the term "Geo-redundancy"

HTH

Thanks

Nitin Pabbi 

Hi Nitin and thanks for your continuous support

Do you have a reference for Geo-redundancy, or configuration examples that can assist me with this?

I have read in some presentation about extra-chassis redundancy, and that I have to build GRE tunnels to avoid routing loops.

Thanks again

BR,

Mohammad

Hey Mohammad,

There is no specific config for Geo-redundancy; you just need to work with ABF, which does a lookup in the local FIB for next-hop reachability for traffic forwarding.

If the distant node [which you want to use as standby] is accessible via the same access network to which the primary CGN router is connected, then it is possible in the same way as configuring ABF redundancy between 2 nodes on the same premises.

You don't need to use a GRE tunnel here; a common IPv4 infrastructure is needed for traffic forwarding.

Thanks

Nitin Pabbi

Hi Nitin

I have a quick question regarding public pools

If I configure multiple /24 public pools for my NAT44 service, how will the router assign addresses? Randomly, or does sequence play a role?

Hey Mohammad,

The assignment would be random, because of the mapping of the apps to NPs and virtual cores.

Thanks

Nitin Pabbi