If there is congestion in the LP queue, it has no effect on the HP queue. On NCS5500 we can't use a policer on egress; alternatively you can use a shaper. If you have a shaper configured and this specific queue needs low latency, it is recommended to decrease the queue-limit to a very low value.
Since IOS-XR 6.1.2 on NCS5500 we are using a match on qos-group for egress (re)marking actions. For queue selection and the corresponding queuing actions we are using traffic-class.
If you change your class-maps to match on traffic-class, the service-policy will be accepted. Please note you need to adapt your ingress policy accordingly to set traffic-class instead of qos-group.
If you need both queuing actions and remarking, we allow applying two service-policies on egress.
Please also note that policing is not supported in the egress direction. You can use a shaper instead. If you need low-latency queuing in the shaped queue, you can set the queue-limit to a low value like 500 us.
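As a sketch (class, policy and interface names are hypothetical and values are examples only; exact syntax may differ per release): the ingress policy sets traffic-class and qos-group, one egress policy queues on traffic-class with a shaper and a low queue-limit, and a second egress policy remarks on qos-group.

```
! Ingress policy: sets traffic-class for queue selection and
! qos-group for egress remarking
class-map match-any CM-VOICE
 match dscp ef
!
policy-map PM-INGRESS
 class CM-VOICE
  set traffic-class 5
  set qos-group 5
 !
 end-policy-map
!
! Egress queuing policy: matches on traffic-class; a shaper is used
! instead of a policer, and the queue-limit is kept low for low latency
class-map match-any CM-TC5
 match traffic-class 5
!
policy-map PM-EGRESS-QUEUING
 class CM-TC5
  shape average 100 mbps
  queue-limit 500 us
 !
 end-policy-map
!
! Egress marking policy: matches on qos-group
class-map match-any CM-QG5
 match qos-group 5
!
policy-map PM-EGRESS-MARK
 class CM-QG5
  set dscp ef
 !
 end-policy-map
!
interface HundredGigE0/0/0/10
 service-policy input  PM-INGRESS
 service-policy output PM-EGRESS-QUEUING
 service-policy output PM-EGRESS-MARK
```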
CRS-1 and CRS-3 short recap
The reader of the following article should be familiar with MQC and the corresponding CLI on CRS-1 and CRS-3.
The QoS implementation on CRS-X follows the SSE architecture on the LCs. The QoS components on CRS-X are not shared with the CRS-1 and CRS-3 implementation, although the config and CLI are alike. This includes the three components of MQC
as well as the majority of the debug and "show" commands.
On CRS-1 and CRS-3 QoS can be configured on the components as follows:
ingressq - service-policy input
Addresses hw queues on ingressq asic. Priority traffic will use the HP queue on the fabric and fabricq asic.
egressq - service-policy output
Addresses hw queues on egressq asic.
fabric-QoS - switch-fabric service-policy
Addresses hw queues on fabricq asic. Priority traffic will also use the HP queue on the fabric.
The packet buffer pool on ingressq and egressq is 1GB. This buffer pool is shared by all ports on the corresponding LC.
QoS on CRS-X
Because CRS-X introduces a new architecture there are some differences compared to CRS-1 and CRS-3.
On CRS-1 and CRS-3 we have two sets of queues on the ingressq ASIC: the shape queues and the fabric queues.
On CRS-X we don't have the concept of ingress queuing. Thus, an ingress service-policy does not accept queuing related commands in any class like "bandwidth remaining percent", "random-detect" and "queue-limit".
On CRS-X ingress the shape queue is removed and we only have fabric queues. There are 6144 fabric queues on each MSC/FP-X: 3072 are UC HP queues and 3072 are UC LP queues. In addition there is one multicast low and one multicast high priority queue per FIA (Fabric Interface ASIC).
On the fabric stages S2 and S3, unicast HP and LP as well as multicast HP and LP queues are present as part of the Switch Fabric Element (SFE) ASIC.
When an ingress service-policy containing a "priority" class is applied, priority queues on the ingress FIA, on S2 and S3, as well as on the destination (egress) FIA are used. In the case of multicast traffic, it is placed on the corresponding HP multicast queue. An ingress service-policy can be deployed together with fabric QoS. Please note that fabric QoS will overwrite the priority class match of the ingress service-policy. Before 5.3.0 there is a performance impact of 7% on the ingress LC if ingress QoS and fabric QoS are deployed together.
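A minimal sketch of such an ingress policy (hypothetical names): classification and priority are accepted, while queuing actions like "bandwidth remaining percent", "random-detect" or "queue-limit" would be rejected in the ingress direction on CRS-X.

```
! Ingress classification with a priority class; no queuing actions
class-map match-any CM-HP
 match precedence 6 7
!
policy-map PM-INGRESS-CRSX
 class CM-HP
  priority
 !
 class class-default
 !
 end-policy-map
!
interface HundredGigE0/1/0/0
 service-policy input PM-INGRESS-CRSX
```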
Egressq packet buffer on CRS-X LCs
On CRS-X we have an egress packet buffer pool of 750MB per NPU (PAT). This pool is shared by the ports belonging to the NPU (slice). 30ms @ 200G per NPU is available.
Congestion on CRS-X LCs
Because discarding is the preferred behavior in case of congestion towards the NPU, a discard shaper has been implemented on CRS-X LCs between FIA and NPU.
The discard mechanism drops traffic at the source at per-queue granularity instead of penalizing all the traffic going to a fabricq destination.
Initially the discard shaper was implemented with 110G; since 5.1.3 it is 105G. If traffic destined to e.g. a 100GE egress port goes beyond 105G, backpressure and discard drops will be observed and the corresponding fabric queue builds up.
Example outputs (110G is sent to the FIA):
RP/0/RP0/CPU0:CRS-X#show controllers fabricq queues instance 0 loc 1/4/cpu0
|Type/Ifname |Port| SWQ | HWQ | Q |P-quanta|Q-quanta| HighW | LowW |Q Len |BW |
| | num| num | num |pri| KBytes | KBytes | KBytes| KBytes|KBytes |(kbps) |
|HundredGigE1/4/0/3 | 1| 129| 153| BE| 18| 76| 37500| 33750| 27216|100000000|
RP/0/RP0/CPU0:CRS-X#sh controllers fabricq stat detail loc 1/4/CPU0
Location : 1/4/CPU0
Asic Instance : 1
Fabric Destination Address: 137
Last Statistic Cleared : Thu Jan 1 00:00:00 1970
Resource Total Used Free Free %
Free Queue Elements 2097136 1088308 1008828 48.1050%
Back Pressure Asserted Counters:
Global PGI to OBF BP : 2105294 (+ 0 )
LP UC BP : 137513279341146 (+ 3133386326 ) => 1.7G
HP UC BP : 0 (+ 0 )
LP MC BP : 137466162161097 (+ 3132319182 ) => 1.7G
HP MC BP : 0 (+ 0 )
PMI to OBF BP : 18917025 (+ 0 )
RQL to OBF BP : 2976 (+ 0 )
OBF to QMG BP : 190977342134364 (+ 4345434223 )
LP UC BP : 190977323210974 (+ 4345434223 ) => 2.3G
HP UC BP : 3335 (+ 0 )
LP MC BP : 822 (+ 0 )
HP MC BP : 822 (+ 0 )
CPU BP : 822 (+ 0 )
PMI BP : 18917589 (+ 0 )
RQL to QMG BP : 2976 (+ 0 )
PSM to PCL BP : 3221 (+ 0 )
QMG to PCL BP : 591003 (+ 0 )
On the ingress LC discard drops are incrementing:
RP/0/RP0/CPU0:CRS-X#show controllers ingressq statistics location 1/10/cpU0 | in discard
Wed Dec 7 14:27:51.064 UTC
discard drops : 190978667452 ( 9636990959268 bytes)
RP/0/RP0/CPU0:CRS-X#show controllers ingressq statistics location 1/10/cpU0 | in discard
Wed Dec 7 14:27:51.983 UTC
discard drops : 190978947130 ( 9637005222846 bytes)
If congestion in this direction is expected, three options are available to avoid drops of priority traffic. The assumption is that an egress service-policy has been configured; it won't cover traffic > 105Gbps.
(1) Fabric QoS
Here we have three queues available, priority, AF and BE. Fabric QoS is enabled globally and the corresponding service-policy applies system wide.
RP/0/RP0/CPU0:CRS-X(config)#switch-fabric service-policy <Name>
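A sketch of a fabric QoS policy using the three available queues (priority, AF and BE); names and match criteria are hypothetical and the supported actions may vary by release.

```
! Fabric QoS: one priority, one AF and the BE (class-default) queue
class-map match-any CM-FAB-PRIO
 match precedence 6 7
!
class-map match-any CM-FAB-AF
 match precedence 3 4
!
policy-map PM-FABRIC
 class CM-FAB-PRIO
  priority
 !
 class CM-FAB-AF
  bandwidth remaining percent 70
 !
 class class-default
 !
 end-policy-map
!
switch-fabric service-policy PM-FABRIC
```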
(2) Ingress QoS
Has to be configured on all ingress interfaces.
(3) Ingress QoS AND fabric QoS.
It is important to remember that if an ingress QoS policy is applied to an interface and the fabric QoS policy has been applied to the router, then the ingress MSC/FP's RX PSE has to perform two classification cycles.
This has an impact on the forwarding capacity of the linecard; it is about 20% on CRS-1 and CRS-3 based LCs.
Ingress QoS and fabric QoS should be in sync and should complement each other.
Overview

Power management support was introduced for the CRS platform from IOS-XR version 4.3.0 onward. The implementation includes tracking of the power available to the chassis and the power consumed, as well as chassis power zone monitoring for fixed (TDI) and modular (Arctic) power systems. It is limited to a static check against known worst-case numbers and to logging warnings and alarms. It also implements a user interface (CLI) to display the worst-case power consumption and the power availability. Power management is available for all LCs (MSC and PLIM) on CRS-1 and CRS-3. It covers the 4, 8 and 16 slot chassis including RPs, Switch Fabric Cards, Alarm Modules, Fan Trays and Fan Controllers. On the fixed (TDI) power system the monitoring is done per power zone.

Scope and Limitations

Power management is based on a static worst-case power table and slot power data. For the power usage calculations, worst-case (full load on all modules) power consumption data from the internal tables is always assumed. It can't be configured. The software will only issue warnings; modules are allowed to function even if they exceed the calculated chassis power availability. When the total power consumption is calculated, the power consumption of all mandatory cards like RPs, Switch Fabric Cards, Fan Trays, Fan Controllers and Alarm Modules is always added to the total chassis power usage, independent of whether they are physically present. This prevents exceeding the power budget when redundant mandatory modules are added. SPAs, XFPs etc. are also included in the calculated power consumption, independent of whether they are inserted. For the modular (Arctic) power system, the total power availability is the sum of all power modules in both shelves minus one. This one power module is reserved to cover a single module failure. The software will issue a separately worded warning and alarm when the power consumption crosses this threshold.
Alarms

A major alarm is generated by the shelf_mgr process if the power consumption is deemed to exceed the available power budget. The alpha display is set accordingly:

Modular (Arctic) power system: PWR CRITICAL
Fixed (TDI) power system: ZONEX PWR CRITCL (X = power zone)

A minor alarm is set when the redundancy threshold is crossed. The alpha display is set to PWR LOW.

Syslog

On systems with the fixed power system the following syslog message is issued when the worst-case power budget is exceeded in a zone:

%PLATFORM-SHELFMGRV2-2-INSUFFICIENT_ZONE_POWER : Power allotted in zone 2 has exceeded the available zone power budget. Please check the 'show power' command to resolve this situation.

On systems with the modular power system the following syslog message is issued when the worst-case power budget is exceeded:

%PLATFORM-SHELFMGRV2-2-INSUFFICIENT_RACK_POWER : Power allotted to cards in this rack has exceeded the available rack power budget. Please check the 'show power' command to resolve this situation.

The following message is printed on a system with the modular power system when the worst-case consumption exceeds the capacity needed to provide shelf redundancy:

%PLATFORM-SHELFMGRV2-4-POWER_MODULE_REDUNDANCY_LOST : Rack power is now being allotted from all power modules. Power module redundancy is no longer available, a single power module failure might result in card power loss.

CLI

RP/0/RP0/CPU0:CRS-X(admin)#show power ?
  allotted   displays power consumption information
  capacity   displays power capacity information
  summary    displays a summary of the power information

Examples:

RP/0/RP0/CPU0:CRS-X(admin)#sh power allotted location 0/0/cpu0
Tue Sep 30 12:36:38.240 UTC
nodeid = 0x2a000001
Node       Card Type       State       PID              RealTime   WorstCase
                                                        Power (W)  Power(W)
-------------------------------------------------------------------------------
0/0/*      MSC-X           POWERED UP  CRS-MSC-X        431.52     680.00
0/0/PL0    40-10GbE        POWERED UP  40X10GE-WLO      56.09      100.00

RP/0/RP0/CPU0:CRS-X(admin)#show power allotted rack 0
Tue Sep 30 13:17:26.767 UTC
Node       Card Type       State       PID              RealTime   WorstCase
                                                        Power (W)  Power(W)
-------------------------------------------------------------------------------
0/0/*      MSC-X           POWERED UP  CRS-MSC-X        431.52     680.00
0/0/PL0    40-10GbE        POWERED UP  40X10GE-WLO      56.09      100.00
0/2/*      FP-X            POWERED UP  CRS-FP-X         439.26     700.00
0/2/PL0    4-100GbE        POWERED UP  4X100GE-LO       98.23      110.00
0/3/*      MSC-140G        POWERED UP  CRS-MSC-140G     *          450.00
0/3/PL0    6-10GE-WLO-FLE  POWERED UP                   *          175.00
0/4/*      FP-X            UNPOWERED   CRS-FP-X         *          60.00
0/7/*      MSC-X           POWERED UP  CRS-MSC-X        431.02     680.00
0/7/PL0    40-10GbE        POWERED UP  40X10GE-WLO      66.62      100.00
0/8/*      MSC-140G        POWERED UP  CRS-MSC-140G     *          450.00
0/8/PL0    6-10GE-WLO-FLE  POWERED UP                   *          175.00
0/14/*     MSC-X           POWERED UP  CRS-MSC-X        443.33     680.00
0/14/PL0   4-100GbE        POWERED UP  4X100GE-LO       76.25      110.00
0/RP0/*    RP-X86v1        POWERED UP  CRS-16-PRP-12G   171.05     225.00
0/RP1/*    RP-X86v1        POWERED UP  CRS-16-PRP-12G   172.45     225.00
0/SM0/*    FC-400G/S       POWERED UP  CRS-16-FC400/S   78.57      131.00
0/SM1/*    FC-400G/S       POWERED UP  CRS-16-FC400/S   76.25      131.00
0/SM2/*    FC-400G/S       POWERED UP  CRS-16-FC400/S   79.38      131.00
0/SM3/*    FC-400G/S       POWERED UP  CRS-16-FC400/S   76.76      131.00
0/SM4/*    FC-400G/S       POWERED UP  CRS-16-FC400/S   77.26      131.00
0/SM5/*    FC-400G/S       POWERED UP  CRS-16-FC400/S   76.45      131.00
0/SM6/*    FC-400G/S       POWERED UP  CRS-16-FC400/S   76.60      131.00
0/SM7/*    FC-400G/S       POWERED UP  CRS-16-FC400/S   80.20      131.00
0/FC0/*    FAN-CT          POWERED UP  CRS-16-FAN-CT    *          110.00
0/FC1/*    FAN-CT          POWERED UP  CRS-16-FAN-CT    *          110.00
0/AM0/*    ALARM-B         POWERED UP  CRS-16-ALARM-B   *          11.00
0/AM1/*    ALARM-B         POWERED UP  CRS-16-ALARM-B   *          11.00
0/FAN-TR0  FAN TRAY        POWERED UP  CRS-16-FANTRAY   *          1215.00
0/FAN-TR1  FAN TRAY        POWERED UP  CRS-16-FANTRAY   *          1215.00
NOTES:
Real time power being consumed by the rack is 4089.096W & worst case power that can be consumed by rack is 8640.000W
'*' Card doesn't support capturing real time power

RP/0/RP0/CPU0:CRS-TDI(admin)#sh power summary rack 0
Location   Power Capacity  Power Allotted  Power Available  Power State
-----------------------------------------------------------------------
Rack 0:
 Zone 1:   2200.0W         2071.0W         129.0W           OK
 Zone 2:   2200.0W         1391.0W         809.0W           OK
 Zone 3:   2200.0W         2220.0W         0.0W             INSUFFICIENT
 Zone 4:   2200.0W         2031.0W         169.0W           OK
 Zone 5:   2200.0W         1391.0W         809.0W           OK
 Zone 6:   2200.0W         2220.0W         0.0W             INSUFFICIENT

Zones 3 and 6 exceed the worst-case (full load on all modules) power capacity, but all modules will continue to work as long as the real power draw stays below the available power capacity. If the real power consumption exceeds the available budget of a zone, the whole zone is shut down. If the modules cannot be shifted to avoid exceeding the power capacity, a migration to the modular power system has to be considered.
The nominal input voltage of the modular DC power-subsystem on NCS6k is -48 VDC or -60 VDC. It accepts an input tolerance in the range from –40 to –72 VDC. Alarms are generated if certain thresholds are crossed.
Minor: -44V Major: -42V Critical: -40.75V
A low battery alarm is generated if the threshold is crossed for 12 or more seconds.
The following example displays a major alarm message if the corresponding threshold is crossed:
0/RP0/ADMIN0:Feb 14 00:48:54.994 : envmon: %PKT_INFRA-FM-3-FAULT_MAJOR : ALARM_MAJOR :Power Module Warning (low input voltage) :DECLARE :0/PT0-PM2: Power tray 0 power module 2 is under DC_PEM_LOW_BATTERY_MAJOR condition(Input < 42V).
The output of show alarm displays the alarm condition on the system like in the following example:
sysadmin-vm:0_RP0# show alarm
Mon Jun 30 20:18:19.358 UTC
Location Severity Group Set time Description
0/PT0-PM2 major environ 06/30/14 19:16:14 (Input < 42V): Check Battery
The shutdown thresholds are as below:
SW shutdown: -40V HW shutdown: -39V
The following example displays the critical alarm message if the input voltage is below -40V on all power modules:
0/RP0/ADMIN0:Feb 14 00:48:53.432 : envmon: %PKT_INFRA-FM-2-FAULT_CRITICAL : ALARM_CRITICAL :Low input voltage chassis shutdown :DECLARE :0: CHASSIS SHUTDOWN due to all power modules in failed or input voltage < 40V condition.
0/RP0/ADMIN0:Feb 14 00:48:53.474 : shelf_mgr: %INFRA-SHELF_MGR-3-FAULT_ACTION_RACK_FORCED_SHUTDOWN : Forced shutdown requested for rack 0
The modular (Arctic) power subsystem accepts input DC power in the range from –40 to –72 VDC. Alarms are generated, accompanied by a corresponding syslog message, if certain thresholds are crossed.

Minor: -44V Major: -42V

The following example displays a minor alarm message if the "Minor" threshold is crossed:

envmon: %PLATFORM-ENVMON-4-ALARM : MINOR alarm generated by voltage 43442mV since it is less than 44000mV on Power Shelf A - PEM A1-INPUT1 , check battery

The shutdown thresholds are as listed below:

SW shutdown: -40.75V HW shutdown: -39.5V
Max. number of sessions per LC (includes single and multipath)
Minimum timer interval for multipath sessions
Minimum timer interval for single path sessions
Minimum BFD multiplier
PPS rate per LC for multipath sessions
PPS rate per LC for singlepath sessions
Max. number of sessions per system
Please note the minimum timer intervals are tested values. Although the configuration allows a minimum interval of 15ms, it is not recommended to deploy it because it may cause too many false positives.
BFD and QoS:
Locally originated BFD control packets (in asynchronous or echo mode) are generated by the LC CPU and hence placed directly in the high-priority queue and transmitted with IPv4 precedence set to 6, regardless of whether a QoS policy is present on the interface. Any QoS policy applied to the interface will be ignored, with the packets being placed in the interface's default high-priority queue. This behavior is different from other locally originated control packets (LOCP), which are originated from the RP and hence are subject to the service policy configured on the egress interface.
From IOS XR 3.8 release onwards the ingress ASIC is able to identify transit BFD Echo packets and will mark them as 'vital' and place them into the high-priority queue without the need for a specific QOS policy / class-map statement in the ingress direction. The 'vital' bit setting ensures that under congestion, the packets are not dropped on either the ingress or egress Line Card.
If an ingress QOS policy is present on the interface on which BFD echo packets are received, then the BFD Echo packets are marked as vital packets and all QOS actions of the matching class except for tail drop and WRED are performed on the packets. The BFD Echo packets will still be placed in high priority queue, overriding the queue selected by the ingress QoS policy. In the egress direction, the BFD Echo packets are treated like other vital packets (LOCP) and are sent to the high priority queue of the interface.
If no QoS policy is configured, the vital packet will take the high-priority queue.
If a QoS policy is configured on the interface and the policy has no priority class, then the vital packet will take the high-priority queue.
If a QoS policy is configured on the interface and the policy has a priority class, then the vital packet will take the queue specified in the class it matches.
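To illustrate the last case, a sketch of an egress policy with a priority class (hypothetical names): since BFD control packets are sent with precedence 6, a class matching precedence 6 would carry them in its queue.

```
! Egress policy with a priority class; BFD/vital packets matching
! CM-NC land in the priority queue of this policy
class-map match-any CM-NC
 match precedence 6 7
!
policy-map PM-EGRESS
 class CM-NC
  priority
 !
 class class-default
 !
 end-policy-map
!
interface TenGigE0/0/0/0
 service-policy output PM-EGRESS
```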
BFD on bundles:
On CRS, BFD over Bundle (BoB) and BFD over Logical Bundle (BLB) are supported. BLB was introduced in IOS-XR release 4.2.3.
While BoB has a BFD session on each bundle member, BLB treats a bundle interface with all its members as one pipe. BLB is a multipath (MP) single-hop session.
If BLB is running on a bundle, there is only one BFD session running. This implies that only one bundle member is being monitored by BFD at any given time. This creates the following limitations:
A failure of bundle members, which BFD is not running on is not detected.
A failure of a bundle member, which BFD is running on will cause BFD to declare a session failure on the bundle, even if there are sufficient numbers of other bundle members available and functional.
However, BoB does not provide a true L3 check and is not supported on subinterfaces.
But it is possible to run BoB and BLB in parallel on the same bundle interface. This provides the faster bundle convergence from BoB and the true L3 check from BLB.
A configuration command is available which allows selecting the coexistence mode used for BoB and BLB:
bfd bundle coexistence bob-blb [inherited|logical]
The command has to be configured in global configuration mode.
When the "inherited" coexistence mode is configured, BLB will always create a virtual session and never a BFD session with real packets.
When the option "logical" is used, BLB will always create a real session even when BoB is on. There is one exception: if the main bundle interface has an IPv4 address, the session is inherited when BoB is on.
Please note a MP session requires the following configuration under bfd:
multipath include location R/S/CPU0
Please note: BFD multipath packets are not sent to the explicit HP queue and the vital bit is not set. In the case of a multipath session it is recommended to configure a corresponding policy-map to avoid drops of BFD packets in the case of congestion.
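A sketch of such a protection policy (hypothetical names and location), assuming the multipath BFD control packets keep IP precedence 6 and can therefore be matched on precedence:

```
! Enable multipath BFD on the LC hosting the sessions
bfd
 multipath include location 0/1/CPU0
!
! Protect BFD control traffic with a priority class
class-map match-any CM-BFD
 match precedence 6
!
policy-map PM-PROTECT-BFD
 class CM-BFD
  priority
 !
 class class-default
 !
 end-policy-map
!
interface Bundle-Ether10
 service-policy output PM-PROTECT-BFD
```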
On CRS, ROMMON is the piece of code which validates and loads the MBI file, or from which you can perform a Turboboot installation.
When the system or a module boots, ROMMON validates the MBI image version against the active RP. If the local MBI has the same version as the one on the active RP, it proceeds to boot the local MBI; otherwise it downloads the MBI via TFTP from the active RP.
ROMMON resides in SPI flash on each module with a CPU. CRS modules with a CPU, like MSC-B, FP-40, RPs, PRPs etc., maintain two copies of ROMMON, called ROMMON A and ROMMON B. ROMMON versioning is defined as MAJOR_VERSION.MINOR_VERSION.
When a module boots, ROMMON A always runs first and then checks the compatibility with ROMMON B. If that check passes, ROMMON A hands execution over to ROMMON B, which is used from that point on to boot IOS-XR. Any module on CRS can have different versions of ROMMON A and B. But note: if the major version numbers are different, ROMMON A and ROMMON B are not considered compatible. In this case the module will run from ROMMON A. This may lead to problems, especially if the deployed IOS-XR version requires a minimal supported version.
One example: if the IOS-XR version is ≥ 4.0 (px), a ROMMON version ≥ 2.01 is mandatory, i.e. the major version of ROMMON A must not be < 2. This affects both the PIE upgrade and Turboboot.
Among other things, ROMMON contains the boot variable, which determines the image to boot. Options are available allowing to boot an alternative admin and SDR config.
If an alternative sdr config is saved on disk0 device it can be booted like the following example from the rommon prompt: boot bootflash:/disk0/hfr-os-mbi-4.3.2/mbihfr-rp.vm -a disk0:/alternative-config
Images larger than 360MB (≥ IOS-XR 5.1.1) require ROMMON version ≥ 2.08 if the image is installed via Turboboot.
Boot options to load an alternate config are ignored on CRS-PRP before version 2.06.
ROMMON versions with major version < 2 have a TFTP limitation of 256MB.
ROMMON can be upgraded with the admin CLI command upgrade hw-module fpd rommon location <all|node>. In this case the module is upgraded to the bundled version of the running image. To activate the new ROMMON version, a reload of the module or system is required.
ROMMON can also be upgraded with an external file to a version different from the one bundled with the IOS-XR version. As a general rule it is not required to upgrade ROMMON with the external files, but some cases exist. One example is the IOS-XR upgrade from releases before 4.0 to releases ≥ 4.0 (px).
In this case the following procedure is applicable:
1) Download the tar file (rommon x.y .tar) from CCO, untar it on a PC and copy the binaries to disk0:, or copy the tar file to disk0: and untar it with the corresponding ksh command.
# cd /disk0:
# tar -xvf rom2.07.tar
Tar: blocksize = 20
x rommon-hfr-ppc7450-sc-dsmp-A.bin, 393232 bytes, 769 tape blocks
x rommon-hfr-ppc7450-sc-dsmp-B.bin, 393232 bytes, 769 tape blocks
x rommon-hfr-ppc7455-asmp-A.bin, 938000 bytes, 1833 tape blocks
x rommon-hfr-ppc7455-asmp-B.bin, 938000 bytes, 1833 tape blocks
x rommon-hfr-ppc8255-sp-A.bin, 271744 bytes, 531 tape blocks
x rommon-hfr-ppc8255-sp-B.bin, 271324 bytes, 530 tape blocks
x rommon-hfr-ppc8347-sp-A.bin, 185344 bytes, 362 tape blocks
x rommon-hfr-ppc8347-sp-B.bin, 184448 bytes, 361 tape blocks
x rommon-hfr-x86e-kensho.bin, 4194304 bytes, 8192 tape blocks
x rommon-hfr-x86e-prp.bin, 4194304 bytes, 8192 tape blocks
x 20110407_release_notes.txt, 195 bytes, 1 tape blocks
2) Upgrade rommon B like the following example from admin mode:
RP/0/RP0/CPU0:CRS-C#admin
Thu Jan 21 14:47:53.694 PST
RP/0/RP0/CPU0:CRS-1(admin)#upgrade rommon b all disk0
Thu Jan 21 14:49:16.608 PST
Please do not power cycle, reload the router or reset any nodes until all upgrades are completed. Please check the syslog to make sure that all nodes are upgraded successfully. If you need to perform multiple upgrades, please wait for current upgrade to be completed before proceeding to another upgrade. Failure to do so may render the cards under upgrade to be unusable.
RP/0/RP0/CPU0:Oct 13 14:00:06.596 : upgrade_daemon: Running rommon upgrade
RP/0/RP1/CPU0:Oct 13 14:00:06.600 : upgrade_daemon: Running rommon upgrade
SP/0/SM3/SP:Oct 13 14:00:06.657 : upgrade_daemon: Running rommon upgrade
3) Verify the success:
RP/0/RP0/CPU0:CRS-C(admin)#show logging | inc is programmed successfully
RP/0/RP0/CPU0:Oct 13 14:00:13.566 : rommon_burner: %PLATFORM-ROMMON_BURNER-5-progress : ROMMON B is programmed successfully.
RP/0/RP0/CPU0:Oct 13 14:00:13.523 : syslog_dev: upgrade_daemon: OK, ROMMON B is programmed successfully.
RP/0/RP0/CPU0:Oct 13 14:00:13.580 : syslog_dev: upgrade_daemon: OK, ROMMON B is programmed successfully.
4) Now you can upgrade rommon A with the following command from admin mode:
RP/0/RP0/CPU0:CRS-1(admin)#upgrade rommon a all disk0
5) Verify the success as in step 3
Since IOS-XR 3.8.4, FPD auto-upgrade is supported. If it is configured, ROMMON B is upgraded automatically when a software upgrade takes place.
Note: FPD auto-upgrade only works if the FPD PIE is installed.
To enable FPD auto-upgrade please configure and commit the following command in admin configuration mode:
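Assuming the standard admin configuration command on classic IOS-XR (please verify against the release documentation for your version), a sketch of the steps:

```
RP/0/RP0/CPU0:CRS-1(admin)#configure
RP/0/RP0/CPU0:CRS-1(admin-config)#fpd auto-upgrade
RP/0/RP0/CPU0:CRS-1(admin-config)#commit
```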
Because each module runs from ROMMON B, with ROMMON A serving as backup in case ROMMON B is corrupted, it is usually not required to upgrade ROMMON A if the major version numbers are the same.
Please note: from release 6.3.1 onwards we may have different bundled ROMMON versions on the different types of modules, like LCs or PRPs. In release 6.3.1 the bundled ROMMON version for the PRP is 2.12, while the version for all other modules remains 2.11.
Hi Diego, to run 4.3.0 on CRS you need to upgrade ROMMON A and B to a version ≥ 2.01 beforehand. Please note we officially support a PIE upgrade of N+2, i.e. a direct upgrade from 3.9.2 to 4.3.0 is not officially supported. It may work, though. The alternatives you have are either Turboboot or an interim step, e.g. 3.9.2 to 4.1.2, then 4.1.2 to 4.3.0. But nevertheless you have to upgrade ROMMON first. Regards Frank
In IOS-XR, when igp sync delay is configured under mpls ldp, it has no effect if LDP Graceful Restart is configured as well. If LDP GR is configured and the LDP session flaps, the LDP-IGP sync status remains "achieved" because MPLS forwarding is still functioning during the recovery time. Because the sync status is always "achieved", LDP doesn't delay the announcement.
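A minimal sketch of the two features together (hypothetical interface and timer values): with graceful-restart enabled, the configured sync delay is effectively bypassed after an LDP session flap because the sync status stays "achieved".

```
mpls ldp
 graceful-restart
 igp sync delay 30
 interface GigabitEthernet0/0/0/0
 !
!
router ospf 1
 area 0
  interface GigabitEthernet0/0/0/0
   ! IGP waits for LDP sync before advertising normal metrics
   mpls ldp sync
  !
 !
!
```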
The detailed output of "show bgp" may contain flags like import-candidate, imported and import suspect. The following document aims to explain these flags and when they are used.

Example:

RP/0/RSP0/CPU0:Router#sh bgp vrf foo 10.0.1.16
Mon Jun 28 15:02:55.072 JST
BGP routing table entry for 10.0.1.16/32, Route Distinguisher: 9592:1
Versions:
  Process           bRIB/RIB  SendTblVer
  Speaker               3161        3161
    Local Label: 16031
Last Modified: Jun 28 15:01:52.249 for 00:01:02
Paths: (4 available, best #3)
  Advertised to PE update-groups (with more than one peer):
    0.2
  Path #1: Received by speaker 0
  65182
    10.0.1.7 (metric 3) from 10.0.1.9 (10.0.1.7)
      Received Label 16026
      Origin IGP, metric 0, localpref 100, valid, internal, import-candidate, imported, import suspect
      Extended community: SoO:9592:13 RT:9592:1
      Originator: 10.0.1.7, Cluster list: 0.0.0.1
  Path #2: Received by speaker 0
  65182
    10.0.1.7 (metric 3) from 10.0.1.10 (10.0.1.7)
      Received Label 16026
      Origin IGP, metric 0, localpref 100, valid, internal, import-candidate, imported, import suspect
      Extended community: SoO:9592:13 RT:9592:1
      Originator: 10.0.1.7, Cluster list: 0.0.0.1
  Path #3: Received by speaker 0
  65182
    172.16.0.14 from 172.16.0.14 (10.0.1.14)
      Origin IGP, metric 0, localpref 100, valid, external, best, import-candidate, import suspect
      Extended community: SoO:9592:14 RT:9592:1 SoO:9592:14
  Path #4: Received by speaker 0
  65182, (received-only)
    172.16.0.14 from 172.16.0.14 (10.0.1.14)
      Origin IGP, metric 0, localpref 100, valid, external, import suspect

Import-Candidate

IOS-XR has the concept of an "import-candidate" flag on each VPN/VRF path. If it is set, the path can be imported to other VRFs. In BGP, during the bestpath selection process, the best path is marked as "import-candidate". Other paths may be marked as "import-candidate" if they satisfy the following conditions:

(1) If the path is a VPN path and has the same local and remote RD, the path is always an "import-candidate"; (2) is not considered.
(2) Apart from (1), paths satisfying all of the conditions below will be "import-candidate":

- The path's net should have a bestpath
- Selective multipath configuration should not deny this path
- The path should not be an accept-own self-originated path
- Either AS-path multipath relax is configured AND the number of AS hops for the path equals that of the bestpath, or AS-path multipath relax is not configured AND the AS-path sequence for the path equals that of the bestpath
- The path's nexthop is not the same as the bestpath's nexthop

If all these checks pass, the path is marked as "import-candidate".

Global routing table

From IOS-XR version 4.3.1 onwards we support importing default-VRF (global) IPv4/IPv6 unicast routes into VRFs. Therefore, the flag is also applied to BGP paths in the default (global) VRF.

The flag is used in the VPN topology only when import is triggered. When a change in the table (version) takes place, the import thread does versioned work: it touches each (source) prefix that has changed, checks if anything else has changed, prepares an RT-set that combines the RTs of all the "import-candidate" paths, and visits all VRFs whose import RTs match this RT-set. For each such (destination) table, it checks its max-paths config. If, for instance, a "max-path ibgp 2" config is present for a destination table, then both paths for RD1:p/m would be imported to that table; otherwise only the bestpath would be imported. When the primary RR session goes down, the path is deleted immediately and the second path is promoted to be the bestpath. Import is then triggered to replace the currently imported path with the new path in each destination table. This does not lead to any traffic loss.

Import Suspect

This flag is used for import dampening. If a path for a given net/prefix flaps for the first time, the path is marked as "import suspect". If a second flap happens in the "import suspect" state, the net is skipped from import processing for 10 seconds.
This is to reduce import churn. After the timeout, or if no further flaps are observed, the suspect states are cleared. A "flap" happens when a path changes from being imported to being not imported, even if the attributes are not changed.
The output of "admin show controllers switch <0-1> statistics location <node>" on a CRS-PRP displays that port 6 is connected to "Tunn MCU".

Example:

RP/0/RP0/CPU0:CRS-3#admin show controllers switch 1 statistics location 0/RP0/CPU0
Fri Aug 23 13:19:04.938 MEST
Port  Tx Frames   Tx Errors  Rx Frames   Rx Errors  Connects
-------------------------------------------------------------
0  :  0           0          0           0          RP0
1  :  923074857   0          1031857878  0          RP1
2  :  422682332   0          105850585   0          F0
3  :  422736850   0          105905135   0          F1
4  :  422686740   0          105854663   0          F2
5  :  422994927   0          106162993   0          F3
6  :  334602025   0          245904      0          Tunn MCU
7  :  0           0          0           0          Unused
[SNIP]

The MCU is a chip which provides the console tunneling feature. It has a connection to the BCM switch and two console-like 9.6K ports to the CPU. The MCU captures console characters from the CPU complex and converts them into Ethernet frames which are sent out on the control network.
6PE Overview

The Cisco 6PE solution enables IPv6 domains to communicate with each other over an MPLS IPv4 core network. MP-BGP in the IPv4 network is used to exchange IPv6 reachability information along with a label for each IPv6 prefix announced. 6PE routers are dual-stack routers, i.e. running IPv6 with the customers and IPv4 in the core.

6PE routers do the following:

- Participate in the V4 IGP to establish internal reachability inside the MPLS cloud
- Participate in LDP for binding V4 labels
- Run MP-iBGP (Multi-Protocol iBGP) to advertise V6 reachability and distribute V6 labels among them. The labels can be distributed as follows: per-prefix label - the 6PE node distributes labels for each IPv6 prefix learnt from interfaces connected to CE routers
- Run IPv6 routing protocols (eBGP6, Static, ISIS v6, OSPFv3, EIGRP v6, iBGP) with CE routers to advertise V6 reachability learnt from their peers over the MPLS cloud
- The IGP protocol between CE and PE has to be redistributed into MP-iBGP for end-to-end reachability

6PE with iBGP between CE and PE

6PE with iBGP between PE and CE is supported since IOS-XR 4.0. Unlike IOS, an important mandatory configuration command is required in IOS-XR: "ibgp policy out enforce-modifications" has to be configured under router bgp <AS> to allow changing attributes on the RR for reflected routes. The following serves as an example.

Configuration of PE-1 (RR):

router bgp 1
 bgp router-id 192.168.0.2
 ibgp policy out enforce-modifications
 address-family ipv4 unicast
 !
 address-family ipv6 unicast
  allocate-label all
 !
 neighbor 2001:12::2
  remote-as 1
  description iBGP peer to CE-A
  address-family ipv6 unicast
   route-policy pass-all in
   route-reflector-client
   route-policy pass-all out
  !
 !
 neighbor 192.168.0.1
  remote-as 1
  description iBGP peer to P2
  update-source Loopback0
  address-family ipv6 labeled-unicast
   route-policy pass-all in
   route-policy pass-all out
   next-hop-self
  !
 !
!
Example prefix received from the RR client CE-A:

RP/0/0/CPU0:PE-1#show bgp ipv6 labeled-unicast 65:26::1:0/112
Thu Oct 17 17:00:29.950 UTC
BGP routing table entry for 65:26::1:0/112
Versions:
  Process           bRIB/RIB  SendTblVer
  Speaker                  3           3
    Local Label: 16005
Last Modified: Oct 17 15:35:29.659 for 01:25:00
Paths: (1 available, best #1)
  Advertised to peers (in unique update groups):
    192.168.0.1
  Path #1: Received by speaker 0
  Advertised to peers (in unique update groups):
    192.168.0.1
  Local, (Received from a RR-client)
    2001:12::2 from 2001:12::2 (192.168.0.4)
      Origin IGP, metric 0, localpref 100, valid, internal, best, group-best, import-candidate
      Received Path ID 0, Local Path ID 1, version 3

RP/0/0/CPU0:PE-2#sh bgp ipv6 labeled-unicast 65:26::1:0/112
Thu Oct 17 16:57:18.393 UTC
BGP routing table entry for 65:26::1:0/112
Versions:
  Process           bRIB/RIB  SendTblVer
  Speaker                  3           3
Last Modified: Oct 17 16:36:20.733 for 00:20:57
Paths: (1 available, best #1)
  Advertised to peers (in unique update groups):
    2001:10::2
  Path #1: Received by speaker 0
  Advertised to peers (in unique update groups):
    2001:10::2
  Local
    192.168.0.2 (metric 1) from 192.168.0.2 (192.168.0.4)
      Received Label 16005
      Origin IGP, metric 0, localpref 100, valid, internal, best, group-best, import-candidate
      Received Path ID 0, Local Path ID 1, version 3
      Originator: 192.168.0.4, Cluster list: 192.168.0.2

RP/0/0/CPU0:CE-B#sh bgp ipv6 unicast 65:26::1:0/112
Thu Oct 17 17:28:51.253 UTC
BGP routing table entry for 65:26::1:0/112
Versions:
  Process           bRIB/RIB  SendTblVer
  Speaker                  3           3
Last Modified: Oct 17 17:28:45.753 for 00:00:05
Paths: (1 available, best #1)
  Not advertised to any peer
  Path #1: Received by speaker 0
  Not advertised to any peer
  Local
    192.168.0.2 from 2001:10::1 (192.168.0.4)
      Origin IGP, metric 0, localpref 100, valid, internal, best, group-best, import-candidate
      Received Path ID 0, Local Path ID 1, version 3
      Originator: 192.168.0.4, Cluster list: 192.168.0.1, 192.168.0.2

RP/0/0/CPU0:RR-client-B#show route ipv6
Thu Oct 17 17:29:15.082 UTC

Codes: C - connected, S - static, R - RIP, B - BGP, (>) - Diversion path
       D - EIGRP, EX - EIGRP external, O - OSPF, IA - OSPF inter area
       N1 - OSPF NSSA external type 1, N2 - OSPF NSSA external type 2
       E1 - OSPF external type 1, E2 - OSPF external type 2, E - EGP
       i - ISIS, L1 - IS-IS level-1, L2 - IS-IS level-2
       ia - IS-IS inter area, su - IS-IS summary null, * - candidate default
       U - per-user static route, o - ODR, L - local, G - DAGR
       A - access/subscriber, a - Application route, (!) - FRR Backup path

Gateway of last resort is not set

B    65:26::1:0/112
      [200/0] via ::ffff:192.168.0.2 (nexthop in vrf default), 00:00:28
C    2001:10::/64 is directly connected,
      00:36:38, GigabitEthernet0/0/0/0
L    2001:10::2/128 is directly connected,
      00:36:38, GigabitEthernet0/0/0/0
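Note the next hop ::ffff:192.168.0.2 in the last output: this is the IPv4-mapped IPv6 representation of PE-1's IPv4 loopback, which is how 6PE resolves an IPv6 route over the IPv4/MPLS core. To verify the label imposition and the path through the core end to end, commands like the following can be used on PE-2 (a sketch of the usual verification steps; the outputs are omitted here since they depend on the actual lab state):

```
! Check the label stack programmed for the 6PE prefix (BGP label + LDP transport label)
show cef ipv6 65:26::1:0/112 detail
! Check the MPLS forwarding entry towards PE-1's loopback
show mpls forwarding prefix 192.168.0.2/32
! End-to-end check from the CE side
ping 65:26::1:1
```

If the BGP label (16005 in the example above) does not show up in the CEF entry, the "address-family ipv6 labeled-unicast" configuration on the core iBGP session should be rechecked.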