05-24-2019 02:30 AM - edited 05-26-2019 05:26 PM
Hi experts,
My issue originated from a Cisco Prime alarm: "... Vlan18: Value of Input Utilization = 78% ..." and I believe the cause is the VLAN interface's "rxload 228/255":
6807v#show vlan id 18

VLAN Name                             Status    Ports
---- -------------------------------- --------- -------------------------------
18   Redacted                         active    Te1/1/1, Te2/1/1

6807v#show interfaces status | include Te./1/1
Te1/1/1   Internet traffic t   connected    18     full     10G 10Gbase-SR
Te2/1/1   Internet traffic t   disabled     18     full   10000 No Connector

6807v#show interfaces te1/1/1
TenGigabitEthernet1/1/1 is up, line protocol is up (connected)
  Hardware is C6k 10000Mb 802.3, address is 00a2.WWWW.YYYY (bia 00a2.WWWW.YYYY)
  MTU 1500 bytes, BW 10000000 Kbit/sec, DLY 10 usec,
     reliability 255/255, txload 3/255, rxload 22/255
  Encapsulation ARPA, loopback not set
  Keepalive set (10 sec)
  Full-duplex, 10Gb/s, media type is 10Gbase-SR
  input flow-control is on, output flow-control is off
  Clock mode is auto
  ARP type: ARPA, ARP Timeout 04:00:00
  Last input never, output never, output hang never
  Last clearing of "show interface" counters never
  Input queue: 0/2000/1/0 (size/max/drops/flushes); Total output drops: 0
  Queueing strategy: fifo
  Output queue: 0/40 (size/max)
  5 minute input rate 901192000 bits/sec, 91689 packets/sec
  5 minute output rate 135272000 bits/sec, 51214 packets/sec
     408294682540 packets input, 450317724195160 bytes, 0 no buffer
     Received 3085208 broadcasts (3083645 multicasts)
     0 runts, 0 giants, 0 throttles
     0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored
     0 watchdog, 0 multicast, 0 pause input
     0 input packets with dribble condition detected
     266847492390 packets output, 126920162139661 bytes, 0 underruns
     0 output errors, 0 collisions, 3 interface resets
     0 unknown protocol drops
     0 babbles, 0 late collision, 0 deferred
     0 lost carrier, 0 no carrier, 0 pause output
     0 output buffer failures, 0 output buffers swapped out

6807v#show interfaces vlan 18
Vlan18 is up, line protocol is up
  Hardware is EtherSVI, address is 58ac.XXXX.ZZZZ (bia 58ac.XXXX.ZZZZ)
  Internet address is 10.10.10.17/29
  MTU 1500 bytes, BW 1000000 Kbit/sec, DLY 10 usec,
     reliability 255/255, txload 33/255, rxload 228/255
  Encapsulation ARPA, loopback not set
  Keepalive not supported
  ARP type: ARPA, ARP Timeout 04:00:00
  Last input 00:00:01, output never, output hang never
  Last clearing of "show interface" counters never
  Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 0
  Queueing strategy: fifo
  Output queue: 0/40 (size/max)
  5 minute input rate 897149000 bits/sec, 91614 packets/sec
  5 minute output rate 132671000 bits/sec, 50802 packets/sec
  L2 Switched: ucast: 292321 pkt, 113431929 bytes - mcast: 0 pkt, 0 bytes
  L3 in Switched: ucast: 408289180926 pkt, 450318124595958 bytes - mcast: 0 pkt, 0 bytes
  L3 out Switched: ucast: 266835192255 pkt, 126919239555193 bytes - mcast: 0 pkt, 0 bytes
     408294223867 packets input, 448684444897526 bytes, 0 no buffer
     Received 0 broadcasts (0 IP multicasts)
     0 runts, 0 giants, 0 throttles
     0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored
     266838891321 packets output, 125851562798305 bytes, 0 underruns
     0 output errors, 0 interface resets
     0 unknown protocol drops
     0 output buffer failures, 0 output buffers swapped out

6807v#show ip eigrp topology 0.0.0.0
EIGRP-IPv4 Topology Entry for AS(1)/ID(10.10.254.250) for 0.0.0.0/0
  State is Passive, Query origin flag is 1, 1 Successor(s), FD is 5376
  Descriptor Blocks:
  10.10.10.18 (Vlan18), from 10.10.10.18, Send flag is 0x0
      Composite metric is (5376/5120), route is External
      Vector metric:
        Minimum bandwidth is 1000000 Kbit
        Total delay is 110 microseconds
        Reliability is 255/255
        Load is 25/255
        Minimum MTU is 1500
        Hop count is 1
      External data:
        Originating router is 10.10.254.252
        AS number of route is QQQQ
        External protocol is BGP, external metric is 0
        Administrator tag is 100 (0x00000064)
My understanding is that txload/rxload are measured not against the actual interface speed but against the configured BW.
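The numbers in the captures above are consistent with that: the n/255 load appears to be the 5-minute rate scaled by the configured BW. A minimal sketch (my own reverse-engineered formula from these captures, not an official one):

```python
# Sketch: deriving txload/rxload as 'n/255' from the 5-minute rate and the
# *configured* BW (assumption inferred from the show-interface output above).

def load_fraction(rate_bps: int, bw_kbit: int) -> int:
    """Return the n in 'n/255' for a given 5-minute rate and interface BW."""
    return rate_bps * 255 // (bw_kbit * 1000)

# Vlan18: BW 1,000,000 Kbit/sec (1 Gbit/s)
print(load_fraction(897_149_000, 1_000_000))    # rxload -> 228
print(load_fraction(132_671_000, 1_000_000))    # txload -> 33

# Te1/1/1: BW 10,000,000 Kbit/sec (10 Gbit/s)
print(load_fraction(901_192_000, 10_000_000))   # rxload -> 22
print(load_fraction(135_272_000, 10_000_000))   # txload -> 3
```

All four values match what the switch reports, which supports the idea that the load fields are relative to BW, not to the physical line rate.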
To solve this, is it simply a matter of increasing the VLAN interface's BW?
R's, Alex
Solved! Go to Solution.
05-27-2019 12:15 AM
Hello Alex,
The EIGRP implementation of the load component has never been complete.
As reported in the forums by Cisco experts (Russ White and, later, Peter Paluch), the load is not dynamically updated; EIGRP takes the value that was on the interface at the moment EIGRP started over it (a snapshot, i.e. a static value).
The reason for not dynamically updating the load component is simple: if the load changed and were part of the composite metric (with the appropriate K values set), it could lead to an unstable network, as traffic would be moved from one path to another and back because of the changing load on each path.
All the EIGRP descriptions that mention the capability to route using load as a parameter have to be considered just marketing.
The same applies to reliability.
These components should never be enabled in a production network and are technically considered a relic inherited from the old IGRP protocol (from Peter Paluch's book CCIE Routing & Switching, Volume I).
So I am surprised by the change in the EIGRP load metric you have seen; it could happen only if the SVI had flapped up -> down -> up, and I do not think that can happen just from applying the bandwidth command.
In theory, however, that is the only case in which the load can be updated.
Hope to help
Giuseppe
05-24-2019 03:07 AM
Hello Alex,
Using the bandwidth command under an SVI is allowed; the speed command should be unsupported, as an SVI is not a physical interface.
I am not sure exactly what txload/rxload refer to, but for a logical SVI interface, again, the configured bandwidth should be the only parameter used for comparison.
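For reference, this is how a bandwidth value would be applied under the SVI (the value shown is illustrative only; IOS takes it in Kbit/sec, so 10000000 would match a 10G uplink):

```
6807v(config)#interface Vlan18
6807v(config-if)#bandwidth 10000000
```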
Hope to help
Giuseppe
05-26-2019 05:39 PM - edited 05-26-2019 05:40 PM
Hi Giuseppe,
I increased the VLAN interface's configured bandwidth, and this reduced the txload.
However, it also increased the EIGRP route's 'load' metric, from 25/255 to 45/255:
6807v#show ip eigrp topology 0.0.0.0
EIGRP-IPv4 Topology Entry for AS(1)/ID(10.10.254.250) for 0.0.0.0/0
  State is Passive, Query origin flag is 1, 1 Successor(s), FD is 5376
  Descriptor Blocks:
  10.10.10.18 (Vlan18), from 10.10.10.18, Send flag is 0x0
      Composite metric is (5376/5120), route is External
      Vector metric:
        Minimum bandwidth is 1000000 Kbit
        Total delay is 110 microseconds
        Reliability is 255/255
        Load is 45/255
        Minimum MTU is 1500
        Hop count is 1
      External data:
        Originating router is 10.10.254.252
        AS number of route is QQQQ
        External protocol is BGP, external metric is 0
        Administrator tag is 100 (0x00000064)

6807v#show ip protocols
:
Routing Protocol is "eigrp 1"
:
  EIGRP-IPv4 Protocol for AS(1)
    Metric weight K1=1, K2=0, K3=1, K4=0, K5=0
:
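As a sanity check, the composite metric 5376 in the output above is reproducible from the vector metric using the classic EIGRP formula with the default K values (K1=K3=1, all others 0) — a minimal sketch:

```python
# Sketch of the classic EIGRP composite metric with default K values
# (K1=1, K2=0, K3=1, K4=0, K5=0): metric = 256 * (10^7/min_bw + delay/10),
# with bandwidth in Kbit/sec and delay in microseconds.

def eigrp_metric(min_bw_kbit: int, total_delay_usec: int) -> int:
    bw_term = 10**7 // min_bw_kbit        # scaled inverse of minimum bandwidth
    delay_term = total_delay_usec // 10   # delay in tens of microseconds
    return 256 * (bw_term + delay_term)

print(eigrp_metric(1_000_000, 110))  # -> 5376, matching the topology entry
```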
As you can see above, K2=0, so this won't affect the EIGRP composite metric; still, it's unexpected.
What's the reason for the observed increase?
R's, Alex
05-27-2019 12:55 AM
Hi Giuseppe,
> it could happen only if the SVI interface had flapped up -> down -> up. But I do not think this can happen just with the command bandwidth applied to it.
I checked the logging buffer - no sign of the interface flapping.
Thanks for your help.
R's, Alex