05-19-2016 08:58 AM - edited 03-08-2019 05:50 AM
Hi All,
I am facing a packet loss issue of around 40-60% for a few public IPs.
I have a Force10 core router for the eBGP peering, and below it sits my Cisco Catalyst C6509-NEB-A.
A TenGigabitEthernet link connects the two, with two subinterfaces. I will paste the config below.
Traceroute shows some strange packet drops between the Catalyst and the Force10.
Both the intermittent IPs and the working IPs are learned over the same route from the E-300 via iBGP.
Following are the relevant running configs and learned routes collected from the Catalyst:
TenGigabitEthernet2/2 unassigned YES NVRAM up up
TenGigabitEthernet2/2.2007 2tow_E3008 YES NVRAM up up
TenGigabitEthernet2/2.2012 unassigned YES unset up up
TenGigabitEthernet2/2.2024 10.16.0.113 YES NVRAM up up
TenGigabitEthernet2/2 is up, line protocol is up (connected)
Hardware is C6k 10000Mb 802.3, address is xxxxxxxxxxxxxxxxxxx
Description: cr1002.snv1:xg0/0, 10GE
MTU 1500 bytes, BW 10000000 Kbit, DLY 10 usec,
reliability 255/255, txload 1/255, rxload 1/255
Encapsulation 802.1Q Virtual LAN, Vlan ID 1., loopback not set
Keepalive set (10 sec)
Full-duplex, 10Gb/s
input flow-control is on, output flow-control is on
ARP type: ARPA, ARP Timeout 04:00:00
Last input 00:00:00, output 00:00:00, output hang never
Last clearing of "show interface" counters 05:57:35
Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 0
Queueing strategy: fifo
Output queue: 0/40 (size/max)
5 minute input rate 20859000 bits/sec, 6788 packets/sec
5 minute output rate 56522000 bits/sec, 8922 packets/sec
L2 Switched: ucast: 0 pkt, 0 bytes - mcast: 0 pkt, 0 bytes
L3 in Switched: ucast: 0 pkt, 0 bytes - mcast: 0 pkt, 0 bytes mcast
L3 out Switched: ucast: 0 pkt, 0 bytes mcast: 0 pkt, 0 bytes
138879419 packets input, 56747867539 bytes, 0 no buffer
Received 10319 broadcasts (0 IP multicasts)
0 runts, 0 giants, 0 throttles
0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored
0 watchdog, 0 multicast, 0 pause input
0 input packets with dribble condition detected
187724775 packets output, 145812194266 bytes, 0 underruns
0 output errors, 0 collisions, 0 interface resets
0 babbles, 0 late collision, 0 deferred
0 lost carrier, 0 no carrier, 0 PAUSE output
0 output buffer failures, 0 output buffers swapped out
TenGigabitEthernet2/2.2007 is up, line protocol is up (connected)
Hardware is C6k 10000Mb 802.3, address is xxxxxxxxxxxxxxxxxxxxxx
Description: cr1002.snv1:xg0/0.2007, dot1q subif, 2tow_E3008/30
Internet address is 2tow_E3008/30
MTU 1500 bytes, BW 10000000 Kbit, DLY 10 usec,
reliability 255/255, txload 1/255, rxload 1/255
Encapsulation 802.1Q Virtual LAN, Vlan ID 2007.
ARP type: ARPA, ARP Timeout 04:00:00
ag1001.snv1#
ag1001.snv1#sh ip route 207.86.215.17
Routing entry for 207.86.215.0/24
Known via "bgp myAS", distance 200, metric 0
Tag ISPAS, type internal
Last update from 6eBGPpeer49 04:27:39 ago
Routing Descriptor Blocks:
* 6eBGPpeer49, from 2.LoopE300.4, 04:27:39 ago
Route metric is 0, traffic share count is 1
AS Hops 2
Route tag ISPAS
ag1001.snv1#sh ip rou 198.173.3.56
Routing entry for 198.172.0.0/15, supernet
Known via "bgp myAS", distance 200, metric 0
Tag ISPAS, type internal
Last update from 6eBGPpeer49 04:27:59 ago
Routing Descriptor Blocks:
* 6eBGPpeer49, from 2.LoopE300.4, 04:27:59 ago
Route metric is 0, traffic share count is 1
AS Hops 2
Route tag ISPAS
where 198.173.3.56 is a pingable IP and 207.86.215.17 is the troubled one.
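In case it helps, this is a rough sketch of how I plan to compare software vs. hardware forwarding for the two prefixes on the Catalyst (assuming a SUP720-class supervisor, so the "mls cef" commands apply):
! troubled prefix
show ip cef 207.86.215.17 detail
show mls cef ip 207.86.215.17
! working prefix
show ip cef 198.173.3.56 detail
show mls cef ip 198.173.3.56
If the troubled prefix has a missing or different hardware (mls cef) entry compared to the working one, that would point at the Catalyst rather than the Force10.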
Please let me know what else is required.
05-19-2016 10:18 AM
Hi,
Not familiar with Force10, but the Cisco 6500 appears to be running layer 2 only, with a trunk connecting to the Force10. Can you post traceroutes to the same IPs from the Force10 router?
What is the output of:
sh run int te2/2.2007, 2012 and 2024?
HTH
05-19-2016 01:20 PM
Hi Reza,
Trace route is clear and pings are all success from force10.
Below is the required output:
te2/2.2012 can be removed, but I don't think it could be the issue since it has nothing configured, right?
interface TenGigabitEthernet2/2.2007
description cr1002.snv1:xg0/0.2007, dot1q subif, 208.xx.xx58/30
encapsulation dot1Q 2007
ip address 208.xx.xx.58 255.255.255.252
no ip proxy-arp
ip ospf message-digest-key 1 md5 7 xxxxxxxxxxxxxxx
ip ospf network point-to-point
ip ospf cost 5
end
ag1001.snv1#sh run int te2/2.2012
Building configuration...
Current configuration : 44 bytes
!
interface TenGigabitEthernet2/2.2012
end
ag1001.snv1#sh run int te2/2.2024
Building configuration...
Current configuration : 321 bytes
!
interface TenGigabitEthernet2/2.2024
description cr1002.snv1:xg0/0.2024, dot1q subif, 10.xx.xx.113/28
encapsulation dot1Q 2024
ip vrf forwarding MGMT
ip address 10.xx.xx.113 255.255.255.240
no ip proxy-arp
standby 1 ip 10.xx.xx.126
standby 1 priority 105
standby 1 preempt
standby 1 authentication xxxxxxx
end
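If it helps, here is a minimal sketch of the checks I can run on both 6500s to confirm OSPF and HSRP are healthy on these subinterfaces (assuming the secondary 6500 mirrors this config):
! OSPF adjacency and interface state toward the Force10
show ip ospf neighbor
show ip ospf interface TenGigabitEthernet2/2.2007
! HSRP state for the MGMT subinterface
show standby brief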
05-19-2016 01:20 PM
Hi,
Where is the gateway for the user (LAN) side located? Is it on the 6500?
Are you doing OSPF between the Force10 and the 6500?
Do you have two 6500s and two Force10s, or just one of each?
Can you post "sh run" from the 6500?
Also, can you do a trace route and ping to 8.8.8.8 from the 6500 and post the results?
HTH
05-20-2016 03:39 AM
Yes, we have two 6500s and two Force10s.
We are using OSPF internally.
The outputs I have shared are from the primary 6500. Also, there is no uplink on the other Force10, so it is not advertising any external routes.
The gateway for all internal users is this Catalyst.
Tracing the route to google-public-dns-a.google.com (8.8.8.8)
1 vl2007.cr1002.snv1 (20x.xx.17.57) 0 msec 0 msec 16 msec
2 6x.xx.xx49 [AS 3561] 0 msec 0 msec 0 msec
3 hr2-v3059.sc8.savvis.net (209.1.56.9) [AS 3561] 0 msec 0 msec 0 msec
4 er1-te-7-0-1.SanJoseEquinix.savvis.net (204.70.200.213) [AS 3561] 0 msec 16 msec 4 msec
5 sjp-brdr-05.inet.qwest.net (63.235.40.205) [AS 209] 0 msec 4 msec 0 msec
6 snj-edge-04.inet.qwest.net (67.14.34.86) 0 msec
snj-edge-04.inet.qwest.net (67.14.34.82) 0 msec
snj-edge-04.inet.qwest.net (67.14.34.86) 0 msec
7 65.113.96.130 [AS 209] 4 msec 4 msec 0 msec
8 66.249.95.63 [AS 15169] 4 msec
209.85.244.25 [AS 15169] 4 msec
66.249.95.63 [AS 15169] 4 msec
9 216.239.49.89 [AS 15169] 4 msec
216.239.49.97 [AS 15169] 4 msec
216.239.49.93 [AS 15169] 4 msec
10 google-public-dns-a.google.com (8.8.8.8) [AS 15169] 4 msec 0 msec 4 msec
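To quantify the loss from the 6500 itself, I can also run a repeat-count ping toward the troubled IP from my first post, sourced from the subinterface facing the Force10; a rough sketch:
ping 207.86.215.17 repeat 1000 source TenGigabitEthernet2/2.2007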
05-20-2016 07:00 AM
Hi,
I am learning the same MAC address on the Force10 for the two Catalyst switches, on different VLANs.
Could that be causing the packet drops?
Internet 20x.xx.x7 - 00:01:e8:5e:8c:35 - Vl 2007 CP
Internet 20x.xx.x8 39 00:1c:0f:5d:86:00 Te 0/0 Vl 2007 CP
Internet 20x.xx.x1 - 00:01:e8:5e:8c:35 - Vl 2008 CP
Internet 20x.xx.x2 224 00:1c:0f:5d:76:80 Te 0/1 Vl 2008 CP
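A minimal cross-check from the Catalyst side (the secondary 6500 would be checked the same way) is to compare the uplink's own MAC address and the ARP entry for the Force10 peer against the table above:
! burned-in MAC of the uplink on this 6500
show interfaces TenGigabitEthernet2/2 | include address
! what this 6500 has resolved for the Force10 peer on VLAN 2007
show ip arp TenGigabitEthernet2/2.2007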
05-24-2016 12:00 PM
One more thing:
The IPs we were unable to ping from the Catalysts are now pingable, but now we are getting packet loss for other IPs. They were down for approximately two days. And it's not even whole blocks, just some random public IPs!
06-07-2016 08:20 AM
Hi Reza,
The issue got resolved by itself; the RFO is still unknown.
Could a lack of FIB table (TCAM) capacity be the reason some IPs were intermittent?
We found the FIB capacity is 512k entries, whereas we are learning around 550k routes from the BGP peer.
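If anyone can confirm, this is roughly how I plan to verify it on the Catalyst (assuming a SUP720/PFC3 supervisor; exact capacities and defaults vary by hardware):
! current FIB TCAM partitioning and whether an overflow (exception) has occurred
show mls cef maximum-routes
show mls cef exception status
show platform hardware capacity forwarding
If the exception status shows TRUE, prefixes that did not fit in the TCAM would be software-switched or dropped, which could easily look like heavy loss on seemingly random public IPs. The fix would be either trimming the BGP feed (e.g. default plus partial routes) or re-partitioning the TCAM with "mls cef maximum-routes ip <k>" (value in thousands; I believe it takes effect only after a reload), within whatever the PFC actually supports.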