Hi, I'm currently in the process of moving L3s down into ACI for the 2nd pod in my network. Unfortunately, having done a few test BDs, I have reports of absolutely horrendous performance after the L3 has been moved into ACI. Notably, file transfers and replication basically cease immediately. That said, ICMP and connectivity still work. This is true inter-pod/inter-VRF/inter-BD, as well as inter-pod/intra-VRF/intra-BD.

I've checked the non-ACI devices for speed, interface drops and MTU, and all looks fine. I've also completed some basic pcaps on the IPNs (9Ks), but the output only goes as deep as the first IP header. I do see lots of UDP (VXLAN, I guess) as well as one TCP flow, which sees a fair amount of retransmissions and lost segments.

Any ninjas out there fancy pointing me in a direction? Btw, due to the nature of the business I am in, to remain compliant, I have no third-party applications anywhere near this network, i.e. Wireshark & PingPlotter.
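For what it's worth, the symptom pattern here (small ICMP packets fine, bulk TCP transfers stalling with retransmissions) is consistent with an MTU mismatch on the IPN path, since ACI encapsulates inter-pod traffic in VXLAN. A quick back-of-the-envelope check of the headroom, assuming the usual 50-byte VXLAN overhead and the commonly recommended 9150-byte IPN MTU (both figures are my assumptions, not from this thread):

```python
# Rough VXLAN overhead arithmetic for an ACI IPN path (assumed figures).
# Outer headers added by VXLAN: Ethernet 14 + IP 20 + UDP 8 + VXLAN 8 = 50 bytes.
VXLAN_OVERHEAD = 14 + 20 + 8 + 8  # 50 bytes, before any dot1q/FCS extras

def max_inner_mtu(ipn_mtu: int) -> int:
    """Largest endpoint-facing packet that fits through an IPN link of ipn_mtu."""
    return ipn_mtu - VXLAN_OVERHEAD

# If an IPN link is left at 1500, inner packets above 1450 bytes are dropped
# once encapsulated, while small ICMP probes still pass untouched.
print(max_inner_mtu(1500))   # 1450
print(max_inner_mtu(9150))   # 9100
```

If a device in the IPN path is at a default 1500 MTU, that would explain transfers dying while pings succeed; worth verifying end to end before digging deeper.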
Hi, just curious if anyone has found a tool/utility in ACI that can be used to check whether ports are open on remote devices, such as telnetting to a port? This was always a useful tool to me on non-ACI equipment, so it would be a shame if Cisco deprecated it. Ben
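In the meantime, any host with a stock Python install can do the old "telnet to a port" trick with nothing third-party, which may help given compliance constraints. A minimal sketch (hostname and port below are placeholders):

```python
import socket

def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # covers refused, unreachable and timed-out connections
        return False
```

Usage would be e.g. `port_open("192.0.2.10", 22)` to check SSH reachability from wherever the script runs.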
Hi Richmond, thanks for getting back. Everything in terms of ACI was correct: EPG, BD, etc. My particular setup involved the insertion of another device between the IPNs, and it was this device that was missing the multicast configuration. Managed to figure it out this morning. I suppose my confusion came from being unable to ping between hosts in different pods, yet being able to ping across pods to a host from the pervasive GW in the other pod. I neglected that this sort of communication wouldn't require multicast.
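For anyone hitting the same thing: inter-pod BUM traffic in ACI Multi-Pod rides bidirectional PIM multicast across the IPN, so any device inserted into that path has to forward it too. As a rough illustration only (the RP address and interface name are invented, and the 225.0.0.0/15 range is just the common default GIPo, which should be matched to your fabric's actual configuration), the missing piece on the intermediate device would look something like:

```
feature pim
! Phantom RP for the fabric GIPo range (placeholder address)
ip pim rp-address 10.0.0.1 group-list 225.0.0.0/15 bidir
!
interface Ethernet1/1
 ip pim sparse-mode
```

Exact syntax varies by platform, so treat this as a checklist of what must be present rather than copy-paste config.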
Hi, ACI is very new to me, so you may have to excuse my ignorance! I'm currently troubleshooting an issue where hosts in the same BD, but in different pods, aren't able to ping. I've had a small amount of exposure to VXLAN and EVPN and have encountered similar issues which turned out to be multicast-related. However, this issue is different in that I am able to ping from the anycast gateway to a host in the remote pod. Has anyone encountered similar issues when migrating to ACI?
Sorry about that. I removed the BGP config; there is a BGP-learnt default route. I also omitted the access lists that permit traffic on the VLANs, as they were just permitting the /24 subnets for the VLANs.
GigabitEthernet1/1 is up, line protocol is up (connected)
  Hardware is Gigabit Ethernet, address is
  Description: WAN interface
  Internet address is
  MTU 1500 bytes, BW 1000000 Kbit, DLY 10 usec,
     reliability 255/255, txload 1/255, rxload 1/255
  Encapsulation ARPA, loopback not set
  Keepalive not set
  Full-duplex, 1000Mb/s, link type is auto, media type is 1000BaseSX SFP
  input flow-control is off, output flow-control is unsupported
  ARP type: ARPA, ARP Timeout 04:00:00
  Last input 00:00:00, output 00:00:00, output hang never
  Last clearing of "show interface" counters 3d21h
  Input queue: 1/75/0/0 (size/max/drops/flushes); Total output drops: 2
  Queueing strategy: fifo
  Output queue: 0/40 (size/max)
  5 minute input rate 347000 bits/sec, 123 packets/sec
  5 minute output rate 390000 bits/sec, 120 packets/sec
     581276871 packets input, 403227429391 bytes, 0 no buffer
     Received 0 broadcasts (0 IP multicasts)
     0 runts, 0 giants, 0 throttles
     0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored
     0 watchdog, 0 multicast, 0 pause input
     0 input packets with dribble condition detected
     532501331 packets output, 355693860967 bytes, 0 underruns
     0 output errors, 0 collisions, 0 interface resets
     0 babbles, 0 late collision, 0 deferred
     0 lost carrier, 0 no carrier, 0 PAUSE output
     0 output buffer failures, 0 output buffers swapped out
Thanks for taking the time.
So there is QoS configured. I have turned it off and retried the tracers, but I get the same results. Here is the config:
System returned to ROM by power-on
System restarted at 07:42:05 UTC Fri Mar 23 2018
System image file is "flash:/c3560e-universalk9-mz.122-55.SE10/c3560e-universalk9-mz.122-55.SE10.bin"
License Level: ipservices
License Type: Permanent
Next reload license Level: ipservices
cisco WS-C3560X-24 (PowerPC405) processor (revision R0) with 262144K bytes of memory.
Last reset from power-on
21 Virtual Ethernet interfaces
1 FastEthernet interface
28 Gigabit Ethernet interfaces
2 Ten Gigabit Ethernet interfaces
512K bytes of flash-simulated non-volatile configuration memory.
Model number : WS-C3560X-24T-E
Switch Ports Model         SW Version    SW Image
------ ----- -----         ----------    ----------
*    1 30    WS-C3560X-24  12.2(55)SE10  C3560E-UNIVERSALK9-M
Configuration register is 0xF
!
version 12.2
no service pad
service timestamps debug datetime msec
service timestamps log datetime msec
no service password-encryption
!
boot-start-marker
boot-end-marker
!
logging buffered 8192
!
aaa new-model
!
aaa session-id common
system mtu routing 1500
ip routing
!
mls qos map cos-dscp 0 8 16 24 32 41 46 48
mls qos srr-queue input priority-queue 1 bandwidth 10
mls qos
!
spanning-tree mode pvst
spanning-tree extend system-id
!
vlan internal allocation policy ascending
!
class-map match-all EF
 match ip dscp ef
!
interface FastEthernet0
 no ip address
 no ip route-cache cef
 no ip route-cache
 no ip mroute-cache
!
interface GigabitEthernet0/1
 switchport trunk encapsulation dot1q
 switchport trunk allowed vlan 1-8,11,22,33,44,55,77,88,102-106,203
 switchport mode trunk
 mls qos trust dscp
!
interface GigabitEthernet0/2
 switchport trunk encapsulation dot1q
 switchport trunk allowed vlan 1-8,11,22,33,44,55,77,88,102-106,203
 switchport mode trunk
 mls qos trust dscp
!
interface GigabitEthernet0/3
 switchport trunk encapsulation dot1q
 switchport trunk allowed vlan 1-8,11,22,33,44,55,77,88,102-106,203
 switchport mode trunk
 mls qos trust dscp
!
interface GigabitEthernet0/4
 switchport trunk encapsulation dot1q
 switchport trunk allowed vlan 1-8,11,22,33,44,55,77,88,102-106,203
 switchport mode trunk
 mls qos trust dscp
!
interface GigabitEthernet0/5
 shutdown
!
interface GigabitEthernet0/6
 shutdown
!
interface GigabitEthernet0/7
 shutdown
!
interface GigabitEthernet0/8
 shutdown
!
interface GigabitEthernet0/9
!
interface GigabitEthernet0/10
 shutdown
!
interface GigabitEthernet0/11
 shutdown
!
interface GigabitEthernet0/12
 shutdown
!
interface GigabitEthernet0/13
!
interface GigabitEthernet0/14
 switchport trunk encapsulation dot1q
 switchport trunk allowed vlan 1-8,11,22,33,44,55,77,88,102-106,203
 switchport mode trunk
 mls qos trust dscp
!
interface GigabitEthernet0/15
 switchport trunk encapsulation dot1q
 switchport trunk allowed vlan 1-8,11,22,33,44,55,77,88,102-106,203
 switchport mode trunk
 mls qos trust dscp
!
interface GigabitEthernet0/16
 switchport trunk encapsulation dot1q
 switchport trunk allowed vlan 1-8,11,22,33,44,55,77,88,102-106,203
 switchport mode trunk
 mls qos trust dscp
!
interface GigabitEthernet0/17
 shutdown
!
interface GigabitEthernet0/18
 shutdown
!
interface GigabitEthernet0/19
 shutdown
!
interface GigabitEthernet0/20
 shutdown
!
interface GigabitEthernet0/21
 shutdown
!
interface GigabitEthernet0/22
 shutdown
!
interface GigabitEthernet0/23
 shutdown
!
interface GigabitEthernet0/24
 shutdown
!
interface GigabitEthernet1/1
 description WAN interface
 no switchport
 ip address
 priority-queue out
 mls qos trust dscp
!
interface GigabitEthernet1/2
!
interface GigabitEthernet1/3
!
interface GigabitEthernet1/4
!
interface TenGigabitEthernet1/1
!
interface TenGigabitEthernet1/2
!
interface Vlan12
 ip address 192.168.1.1 255.255.255.0
 ip access-group 110 in
!
interface Vlan21
 ip address 192.168.2.1 255.255.255.0
 ip access-group 125 in
!
interface Vlan31
 ip address 192.168.3.1 255.255.255.0
 ip access-group 135 in
!
interface Vlan41
 ip address 192.168.8.1 255.255.255.0
 ip access-group 145 in
!
interface Vlan51
 ip address 192.168.5.1 255.255.255.0
 ip access-group 155 in
!
interface Vlan61
 ip address 192.168.6.1 255.255.255.0
 ip access-group 165 in
!
interface Vlan72
 ip address 10.0.104.1 255.255.255.0 secondary
 ip address 10.0.102.1 255.255.254.0
!
interface Vlan84
 ip address 172.16.172.1 255.255.255.0
!
interface Vlan114
 ip address 10.66.11.1 255.255.255.0
 ip access-group 115 in
!
interface Vlan221
 ip address 10.66.22.1 255.255.255.0
!
interface Vlan332
 ip address 10.66.33.1 255.255.255.0
!
interface Vlan443
 ip address 10.66.44.1 255.255.255.0
!
interface Vlan554
 ip address 10.66.55.1 255.255.255.0
 ip access-group 175 in
!
interface Vlan771
 ip address 10.66.77.1 255.255.255.0
 ip access-group 185 in
!
interface Vlan881
 ip address 10.66.88.1 255.255.255.0
!
interface Vlan112
 ip address 10.66.102.1 255.255.255.0
!
interface Vlan113
 ip address 10.66.103.1 255.255.255.0
!
interface Vlan114
 ip address 10.66.104.1 255.255.255.0
!
interface Vlan115
 ip address 10.66.105.1 255.255.255.0
!
interface Vlan116
 ip address 10.66.106.1 255.255.255.0
!
interface Vlan213
 ip address 10.66.203.1 255.255.255.0
I get a bunch of 'encapsulation failed', 'no adjacency' and 'no route' drops under 'show ip traffic'; not sure how pertinent that is.
Looking for a bit of guidance.
I have a customer that is using a Catalyst as their CPE; it also appears they are using this device as one of their core switches, as there are more than 20 VLANs connected to it through a number of trunks. The customer is reporting slow performance, which is evident from a large number of drops on the output queue of one of the trunks. The cable has been replaced, but the problem keeps recurring.
I conducted a number of Ping Plotter tracers, the tracers take the same path, but obviously 3 & 4 have an additional hop. All were conducted at the same time. The four tracers were:
1. To the WAN IP of the CPE
2. To one of the SVI IPs on the CPE
3. Host IP on one of the VLANs
4. Another host IP on a different VLAN.
I ran these persistent tracers over an hour or so. For tracers 3 & 4 I was seeing up to 25% packet loss, and the loss was occurring at the WAN IP; yet at the same time, tracers 1 & 2 showed no packet loss at the WAN IP, and no packet loss overall.
I need to find some sort of distinction here. My peers think the Catalyst could be low on resources with respect to operating as both a switch and a router, i.e. some sort of contention for resources, but I need to provide some evidence to support this.
If anyone can shed some light as to what may be happening that would be appreciated.
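One rule of thumb that may help firm up the evidence: loss that appears at an intermediate hop but does not persist to the final destination is usually that hop rate-limiting or deprioritizing ICMP aimed at its own control plane, not real forwarding loss; genuine path loss shows up at every hop downstream too. A small sketch of that check over per-hop loss percentages (the numbers are invented for illustration, not from the tracers above):

```python
def classify_hop_loss(per_hop_loss: list) -> list:
    """For each hop, compare its loss to the end-to-end (final hop) loss.
    Loss exceeding the end-to-end figure is likely ICMP deprioritization
    on that hop's control plane; loss at or below it is consistent with
    real forwarding loss on the path."""
    end_to_end = per_hop_loss[-1]
    verdicts = []
    for hop, loss in enumerate(per_hop_loss, start=1):
        if loss > end_to_end:
            verdicts.append(f"hop {hop}: {loss}% > end-to-end {end_to_end}% -> likely ICMP rate-limiting")
        else:
            verdicts.append(f"hop {hop}: {loss}% -> consistent with real path loss")
    return verdicts

# Example: 50% loss at hop 1 but 0% end-to-end flags hop 1 as rate-limiting
for line in classify_hop_loss([50.0, 0.0]):
    print(line)
```

By that logic, if the 25% loss in tracers 3 & 4 persists all the way to the hosts, it points at real forwarding loss for transit traffic (matching the output-queue drops) rather than a measurement artifact.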
So, the example call is to the same number, but it also occurs on calls to other numbers. The supplier that manages the Call Manager states the issue looks to be 16.1 not receiving the OLC, which is consistent with what you say. The drama I have is not being able to run a pcap on the remote site (as it's a Zyxel), nor on any routers along the path, as I haven't got the privileges to do so. Any thoughts? Ben
Thank you for taking the time to message me back.
I have attached another snip which covers the issue I described, i.e. when a call is made in quick succession after finishing one, it fails.