02-11-2011 04:51 AM - edited 03-06-2019 03:30 PM
Hello,
I would like to ask whether anybody has experience with slow response from a VLAN interface on an L3 switch (3750-X).
I have two pairs of 3750-X switches. Each pair is configured as a stack, and the two stacks are connected together via two TenGigabitEthernet ports configured as dot1q trunks.
I created a new VLAN on my Cisco 3750-X switch, created a new VLAN interface, and assigned an appropriate IP address. When I ping the VLAN interface, I notice a slightly slow response from time to time.
Reply from 10.252.20.254: bytes=32 time=1ms TTL=255
Reply from 10.252.20.254: bytes=32 time=36ms TTL=255
Reply from 10.252.20.254: bytes=32 time<1ms TTL=255
Reply from 10.252.20.254: bytes=32 time<1ms TTL=255
Reply from 10.252.20.254: bytes=32 time<1ms TTL=255
Reply from 10.252.20.254: bytes=32 time=3ms TTL=255
Reply from 10.252.20.254: bytes=32 time=28ms TTL=255
Reply from 10.252.20.254: bytes=32 time=6ms TTL=255
Reply from 10.252.20.254: bytes=32 time<1ms TTL=255
Reply from 10.252.20.254: bytes=32 time<1ms TTL=255
Reply from 10.252.20.254: bytes=32 time<1ms TTL=255
Reply from 10.252.20.254: bytes=32 time<1ms TTL=255
Reply from 10.252.20.254: bytes=32 time<1ms TTL=255
Reply from 10.252.20.254: bytes=32 time<1ms TTL=255
Reply from 10.252.20.254: bytes=32 time<1ms TTL=255
Reply from 10.252.20.254: bytes=32 time<1ms TTL=255
Reply from 10.252.20.254: bytes=32 time<1ms TTL=255
Reply from 10.252.20.254: bytes=32 time<1ms TTL=255
Reply from 10.252.20.254: bytes=32 time<1ms TTL=255
Reply from 10.252.20.254: bytes=32 time<1ms TTL=255
Do you think this is normal, or is it possible to solve these slow responses somehow? If I ping two connected servers in the same VLAN, their response is under 1 ms all the time.
So why does the VLAN interface sometimes respond this slowly?
Please see the config below:
Current configuration : 17783 bytes
!
version 12.2
no service pad
service timestamps debug datetime msec
service timestamps log datetime msec
no service password-encryption
no service dhcp
!
hostname SomeHostName
!
boot-start-marker
boot-end-marker
!
no aaa new-model
switch 1 provision ws-c3750x-48
switch 2 provision ws-c3750x-48
system mtu routing 1500
!
!
no ip domain-lookup
ip domain-name some.domain.com
!
!
crypto pki trustpoint TP-self-signed-3107166720
enrollment selfsigned
subject-name cn=IOS-Self-Signed-Certificate-3107166720
revocation-check none
rsakeypair TP-self-signed-3107166720
!
spanning-tree mode pvst
spanning-tree extend system-id
!
!
!
!
vlan internal allocation policy ascending
!
ip ssh version 2
!
!
interface FastEthernet0
ip address 192.168.1.13 255.255.255.0
!
interface GigabitEthernet1/0/1
switchport access vlan 20
switchport mode access
switchport nonegotiate
no cdp enable
no cdp tlv server-location
no cdp tlv app
!
interface GigabitEthernet1/0/2
switchport access vlan 20
switchport mode access
switchport nonegotiate
no cdp enable
no cdp tlv server-location
no cdp tlv app
!
interface GigabitEthernet1/0/3
switchport access vlan 20
switchport mode access
switchport nonegotiate
no cdp enable
no cdp tlv server-location
no cdp tlv app
!
interface GigabitEthernet1/0/4
switchport access vlan 20
switchport mode access
switchport nonegotiate
no cdp enable
no cdp tlv server-location
no cdp tlv app
!
interface GigabitEthernet1/0/5
switchport access vlan 20
switchport mode access
switchport nonegotiate
no cdp enable
no cdp tlv server-location
no cdp tlv app
!
!
!
!
!
!
interface GigabitEthernet1/0/47
switchport access vlan 21
!
interface GigabitEthernet1/0/48
switchport access vlan 21
!
interface GigabitEthernet1/1/1
!
interface GigabitEthernet1/1/2
!
interface GigabitEthernet1/1/3
!
interface GigabitEthernet1/1/4
!
interface TenGigabitEthernet1/1/1
switchport trunk encapsulation dot1q
switchport trunk allowed vlan 20,21
switchport mode trunk
switchport nonegotiate
!
!
!
!
!
interface TenGigabitEthernet1/1/2
!
interface GigabitEthernet2/0/1
switchport access vlan 20
switchport mode access
switchport nonegotiate
no cdp enable
no cdp tlv server-location
no cdp tlv app
!
interface GigabitEthernet2/0/2
switchport access vlan 20
switchport mode access
switchport nonegotiate
no cdp enable
no cdp tlv server-location
no cdp tlv app
!
interface GigabitEthernet2/0/3
switchport access vlan 20
switchport mode access
switchport nonegotiate
no cdp enable
no cdp tlv server-location
no cdp tlv app
!
interface GigabitEthernet2/0/4
switchport access vlan 20
switchport mode access
switchport nonegotiate
no cdp enable
no cdp tlv server-location
no cdp tlv app
!
interface GigabitEthernet2/0/5
switchport access vlan 20
switchport mode access
switchport nonegotiate
no cdp enable
no cdp tlv server-location
no cdp tlv app
!
interface GigabitEthernet2/0/6
switchport access vlan 20
switchport mode access
switchport nonegotiate
no cdp enable
no cdp tlv server-location
no cdp tlv app
!
interface TenGigabitEthernet2/1/1
switchport trunk encapsulation dot1q
switchport trunk allowed vlan 20,21
switchport mode trunk
switchport nonegotiate
!
interface TenGigabitEthernet2/1/2
!
interface Vlan1
no ip address
shutdown
!
interface Vlan20
ip address 10.252.20.252 255.255.255.0
standby 1 ip 10.252.20.254
standby 1 priority 101
standby 1 preempt
standby 1 track 1 decrement 10
!
interface Vlan21
ip address 10.252.21.252 255.255.255.0
standby 1 ip 10.252.21.254
standby 1 priority 101
standby 1 preempt
standby 1 track 2 decrement 10
!
no ip http server
ip http secure-server
ip sla enable reaction-alerts
!
!
line con 0
login local
line vty 0 4
exec-timeout 0 0
login local
transport input ssh
line vty 5 15
login
!
end
Thank you for your help :-)
Jan.
02-11-2011 05:20 AM
Hi Jan,
So I understand that you are seeing slow ping responses when you ping the newly created VLAN interface. I would like to inform you that if there is no other indication of a problem on the network, an intermittent increase in ping response time is expected, because ICMP directed at the switch itself is treated as low priority. Many more important tasks take precedence over generating ping responses. Therefore, average ping response times of 7-10 ms are typical, even on a completely idle switch. On a particularly busy switch, response times can be even longer.
Does that answer your question? What average ping response time do you see? I do not see anything in the ping output you attached that indicates a problem on the switch or the network.
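To put a number on the variation rather than eyeballing individual replies, the pasted ping output can be summarized programmatically. A minimal sketch, assuming Windows-style `Reply from … time=3ms` / `time<1ms` lines like those shown above:

```python
import re

def ping_stats(lines):
    """Parse Windows-style ping replies and return (count, avg_ms, max_ms).

    'time<1ms' means under one millisecond; it is counted as 0 ms here
    for averaging purposes.
    """
    times = []
    for line in lines:
        m = re.search(r"time([=<])(\d+)ms", line)
        if m:
            times.append(0 if m.group(1) == "<" else int(m.group(2)))
    if not times:
        return (0, 0.0, 0)
    return (len(times), sum(times) / len(times), max(times))

# A few of the replies from the post above:
sample = [
    "Reply from 10.252.20.254: bytes=32 time=1ms TTL=255",
    "Reply from 10.252.20.254: bytes=32 time=36ms TTL=255",
    "Reply from 10.252.20.254: bytes=32 time<1ms TTL=255",
    "Reply from 10.252.20.254: bytes=32 time=3ms TTL=255",
]
count, avg, worst = ping_stats(sample)
print(f"{count} replies, avg {avg:.1f} ms, max {worst} ms")
```

Over a large sample this makes it easy to see whether the occasional 30+ ms reply actually moves the average, which is what matters for deciding if there is a problem.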
Hope that helps!
Cheers,
Abhishek.
02-11-2011 05:49 AM
Hi Abhishek,
thank you for your reply. Yes, it partly answers my question. Regarding CPU load on the switches, the load is also a little higher than expected.
There is still no significant traffic: the new servers are connected to the switches, but most of them are not configured yet.
So it confuses me:
no traffic = 21 percent CPU load ("no traffic" meaning just DHCP Discover transactions, HSRP hello packets, and STP),
and
no traffic = average 3 ms ping response over 500 pings.
So do you think this is normal behavior for 3750 switches?
Thanks in advance.
Jan.
02-11-2011 05:59 AM
Hi Jan,
A ping response average of 3 ms is not something you should worry about. Regarding the CPU, 21% is not high. Please attach the output of 'show proc cpu sorted | e 0.00%' so that we can see which process is keeping the CPU at 21%.
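When reviewing that output, what matters is which named processes sit above a few percent in the 5-second column. A minimal sketch of pulling those out automatically, assuming the column layout of `show processes cpu sorted` as pasted later in this thread:

```python
import re

def top_processes(output, threshold=1.0):
    """Return (process_name, five_sec_pct) pairs whose 5-second CPU
    percentage exceeds `threshold`, parsed from 'show processes cpu' text.

    Data rows start with a PID and carry three percentage columns
    (5Sec, 1Min, 5Min) followed by a TTY and the process name.
    """
    results = []
    for line in output.splitlines():
        m = re.match(
            r"\s*\d+\s+\d+\s+\d+\s+\d+\s+(\d+\.\d+)%\s+\d+\.\d+%\s+\d+\.\d+%\s+\S+\s+(.*)",
            line,
        )
        if m and float(m.group(1)) > threshold:
            results.append((m.group(2).strip(), float(m.group(1))))
    return results

# Three rows taken from the output pasted below in this thread:
sample = """\
 139    78657     14470       5435  7.98% 10.14%  9.94%   0 Hulc LED Process
  67    22048      3758       5866  2.87%  2.80%  2.50%   0 RedEarth Tx Mana
 178      339      2811        120  0.15%  0.05%  0.02%   0 IP Input
"""
print(top_processes(sample))
```

On the sample rows this surfaces only the Hulc LED Process and RedEarth Tx Mana, which is exactly the shortlist one would investigate.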
Cheers,
Abhishek.
02-14-2011 05:03 AM
Hi abhischa,
sorry for the late answer. Here is 'show process cpu sorted'. It is not complete, but it is sorted, so the highest utilization is at the top. As you can see, the Hulc LED Process is taking most of the CPU time, which again I do not understand :-) Is it possible that this is because the server machines are not installed yet, so their network adapters go up and down again and again as the machines reboot when no operating system is found, and that this takes CPU time on the switch?
What is true is that the log contains many messages about interfaces going down and up, and the port LEDs also change between orange and green very frequently.
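To check whether the flapping described above is concentrated on particular ports, the syslog can be tallied per interface. A minimal sketch, assuming standard Cisco %LINK-3-UPDOWN syslog lines (the timestamps and interface names below are illustrative):

```python
import re
from collections import Counter

def count_link_flaps(log_lines):
    """Count %LINK-3-UPDOWN state changes per interface to spot flapping ports."""
    flaps = Counter()
    for line in log_lines:
        m = re.search(
            r"%LINK-3-UPDOWN: Interface (\S+), changed state to (up|down)", line
        )
        if m:
            flaps[m.group(1)] += 1
    return flaps

sample = [
    "Feb 14 12:01:03: %LINK-3-UPDOWN: Interface GigabitEthernet1/0/3, changed state to down",
    "Feb 14 12:01:41: %LINK-3-UPDOWN: Interface GigabitEthernet1/0/3, changed state to up",
    "Feb 14 12:02:17: %LINK-3-UPDOWN: Interface GigabitEthernet1/0/3, changed state to down",
]
print(count_link_flaps(sample).most_common(1))
```

A port that dominates the count corresponds to a rebooting server, which can then be cross-checked against the CPU figures.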
CPU utilization for five seconds: 25%/0%; one minute: 24%; five minutes: 23%
PID Runtime(ms) Invoked uSecs 5Sec 1Min 5Min TTY Process
139 78657 14470 5435 7.98% 10.14% 9.94% 0 Hulc LED Process
67 22048 3758 5866 2.87% 2.80% 2.50% 0 RedEarth Tx Mana
106 11359 1381 8225 1.75% 1.65% 1.45% 0 hpm counter proc
8 1012 13 77846 1.75% 0.21% 0.12% 0 Licensing Auto U
232 9297 36025 258 1.43% 1.44% 1.10% 0 HSRP Common
87 2497 18271 136 1.43% 0.74% 0.36% 0 HLFM address lea
145 530 291 1821 1.11% 0.31% 0.12% 1 SSH Process
180 192 1287 149 0.31% 0.07% 0.01% 0 IP ARP Track
149 2291 177 12943 0.31% 0.31% 0.30% 0 HQM Stack Proces
89 505 18409 27 0.15% 0.11% 0.04% 0 HLFM address ret
150 853 518 1646 0.15% 0.15% 0.12% 0 HRPC qos request
66 1047 5678 184 0.15% 0.11% 0.11% 0 RedEarth I2C dri
86 32 519 61 0.15% 0.01% 0.00% 0 HRPC hlfm reques
178 339 2811 120 0.15% 0.05% 0.02% 0 IP Input
192 426 1470 289 0.15% 0.03% 0.03% 0 Spanning Tree
78 325 3249 100 0.15% 0.04% 0.00% 0 yeti2_emac_proce
16 0 14 0 0.00% 0.00% 0.00% 0 IPC Dynamic Cach
17 0 1 0 0.00% 0.00% 0.00% 0 IPC Zone Manager
15 0 1 0 0.00% 0.00% 0.00% 0 IFS Agent Manage
20 0 792 0 0.00% 0.00% 0.00% 0 IPC Deferred Por
18 0 792 0 0.00% 0.00% 0.00% 0 IPC Periodic Tim
PID Runtime(ms) Invoked uSecs 5Sec 1Min 5Min TTY Process
19 0 7 0 0.00% 0.00% 0.00% 0 IPC Managed Time
23 0 3 0 0.00% 0.00% 0.00% 0 PrstVbl
24 0 1 0 0.00% 0.00% 0.00% 0 Crash writer
14 59 13 4538 0.00% 0.00% 0.00% 0 Entity MIB API
26 0 1 0 0.00% 0.00% 0.00% 0 License IPC serv
27 0 665 0 0.00% 0.00% 0.00% 0 GraphIt
21 8 27 296 0.00% 0.00% 0.00% 0 IPC Seat Manager
22 0 1 0 0.00% 0.00% 0.00% 0 IPC Session Serv
13 0 1 0 0.00% 0.00% 0.00% 0 Policy Manager
31 0 792 0 0.00% 0.00% 0.00% 0 Dynamic ARP Insp
32 0 1 0 0.00% 0.00% 0.00% 0 Critical Bkgnd
33 183 1410 129 0.00% 0.01% 0.00% 0 Net Background
34 0 1 0 0.00% 0.00% 0.00% 0 IDB Work
35 17 262 64 0.00% 0.00% 0.00% 0 Logger
36 0 661 0 0.00% 0.00% 0.00% 0 TTY Background
25 0 3 0 0.00% 0.00% 0.00% 0 License IPC stat
12 0 2 0 0.00% 0.00% 0.00% 0 AAA high-capacit
39 16 8 2000 0.00% 0.00% 0.00% 0 IF-MGR control p
40 0 183 0 0.00% 0.00% 0.00% 0 IF-MGR event pro
41 0 1 0 0.00% 0.00% 0.00% 0 Inode Table Dest
42 0 12 0 0.00% 0.00% 0.00% 0 Net Input
43 48 136 352 0.00% 0.00% 0.00% 0 Compute load avg
Thank you very much.
Jan.
02-14-2011 05:23 AM
Hi Jan,
The CPU utilisation on the switch is absolutely fine.
The Hulc LED Process taking around 15-20% CPU is perfectly normal; that is a known issue with the 3750 series switches.
From the output, there is no particular process that is taking large CPU cycles.
Rebooting of the devices connected to the switches doesn't have anything to do with CPU cycles.
CPU utilisation of around 50% and above is considered high; below that you don't need to worry.
HTH
Please rate helpful posts
Regards,
Swati
02-14-2011 06:46 AM
Hulc LED Process usage at this level is perfectly normal for this platform. I would only start worrying if you saw it much higher than what you are seeing now (typically at around 50% or so).
02-15-2011 09:26 AM
Hi Jan,
As mentioned above, there is no issue with the cpu at 21% and the Hulc LED Process taking around 10% of the cpu is normal.
You can check if you hit the following bug:
CSCsi78581 Hulc LED Process uses 10-15% CPU on Catalyst 3750/3560.
Symptom:
CPU utilization for the Hulc LED Process of around 10-15% is seen on Cisco Catalyst 3750 and 3560 switches, even with no ports connected. 'show process cpu' shows higher than expected CPU utilization.
Conditions:
This issue may be seen on a Catalyst 3750 or 3560 switch, typically with no ports connected.
Workaround:
None. This is a minor issue and does not affect the hardware forwarding performance of the switch.
Unless you see an impact on the network, you need not worry about the ping response and cpu level of the switch.
Hope that helps.
Cheers,
Abhishek.
02-16-2011 07:56 AM
Hi Abhishek,
thank you for the investigation. It really does seem to be a bug, which can be very confusing for customers. From my point of view it should not happen.
Cheers,
Jan.
02-16-2011 08:22 AM
Yes Jan, I understand your concern. Unfortunately a few bugs exist in all code. Luckily for us, we were able to identify the bug we were hitting :)
Cheers
-Abhishek.