11-21-2012 07:48 AM - edited 03-01-2019 04:52 PM
I had a situation where a 3560 switch (10.x.x.3) was showing down at a remote site on Solar Winds. I could ping it from my desktop, could ping it from the core switch, could ping it from the remote site MPLS WAN router, but could not ping the switch from our Solar Winds server (10.x.x.113).
This switch has no static routes, just a default route to the WAN router at 10.x.x.1.
Did a show ip route and got the output below, which explains why the Solar Winds server at 10.x.x.113 can't reach the 3560 switch (10.x.x.3): the switch has a host entry for 10.x.x.113 pointing to gateway 10.x.x.2. That gateway is the inside interface of the ASA for the backup VPN link to the corporate office, which isn't active because the MPLS link is up and carrying the traffic. (On a 3560 acting as a Layer 2 switch with ip routing disabled, show ip route displays the default gateway and the ICMP redirect cache, which is why it matches the show ip redirects output further down.)
remote_site_switch#show ip route
Default gateway is 10.x.x.1 <------------This is accurate, that's the IP address of the MPLS WAN Router.
Host Gateway Last Use Total Uses Interface
10.x.x.113 10.x.x.2 0:00 2197 Vlan1
Then did a show ip redirects on the remote_site_switch and got the below:
remote_site_switch#show ip redirects
Default gateway is 10.x.x.1
Host Gateway Last Use Total Uses Interface
10.x.x.113 10.x.x.2 0:00 2235 Vlan1
Did a show ip redirects on the MPLS_WAN_Router and got the below:
MPLS_WAN_Router#show ip redirects
Default gateway is not set
Host Gateway Last Use Total Uses Interface
ICMP redirect cache is empty
Did a show ip route on the MPLS_WAN_Router; all routes pointed to the MPLS PE router as expected, and there were no abnormal routes.
Then did a clear ip redirect on the remote_site_switch, which cleared the stuck entry; after that the Solar Winds server could ping the switch at 10.x.x.3 and the switch could ping the Solar Winds server at 10.x.x.113.
remote_site_switch#clear ip redirect
remote_site_switch#show ip route
Default gateway is 10.x.x.1
Host Gateway Last Use Total Uses Interface
ICMP redirect cache is empty
When I look at the logs on the MPLS WAN router, I see the MPLS link went down for about an hour, so the floating 0.0.0.0 0.0.0.0 10.x.x.2 100 route became the best route and traffic started flowing out the ASA VPN backup link. When the MPLS link recovered, traffic started flowing out the MPLS link again and the VPN tunnel was no longer the preferred path, but for some reason the switch kept a cached entry, only for the Solar Winds server, pointing to the ASA at 10.x.x.2.
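For anyone hitting the same thing, the moving parts look roughly like this. This is a sketch, not the actual config from the site: the interface name and next-hop placeholder are hypothetical, and the 10.x.x.2 line is the floating static with administrative distance 100 mentioned above. Disabling redirects on the router's LAN-facing interface stops it from seeding the switch's redirect cache during a failover in the first place:

```
! On the MPLS WAN router (10.x.x.1) -- interface name is a placeholder
ip route 0.0.0.0 0.0.0.0 <MPLS next hop>       ! primary default route via MPLS
ip route 0.0.0.0 0.0.0.0 10.x.x.2 100          ! floating static toward the ASA backup link
!
interface GigabitEthernet0/0
 description LAN toward remote_site_switch
 no ip redirects                               ! don't send ICMP redirects onto the LAN
```

The redirect happens because, during the outage, the router receives the switch's traffic on its LAN interface and its best exit (the ASA at 10.x.x.2) is on that same subnet, so it sends an ICMP redirect telling the switch to go to 10.x.x.2 directly; no ip redirects suppresses that behavior.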
Doing the clear ip redirect resolved the stuck route issue.
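If this keeps recurring and you can't change the redirect behavior upstream, one option (assuming the IOS image on the switch supports Embedded Event Manager) is a small EEM applet that flushes the redirect cache whenever a line-protocol up transition is logged. The applet name, action numbering, and syslog pattern here are illustrative:

```
event manager applet FLUSH_REDIRECTS
 event syslog pattern "%LINEPROTO-5-UPDOWN: Line protocol on Interface .* changed state to up"
 action 1.0 cli command "enable"
 action 2.0 cli command "clear ip redirect"
 action 3.0 syslog msg "Cleared ICMP redirect cache after link transition"
```

This trades a brief, harmless cache flush on every link flap for never having a stale redirect entry survive a failover.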
Hi Adrian,
Useful post. Did you try to simulate this situation again and see any other host IPs in the redirect cache?
Regards
Umesh Shetty
Umesh,
We've had one unplanned and one planned failover to the backup ASA link. After the primary MPLS link was restored, no stuck route issues reappeared.
Regards,
Adrian Martinez
Thanks, brother.
Issue resolved.
Regards,
Syed Ali Shamshad
I just spent a couple of hours troubleshooting this same issue after moving a route from the Internet to an MPLS link. It made no sense why this route would get stuck, with no indication of where it came from. For me this was happening on multiple 3560 switches; I'm not sure what your versions were. Thanks for your post, as it cleared up my issue. I was actually about to just bounce the switch.
Thanks
Raul
Excellent information share - thanks very much.
Exactly the same issue, including the link from Solarwinds. No idea why this happened, but clearing the redirect cache resolved the issue.
I was having the same problem Adrian.
I applied the "clear ip redirect" and the switch cleared the stale entries.
Thanks!