I have begun moving NTP from our 6500 to four Nexus 5ks as part of a core upgrade. The Nexus will act as the internal NTP servers for all of our switches. Any switch on the same VLAN as the Nexus has no issue syncing NTP from them. However, any switch whose traffic has to be routed to the Nexus shows the time source as insane.
The configuration on our Nexus is as follows (the Nexus are .11, .12, .13, and .14):
ntp peer 172.24.1.12
ntp peer 172.24.1.13
ntp peer 172.24.1.14
ntp server 22.214.171.124
clock timezone CST -6 0
clock summer-time CDT 2 Sun Mar 2:00 1 Sun Nov 2:00 60
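For reference, the peer and server state on the Nexus side can be checked with the standard NX-OS show commands (just the commands; output omitted here):

show ntp peers
show ntp peer-status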
Here is the configuration on one of our 3560's:
clock timezone CST -6
clock summer-time CDT recurring
ntp server 172.24.1.11
ntp server 172.24.1.13
ntp server 172.24.1.12
ntp server 172.24.1.14
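On the 3560 side, the association state (including which candidate sources are being rejected) can be checked with the usual IOS commands before resorting to debugs:

show ntp associations detail
show ntp status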
This same configuration worked when the switches were configured as NTP peers to our 6500 (172.24.1.1). The 6500's IP has been moved to an HSRP address across the Nexus, so I have pointed the switches at the individual IP of each Nexus.
Here is a debug ntp packet output from one of the 3560s:
.Mar 7 17:21:22: NTP: xmit packet to 172.24.1.11:
.Mar 7 17:21:22: leap 3, mode 3, version 3, stratum 0, ppoll 64
.Mar 7 17:21:22: rtdel 2445 (141.678), rtdsp C804D (12501.175), refid AC180101
.Mar 7 17:21:22: ref D2F4A4F5.9CBFA919 (06:32:53.612 CST6 Sun Feb 26 2012)
.Mar 7 17:21:22: org 00000000.00000000 (18:00:00.000 CST6 Thu Dec 31 1899)
.Mar 7 17:21:22: rec 00000000.00000000 (18:00:00.000 CST6 Thu Dec 31 1899)
.Mar 7 17:21:22: xmt D3021792.8D0B8963 (11:21:22.550 CST6 Wed Mar 7 2012)
Our mgmt interface is not routed, as we are using it for the vPC heartbeat. I did try ntp source-interface vlan 1, and that did not fix the issue. The odd thing I see in the debug is:
.Mar 7 17:21:22: rtdel 2445 (141.678), rtdsp C804D (12501.175), refid AC180101 (172.24.1.1)
172.24.1.1 is the HSRP address for VLAN 1. I have NTP pointed at 172.24.1.11, which is the VLAN 1 interface IP on NexusA. As a trial, I tried changing the ntp server command to point to 172.24.1.1, and that made no difference.
I did finally get this working, at least partially. Our MetroE cloud is layer 2. NexusA has an interface on it of 192.168.25.1. On the MetroE end, if I set the NTP source to the Ethernet interface that has the 192.168.25.x IP on it and set ntp server 192.168.25.1, it works.
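On the remote-site 3560 that workaround looks roughly like this (the interface name here is an assumption for illustration, not from my actual config):

ntp source GigabitEthernet0/1
ntp server 192.168.25.1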
That fixes the 3560 at each site. However, some sites have 2900s or 2950s hanging off the 3560, and I cannot source their NTP from an IP on that subnet. I guess I could have them pull time from the 3560 there; it's not ideal, but it would work. Is this the intended behavior/setup?
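Once the 3560 is itself synchronized, IOS will answer NTP queries from downstream clients without any extra server-side configuration, so the 2900s/2950s would only need something like the following (the address is a placeholder for the local 3560's VLAN IP, not a value from this thread):

ntp server <3560-vlan-ip>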
I believe what we ended up deciding was that the Nexus 5ks do not have a hardware clock, so they could not act as a master NTP server. We ended up getting a couple of Nexus 7ks for another project; using them as the masters and peering them solved the issue. The 5ks just source time from the 7ks now.
Thanks for your reply.
My issue may be a little different than the one you encountered. In my configuration I am able to get some, but not all, SVIs on Nexus 5548s to function as NTP servers.
I have two Nexus 5548 vPC peers (N5K-1 and N5K-2) configured for HSRP and as NTP servers. A downstream 2960S switch stack (STK-7) is the NTP client. STK-7 is connected to N5K-1 and N5K-2 with one physical link each, bundled into a port channel (multi-chassis EtherChannel on the STK-7 stack and vPC on the 5548 peers).
When the STK-7 NTP client is configured with NTP server IP addresses on the same network as the switch stack (10.3.0.0 in the diagram below), all possible IP addresses work (shown in green): the “real” IP addresses of each SVI on the 5548s (10.3.0.111 and 10.3.0.112) as well as the HSRP IP address (10.3.0.1).
When the STK-7 NTP client is configured with NTP server IP addresses on a different network than the switch stack (10.10.0.0 in the diagram below), only the “real” IP address of the SVI on the 5548 to which the EtherChannel load-balancing mechanism directs the client-to-server NTP traffic (N5K-2) works. In the diagram above, the client-to-server NTP traffic is sent on the link to N5K-2. In the diagram below, NTP server 10.10.0.112 is reported as sane, but NTP servers 10.10.0.111 and 10.10.0.1 are reported as insane (in red).
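To confirm which physical link the hash actually picks for a given flow, IOS on the stack can show the load-balancing method and, if the platform supports it, test the decision for a specific source/destination pair (the port-channel number and addresses below are assumptions for illustration):

show etherchannel load-balance
test etherchannel load-balance interface port-channel 1 ip 10.3.0.50 10.10.0.111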
I am concerned that the issue is related to my vPC configuration.
Cisco TAC has indicated that this behavior is normal.