NTP multicast/broadcast leaking on peering LAN (odd)

Getting a strange behaviour on one of our devices, which is leaking NTP packets to the NTP multicast address (224.0.1.1) onto the AMS-IX peering LAN.

This is strange behaviour because:

1. ntp is disabled on the interface

2. egress ACLs on the interface specifically 'deny' all multicast sources or destinations (or both)

3. the device is not an NTP master

As part of debugging, I added an entry to the egress ACL to specifically deny anything with a destination of 224.0.1.1 as well, to see whether it gets matched in the ACL match count... no increments at all.

Created a packet-debug ACL to match specifically a destination of 224.0.1.1 and applied it to debug ip packet detail, and lo and behold, the traffic is definitely still being sent, bypassing both the explicit ntp disable on the interface and the packet filter applied on egress of the interface.
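
For reference, the debug ACL (numbered 199 here) was roughly the following -- the exact entry is reconstructed from memory:

! approximate reconstruction: match anything destined to 224.0.1.1
access-list 199 permit ip any host 224.0.1.1

core2-d#debug ip packet 199 detail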

Feb 24 08:53:20.392 CET: IP: s=X.X.X.X (local), d=224.0.1.1 (GigabitEthernet2/7), len 76, sending broad/multicast
Feb 24 08:53:20.392 CET:     UDP src=0, dst=123
Feb 24 08:53:20.392 CET: IP: s=X.X.X.X (local), d=224.0.1.1 (GigabitEthernet2/7), len 76, output feature
Feb 24 08:53:20.392 CET:     UDP src=0, dst=123, Access List(51), rtype 0, forus FALSE, sendself FALSE, mtu 0
Feb 24 08:53:20.392 CET: IP: s=X.X.X.X (local), d=224.0.1.1 (GigabitEthernet2/7), len 76, post-encap feature
Feb 24 08:53:20.392 CET:     UDP src=0, dst=123, MTU Processing(4), rtype 0, forus FALSE, sendself FALSE, mtu 0
Feb 24 08:53:20.392 CET: IP: s=X.X.X.X (local), d=224.0.1.1 (GigabitEthernet2/7), len 76, post-encap feature
Feb 24 08:53:20.392 CET:     UDP src=0, dst=123, IP Protocol Output Counter(5), rtype 0, forus FALSE, sendself FALSE, mtu 0
Feb 24 08:53:20.392 CET: IP: s=X.X.X.X (local), d=224.0.1.1 (GigabitEthernet2/7), len 76, post-encap feature
Feb 24 08:53:20.392 CET:     UDP src=0, dst=123, IP Sendself Check(8), rtype 0, forus FALSE, sendself FALSE, mtu 0
Feb 24 08:53:20.392 CET: IP: s=X.X.X.X (local), d=224.0.1.1 (GigabitEthernet2/7), len 76, post-encap feature
Feb 24 08:53:20.392 CET:     UDP src=0, dst=123, HW Shortcut Installation(15), rtype 0, forus FALSE, sendself FALSE, mtu 0
Feb 24 08:53:20.392 CET: IP: s=X.X.X.X (local), d=224.0.1.1 (GigabitEthernet2/7), len 76, sending full packet
Feb 24 08:53:20.392 CET:     UDP src=0, dst=123

What is even stranger is that when I globally disable NTP on the device, the traffic is STILL being sent.

I note also that this traffic is ONLY being sent on this interface (none of the other interfaces).

The other curious aspect is the UDP source port of 0 sourced by the local interface.

I have specifically disabled almost all "chattery" layer2 and multicast protocols on the interface as part of the standard peering-LAN configuration.  (our other peering locations don't seem to have this problem, and all are running pretty much identical hardware and IOS configuration/version).

Anyone happen to have seen anything like this before, or should I resort to Newtonian Maintenance (from an 8th floor window)?

Thanks

L.

14 Replies

Peter Paluch
Cisco Employee

Hello,

Please hold off on the Newtonian Maintenance a bit longer.

In any case, what is the device in question? Can you also post the configuration of the respective interface (or, if it is a Layer2 switchport, the associated Layer3 SVI) and the output of the commands show ntp associations detail and show ntp status please?

If the multicasts are locally generated, the egress ACL will not prevent them from leaking out the interface. ACLs applied to interfaces in the out direction do not apply to locally originated traffic.
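
To illustrate, the usual way to filter locally originated packets would be local policy routing, which -- unlike an outbound interface ACL -- is applied to router-generated traffic. A rough sketch only (the ACL number and route-map name are arbitrary examples, and I have not verified that this would catch these particular NTP multicasts):

! example only -- drop locally generated packets destined to 224.0.1.1
access-list 151 permit udp any host 224.0.1.1 eq 123
!
route-map DROP-LOCAL-NTP permit 10
 match ip address 151
 set interface Null0
!
ip local policy route-map DROP-LOCAL-NTP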

Also, do the NTP multicasts actually contain sensible data, or are they malformed? Can you verify that using a packet analyzer, e.g. Wireshark?

Lots of questions - I apologize, but I am trying to collect as much relevant information as possible.

Best regards,

Peter

Hi Peter,

The device in question is a 6500 with a Sup720-3BXL and 12.2(33)SXI2a (same as on our sister edge devices and the distribution VSS1440s).

Interface config (sanitised) -- standard edge peering interface:

interface GigabitEthernet2/7
description AMSIX xxxxxxxxxxxxx
ip address 195.69.XXX.XXX 255.255.252.0
ip access-group peer_in in
ip access-group peer_out out
no ip redirects
no ip proxy-arp
ip flow ingress
ipv6 address 2001:7F8:1::A5XX:XXXX:X/64
ipv6 traffic-filter DENY-BETISES in
ipv6 nd ra suppress
no ipv6 mld router
ntp disable
mls netflow sampling
no cdp enable
no mop enabled
no lldp transmit
end

The peer_out ACL is our standard ACL to trap RFC1918 and multicast addresses as either source or destination. I added an entry to the top of the list specifically to stop a destination of 224.0.1.1 (by address only), and a second to block the combination of address and protocol (UDP with a destination of 224.0.1.1 and a destination port of 123), but of course these two are showing no matches, probably because, as you said, the packets are being locally generated.
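
For reference, the two added entries were of roughly this form (peer_out is an extended named ACL; the exact syntax below is approximate):

ip access-list extended peer_out
 ! approximate reconstruction of the two diagnostic entries, inserted ahead of the existing rules
 deny ip any host 224.0.1.1
 deny udp any host 224.0.1.1 eq 123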

I am unable to do any packet analysis on the egress, as it is a remote circuit carrying live traffic.

Regards,

L.

Forgot the NTP output you requested... (I had to re-enable NTP for the sync across our network anyway.)

core2-d#sh ntp status
Clock is synchronized, stratum 2, reference is 192.93.2.20
nominal freq is 250.0000 Hz, actual freq is 249.9987 Hz, precision is 2**18
reference time is D110BAF7.7C06430A (12:09:43.484 CET Thu Feb 24 2011)
clock offset is 1.0921 msec, root delay is 2.62 msec
root dispersion is 11.25 msec, peer dispersion is 0.14 msec
core2-d#sh ntp assoc detail
173.246.96.1 configured, candidate, sane, valid, stratum 2
ref ID 192.93.2.20, time D110BAB6.8BD95A5B (12:08:38.546 CET Thu Feb 24 2011)
our mode client, peer mode server, our poll intvl 64, peer poll intvl 64
root delay 85.59 msec, root disp 11.28, reach 377, sync dist 97.778
delay 85.17 msec, offset -0.0664 msec, dispersion 0.79
precision 2**18, version 3
org time D110BAF8.863DCF64 (12:09:44.524 CET Thu Feb 24 2011)
rcv time D110BAF8.91296BE6 (12:09:44.567 CET Thu Feb 24 2011)
xmt time D110BAF8.7B59355F (12:09:44.481 CET Thu Feb 24 2011)
filtdelay =    85.17   85.13   85.13   85.25   85.42  276.03   85.49   85.34
filtoffset =   -0.07   -0.03   -0.05   -0.04   -0.06   95.40   -0.04    0.04
filterror =     0.02    0.58    1.56    2.29    2.88    3.75    4.73    5.71

195.66.241.3 configured, selected, sane, valid, stratum 1
ref ID .PPS., time D110BB0D.CFC2830E (12:10:05.811 CET Thu Feb 24 2011)
our mode client, peer mode server, our poll intvl 64, peer poll intvl 64
root delay 0.00 msec, root disp 0.23, reach 377, sync dist 6.134
delay 9.57 msec, offset 0.8636 msec, dispersion 1.13
precision 2**20, version 3
org time D110BB0E.7CDA48C7 (12:10:06.487 CET Thu Feb 24 2011)
rcv time D110BB0E.7DDB6222 (12:10:06.491 CET Thu Feb 24 2011)
xmt time D110BB0E.7B662558 (12:10:06.482 CET Thu Feb 24 2011)
filtdelay =     9.57   10.07    9.25    9.55    9.63  226.96    9.28   28.96
filtoffset =    0.86    1.01    0.96    0.79    0.93 -107.92    0.89   10.75
filterror =     0.02    0.99    1.97    2.94    3.92    4.90    5.87    6.85

192.93.2.20 configured, our_master, sane, valid, stratum 1
ref ID .GPSi., time D110BAE4.00024418 (12:09:24.000 CET Thu Feb 24 2011)
our mode client, peer mode server, our poll intvl 64, peer poll intvl 64
root delay 0.00 msec, root disp 1.25, reach 377, sync dist 3.052
delay 2.62 msec, offset 1.0921 msec, dispersion 0.14
precision 2**20, version 3
org time D110BAF7.7BF7BBF5 (12:09:43.484 CET Thu Feb 24 2011)
rcv time D110BAF7.7C06430A (12:09:43.484 CET Thu Feb 24 2011)
xmt time D110BAF7.7B5931D5 (12:09:43.481 CET Thu Feb 24 2011)
filtdelay =     2.62    2.38    3.02    2.70    2.18    2.67    2.11    2.79
filtoffset =    1.09    1.16    0.78    1.03    1.03    1.16    1.10    1.00
filterror =     0.02    0.99    1.97    2.94    3.92    4.90    5.87    6.85

195.66.241.10 configured, selected, sane, valid, stratum 1
ref ID .GPS., time D110BB01.0964D5A6 (12:09:53.036 CET Thu Feb 24 2011)
our mode client, peer mode server, our poll intvl 64, peer poll intvl 64
root delay 0.00 msec, root disp 0.32, reach 377, sync dist 5.478
delay 9.84 msec, offset 0.7904 msec, dispersion 0.12
precision 2**19, version 3
org time D110BB06.7CDD8D62 (12:09:58.487 CET Thu Feb 24 2011)
rcv time D110BB06.7DEC51C9 (12:09:58.491 CET Thu Feb 24 2011)
xmt time D110BB06.7B62CD06 (12:09:58.481 CET Thu Feb 24 2011)
filtdelay =     9.84    9.90    9.98   10.12    9.78   10.30    9.64    9.54
filtoffset =    0.79    0.86    0.70    1.07    0.91    1.12    0.96    0.88
filterror =     0.02    0.99    1.97    2.94    3.92    4.90    5.87    6.85

core2-d#

Hello,

Thank you for your outputs. Hmmm, I see no obvious issue in what you have posted so far. Intriguing.

Do you believe you could configure a SPAN session to monitor the egress port, and thus get access to the NTP multicasts?
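
Something along these lines should do it on the 6500 (the destination port below is only a placeholder for wherever your capture station would connect):

! Gi2/8 is only an example destination port
monitor session 1 source interface GigabitEthernet2/7 tx
monitor session 1 destination interface GigabitEthernet2/8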

Best regards,

Peter

Hello,

One new thought. The ntp disable command actually only disables processing of received NTP packets on an interface, but it does not seem to influence the sending of such packets (although sending them should be off by default as well).

I wonder: if you configure ntp multicast 239.1.2.3 on the interface, do the multicasts get re-addressed to the multicast group 239.1.2.3 instead of being sent to the default address 224.0.1.1? And after that, when you issue the command no ntp multicast, do the multicasts stop, or do they revert back to 224.0.1.1?
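
In other words, something like this on the interface (239.1.2.3 is just an arbitrary test group):

interface GigabitEthernet2/7
 ! test only -- re-address the NTP multicasts to an arbitrary group
 ntp multicast 239.1.2.3
 ! ...then remove it again and watch what the debug shows
 no ntp multicast 239.1.2.3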

In any case, the source port of 0 suggests that there is something wrong going on, and it points to an IOS bug.

Best regards,

Peter

Will try that first before doing further packet analysis using a SPAN port (mainly because the device is remote and it would mean sending an engineer on site to deal with it...)

Will update with my findings.

welp.. here goes.

I had to enable NTP on the interface in order to set the NTP multicast group manually, so the software did in fact believe that NTP was disabled on the interface.
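
Roughly speaking, the interface-level changes were (reconstructed from memory):

interface GigabitEthernet2/7
 ! reconstructed -- NTP had to be re-enabled before the group could be set
 no ntp disable
 ntp multicast 239.1.2.3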

After setting the multicast group to 239.1.2.3, these packets were indeed redirected to that group address; however, there is a difference:

Feb 24 14:10:54.537 CET: IP: s=X.X.X.X (local), d=239.1.2.3 (GigabitEthernet2/7), len 76, sending broad/multicast
Feb 24 14:10:54.537 CET:     UDP src=123, dst=123
Feb 24 14:10:54.537 CET: IP: s=X.X.X.X (local), d=239.1.2.3 (GigabitEthernet2/7), len 76, output feature
Feb 24 14:10:54.537 CET:     UDP src=123, dst=123, Access List(51), rtype 0, forus FALSE, sendself FALSE, mtu 0
Feb 24 14:10:54.537 CET: IP: s=X.X.X.X (local), d=239.1.2.3 (GigabitEthernet2/7), len 76, post-encap feature
Feb 24 14:10:54.537 CET:     UDP src=123, dst=123, MTU Processing(4), rtype 0, forus FALSE, sendself TRUE, mtu 0
Feb 24 14:10:54.537 CET: IP: s=X.X.X.X (local), d=239.1.2.3 (GigabitEthernet2/7), len 76, post-encap feature
Feb 24 14:10:54.537 CET:     UDP src=123, dst=123, IP Protocol Output Counter(5), rtype 0, forus FALSE, sendself TRUE, mtu 0
Feb 24 14:10:54.537 CET: IP: s=X.X.X.X (GigabitEthernet2/7), d=239.1.2.3, len 76, input feature
Feb 24 14:10:54.537 CET:     UDP src=123, dst=123, Ingress-NetFlow(13), rtype 0, forus FALSE, sendself FALSE, mtu 0
Feb 24 14:10:54.537 CET: IP: s=X.X.X.X (GigabitEthernet2/7), d=239.1.2.3, len 76, access denied
Feb 24 14:10:54.537 CET:     UDP src=123, dst=123

Most notably, the source port is now 123, but also the last entry in the packet debug is "access denied", so I suspect it was therefore not actually transmitted down the wire. (I will contact AMS-IX shortly to do a test.)

Next step, removing it: I could not simply do a "no ntp multicast", as the command itself defaults to 224.0.1.1 as the group, and the software threw an error saying that the multicast peer 224.0.1.1 does not exist:

core2-d(config-if)#no ntp multic
%NTP: Multicast peer 224.0.1.1 does not exist
core2-d(config-if)#no ntp multic 239.1.2.3
core2-d(config-if)#

After issuing this and debugging again, the packets were still being sent, and reverted back to 224.0.1.1, yet again with a source port of 0.

Feb 24 14:12:16.534 CET: IP: s=X.X.X.X (local), d=224.0.1.1 (GigabitEthernet2/7), len 76, sending full packet
Feb 24 14:12:16.534 CET:     UDP src=0, dst=123

Next, after returning to the interface and re-disabling NTP on it entirely, the packets are still being sent.

core2-d(config)#int gi2/7
core2-d(config-if)#ntp disa
core2-d(config-if)#end
core2-d#
Feb 24 14:12:33.234 CET: %SYS-5-CONFIG_I: Configured from console by leland on vty1 (217.70.181.2)
core2-d#deb ip packet 199 detail
IP packet debugging is on (detailed) for access list 199
core2-d#

{{SNIPPED FOR BREVITY}}

Feb 24 14:13:20.534 CET: IP: s=X.X.X.X (local), d=224.0.1.1 (GigabitEthernet2/7), len 76, sending full packet
Feb 24 14:13:20.534 CET:     UDP src=0, dst=123

I will contact the AMS-IX NOC to perform some tests, as this might actually be a potential work-around at least.  However, I would agree that it smells of an IOS bug.

Regards,

Leland

erm.. well.. so much for that idea...

Actually, with the redirect to 239.1.2.3 in place, it sends to BOTH 224.0.1.1 and 239.1.2.3 (with 239.1.2.3 flagged as "access denied"). 224.0.1.1 is still sourced from port 0.

Back to the drawing board, but now it reeks of a bug rather than simply smelling like one.

Hello Leland,

Thank you for performing such a detailed analysis.

Yes, this definitely stinks. My personal guess is that this is an IOS bug - that would be confirmed if the issue disappeared after rebooting the box with the current configuration - though I believe that rebooting the device is not an option currently. Nevertheless, the erratic behavior of the source port and the outright inability to disable NTP strongly suggest that this behavior is not correct.

Do you perhaps have a TAC contract? If yes, you could raise a ticket directly from this thread.

Best regards,

Peter

In the first instance I will schedule a reload of the device and see what happens, failing which I will raise the issue.

Thanks for your assistance.

Leland

Hello Leland,

You are welcome. Please keep me posted about the evolution of this issue - I would be very interested.

Best regards,

Peter

well... the nice folks at AMS-IX confirmed that they are still seeing the traffic for BOTH multicast groups when attempting the "work-around".

Interestingly, they have now said that the router has today also started sending out IPv6 multicast listener reports... so I think this device is smoking some wacky weed or something -- (perhaps a side effect of the fact that it's connected to Amsterdam... )

Reload scheduled for later today, so we'll see what that does.

L.

Reload completed, and ip packet detail debugging is now showing no sign of this offending traffic.

So... definitely software related -- awaiting confirmation from AMS-IX that they no longer see the offending traffic either.

L.

Well.. this issue is now cropping up on a few other routers, and it appears to start just "all of a sudden". Yet again, all attempts to suppress it, or even disable NTP altogether, have no effect.

-- wandering off to kick the TAC again...
