AV devices using mDNS not working between switches in an SDA Fabric

Hello,

We have a Cisco SDA fabric with 9300 edge switches and a single 9500 border node, with DNAC version 2.3.3.6-70045. The fabric in question contains a VN with native ASM multicast enabled in the overlay and an IP pool with Layer 2 flooding turned off. The underlay network has been provisioned with LAN automation and has multicast enabled.

Our AV team have been working with us to test some Dante audio devices that use mDNS so that a PC connected to the same IP pool can discover audio devices on the network. We have found that, with Layer 2 flooding disabled on the IP pool, discovery only works between devices on the same switch stack; with Layer 2 flooding enabled, discovery works. Packet captures suggest that the mDNS traffic does not make it between the switches. They also suggest that the audio receivers are not sending IGMP joins for the mDNS group (224.0.0.251), which likewise does not appear in 'show ip igmp snooping groups' on the switches. We initially thought it was a TTL problem causing the multicast packets to be dropped at the first hop; however, the captures show the mDNS traffic being sent with a TTL of 255.
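
One detail that may be relevant: 224.0.0.251 sits in the link-local block 224.0.0.0/24, which routers never forward regardless of TTL, and which IGMP snooping switches are expected to flood at Layer 2 rather than build group state for (RFC 4541). That would be consistent with the missing IGMP joins and with Layer 2 flooding being what makes discovery work. For anyone reproducing this, an embedded packet capture on an edge port is a quick way to confirm what the switch actually sees; the interface name below is just an example:

monitor capture MDNS interface GigabitEthernet1/0/1 in
monitor capture MDNS match ipv4 any host 224.0.0.251
monitor capture MDNS start
! run a discovery from the PC, then:
monitor capture MDNS stop
show monitor capture MDNS buffer brief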

Does anyone have any experience with mDNS on SDA fabrics? I'm not too familiar with the protocol; is it treated the same as other multicast traffic, or are there specific things that need to be configured for it to function?


Thanks,

James

21 Replies

Hi, did you get anywhere with this? We're seeing similar things with separate master clocks. Weirdly, audio won't pass unless the device is first connected to one of the floors and then moved back to the second network.

From a design perspective, no.

The workaround we went with was Dante Domain Manager (DDM), which gets rid of the election process and allows you to set a master clock.

This is now our AV design for any site that requires a master clock outside of an FiaB solution.

Thanks for the update. Are you using PTPv2 with DDM?

From what I remember, some devices were using PTPv1 and some PTPv2; I'm not sure how the DDM was configured.

I believe it was PTPv1 that was causing the issues: its multicast is sent with TTL=1, so the fabric just killed it at the gateway.
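
If anyone needs to confirm which PTP version their devices are sending, a capture on the PTPv1 default group will show the TTL in the IP header; the port name here is just an example:

monitor capture PTP interface GigabitEthernet1/0/2 in
monitor capture PTP match ipv4 any host 224.0.1.129
monitor capture PTP start
! wait for a few announce intervals, then:
monitor capture PTP stop
show monitor capture PTP buffer detailed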

Hi all!
We resolved it by applying a workaround suggested by Cisco TAC.
On each edge switch involved in Dante:

ip access-list extended DENY_PTP
! PTPv1 multicast groups: default domain 224.0.1.129, alternate domains 224.0.1.130-132
deny ip any host 224.0.1.129
deny ip any host 224.0.1.130
deny ip any host 224.0.1.131
deny ip any host 224.0.1.132
permit ip any any

! Groups denied by the ACL are excluded from multicast routing in this VRF
ip multicast vrf XXXXXXXX group-range DENY_PTP

I understand that the problem with multicast and TTL=1 (as with PTPv1) should be fixed in upcoming releases.
But in the meantime this workaround is working: only one master clock, even across different edges.
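
A couple of checks that should confirm the workaround is in place (VRF name masked as above):

show ip access-lists DENY_PTP
! the denied groups should no longer create state in the VN's mroute table
show ip mroute vrf XXXXXXXX 224.0.1.129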

We've got another office reporting the multiple-master-clock issue, but this one is a single stack (FiaB).

I'm going to try applying the above before they buy a server and licensing for DDM.

Thanks for the update, Stefano7k.

Ignore the above; the issues were being caused by Q-SYS Core firmware. The issue is still only seen across multiple stacks.