
Nexus 7K F2e module mac address limits

eagles-nest
Level 1

Hi

The Cisco docs say that the F2e card supports 16K MAC addresses per SoC, and with 12 SoCs per linecard that implies a 192K MAC address limit per linecard.

Reading further, it says that MAC address tables are synchronised across ports in the same VDC.  Does that mean that if I only use one VDC the limit of MAC addresses across all linecards is 16K in total, or, as I have read elsewhere, does limiting the VLANs on each port also reduce the MAC addresses held on each card?

So if on card 1 I have a vPC carrying VLANs 10, 20 and 30, on another port I have VLANs 10, 40 and 50, and on a port on a further linecard I have VLANs 60, 70 and 80, which MAC addresses end up where?

Are all MAC addresses from VLANs 10 - 80 on all cards, or do the relevant card and its SoC only hold the MAC addresses for the VLANs that exist on that card's trunks?

Would the second SoC on the first linecard see the VLAN 10, 20 and 30 MAC addresses as well as those for 40 and 50?

Would the second linecard see a copy of the MAC addresses in VLANs 10 to 50 if those VLANs are only trunked on ports on the first linecard?

So, in short, are all MACs copied to all SoCs in the same VDC, or only to the SoCs with shared VLANs?

Many thanks for any replies.

St.


12 Replies

Seb Rupik
VIP Alumni

Hi there,

We have hit the 16K MAC limit in a production campus environment; the Cisco docs and TAC have been pretty poor at describing why!

From what we have seen, when an F2-only VDC is configured to operate at L3 and has SVIs present, the MAC address limit is at its worst.

Each sequential group of 4 ports (a port-group) shares the same SoC. Any MACs which appear on a VLAN are stored in the respective SoC's MAC table. The Nexus implements MAC-sync, so if a VLAN appears on two different SoCs they will sync their MAC table entries for that VLAN. The worst-case scenario is vPC links which trunk multiple VLANs, where the 16K limit is quickly hit.
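
(If it helps to see which SoC serves which front-panel ports, something along these lines can be run on the linecard itself; command availability varies by module and NX-OS release, and the module number is just an example:

attach module 1
show hardware internal dev-port-map

The output maps each front port to its forwarding engine instance, which is the same instance number reported by the utilization commands further down this thread.)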

There are two methods to mitigate this limitation.

  1. Implement the F2 VDC as Layer 2 aggregation only (this is what it was designed for in the DC). Use vPC links to an M-series VDC which can handle the Layer 3 routing. Finally, configure 'conversational MAC learning'; this will further reduce the impact of MAC-sync (a config sketch follows this list).
  2. F2e cards can be combined with M-series cards. This is a good position, as when combined with an M card the F2e operates at Layer 2 only and the larger MAC tables on the M card (128K entries) are used instead. This also has the benefit of using the fabric backplane between the linecards, as opposed to the vPC link in the above solution, which is a potential bottleneck (160Gbps across two Nexus chassis - 8 x 10Gb fibres per chassis).
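
As a rough sketch of the conversational MAC learning piece of option 1 (the VLAN range is a placeholder; check the exact syntax against your NX-OS release):

mac address-table learning-mode conversational vlan 10-50
show mac address-table learning-mode

The second command simply reports which VLANs are set to conversational rather than the default learning mode.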

In short, the Nexus is not a great campus solution!

cheers,

Seb.

Many thanks Seb

You say "The Nexus implements MAC-sync, so if a VLAN appears on two different SOCs they will sync their MAC table entries for that VLAN."

Are you saying that MACs are only sync'd between SoCs that share VLANs?  So if VLANs are distributed around the SoCs and rarely appear on multiple SoCs, then the replication of MACs will be minimised?

So if on linecard 1 port 1 we have a trunk with VLANs A, B and C, and on the same linecard on port 48 we have a trunk with VLANs A, D and E, then only MACs on VLAN A are sync'd across the SoCs?

Thanks, Stuart.

Hi Stuart,

Your summary is correct. The trick is to group your VLANs onto the same 4-port group, which is served by the same SoC.

With an empty line card this should be easy with good planning, but those 48 ports will soon fill up!

The commands to check your SoC utilization per linecard:

attach module x

show hardware internal forwarding l2 table utilization instance all
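
As a sketch of the grouping idea itself (interface ranges and VLAN numbers are placeholders), the aim is simply to prune each trunk's allowed list so that a given VLAN only appears on ports served by one SoC:

interface ethernet 1/1-4
  switchport trunk allowed vlan 10,20,30
interface ethernet 1/5-8
  switchport trunk allowed vlan 40,50,60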

cheers,

Seb.

Seb

attach module x

show hardware internal forwarding l2 table utilization instance all

That command is very useful, but I found that when I upgraded to 6.1.4a the command is no longer valid.  I've reverted to 6.1.2 in the meantime to do some testing, because I like the output format from that command.  No doubt there is a similar command with similar output in 6.1.4a, but I've not found it yet.

What I've found is that I have a few cards with many vPCs on them.  Each vPC is a trunk with VLANs limited to very specific ranges, and they don't tend to duplicate across vPCs apart from one VLAN.  So vPC X on SoC 1 may have VLANs 10, 20 and 30, and vPC Y on SoC 5 may have VLANs 10, 40 and 50.

What I have confirmed in testing is that any MAC address from VLANs 20 to 50 appears in the count for ALL SoCs on all cards.  So limiting VLANs to a SoC does not seem to limit the MAC replication.
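
(For reference, the software MAC table totals can also be checked with something along these lines; exact syntax and output vary by NX-OS release:

show mac address-table count
show mac address-table count vlan 20

The first gives the overall dynamic/static totals for the VDC; the second restricts the count to a single VLAN.)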

I found something on conversational MAC learning that may help the situation, but I believe that requires FabricPath to operate on Ethernet VLANs.

Do you have any knowledge of conversational MAC learning / FabricPath?

Thanks again, Stuart.

Hi Stuart,

I finally got a response back from Cisco, and the piece of information that was most relevant was:

We synchronize MAC addresses across all SOCs in a VDC or switch that have a VLAN with a SVI defined, regardless if the SOC carries the VLAN or not, with one exception of a VLAN with no SVI.

...which sort of explains the numbers on the SOCs better than what I first proposed in my original post.

With regard to conversational MAC learning: this solution was suggested by the DC CCIEs at our supplier. When we tried it, our F2 was still operating at L3, so it gave unexpectedly poor results.

We're currently in the process of creating an M1/M2 VDC for Layer 3. Eventually we'll get round to testing the F2 at L3 again...

Be good to hear what results you get.

cheers,

Seb.

Thanks Seb

Sorry for the late reply.  I've been on holiday.

I found similar.  Any MAC on any SoC is replicated across all cards unless we deploy VDCs, which isn't really an option for me.  I hear there are F3 cards due out with significantly larger MAC tables that we may look to use if the MAC limit is a problem.  Conversational MAC learning didn't help.  A strangely low MAC table size for such a high-end switch.

Stuart.

You can use this command:

show hardware capacity forwarding

You will get a lot of output, but this is the section you want:

L2 Forwarding Resources
-------------------------
  L2 entries: Module inst   total    used   mcast   ucast   lines   lines_full
------------------------------------------------------------------------------
              3         0   16384    1183      38   10145     512            0
              3         1   16384    1308      16    1292     512            0
              3         2   16384      11       0      11     512            0
              3         3   16384      11       0      11     512            0
              3         4   16384      11       0      11     512            0
              3         5   16384      11       0      11     512            0
              3         6   16384      11       0      11     512            0
              3         7   16384      11       0      11     512            0
              3         8   16384      11       0      11     512            0
              3         9   16384      61       0      61     512            0
              3        10   16384    2380      20   10360     512            0
              3        11   16384    3380      20   10360     512            0
Did you get any information on a similar command to see MAC table utilisation on 6.1.4?

Thx

Hubert

Hi Seb,

Cisco documentation with regard to classical Ethernet and dynamic MAC address learning is rather sparse. Most of it is limited to "you can configure it and here is how". We're facing similar issues and are considering implementing this too. Do you have any experience with this, especially with regard to the impact on existing vPCs?

cheers!

eagles-nest
Level 1

Thanks again Seb.

For the size of chassis this is, it seems quite a restriction and one that needs to be carefully planned around. To be honest, I've never been concerned about MAC capacity on large switches before; I can't think of a situation when I've been close to a chassis's capacity. It has been quite a surprise to find this in a very high-end chassis like a 7K.

St.



carlvarg
Cisco Employee

The following link is very useful:

http://www.cisco.com/c/en/us/td/docs/switches/datacenter/sw/best_practices/cli_mgmt_guide/cli_mgmt_bp/hw_resources.html

 

mchaudhr1
Level 1

We faced the same problem. We noticed that when we tried to create an SVI it wouldn't initialize; we checked the log and got these errors:

2018 Apr 11 23:52:51 NX7010-1 %L2MCAST-SLOT7-2-L2MCAST_MAC_FULL_LC: Failed to insert entry in MAC table for FE 5 swidx 20 (0x14) with err (mac table full). To avoid possible multicast traffic loss, disable OMF. Use the configuration CLI: "no ip igmp snooping optimise-multicast-flood"

 

NX7010-1 %L2FM-1-L2FM_LINE_FULL_CONDITION: Error: 0x412b002b Failed to insert MAC in slot 7 due to line full condition

2018 Apr 12 00:33:09 NX7010-1 %L2FM-1-L2FM_LINE_FULL_CONDITION: Error: 0x412b002b Failed to insert MAC in slot 1 due to line full condition

2018 Apr 12 00:33:09 NX7010-1 %L2FM-1-L2FM_LINE_FULL_CONDITION: Error: 0x412b002b Failed to insert MAC in slot 8 due to line full condition

2018 Apr 12 00:33:09 NX7010-1 %L2FM-1-L2FM_LINE_FULL_CONDITION: Error: 0x412b002b Failed to insert MAC in slot 2 due to line full condition

 

What we found out was that there were 25K+ MACs and over 24K of them were VMware. We picked one MAC address at random and it showed up 176 times on both agg 7K switches, across all VLANs. This was caused by a VMware feature called the "DVS health check" feature, which was turned on. Once the feature was turned off by the VMware team, the MAC address table reduced back to around 4,000 entries from 25,000+.
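
(In case it helps anyone else chasing the same pattern, a single suspect MAC can be checked across VLANs with something like the following; the address is just a placeholder for one of the VMware OUIs:

show mac address-table address 0050.56ab.cdef
show mac address-table address 0050.56ab.cdef | count

If the same MAC shows up once per VLAN per uplink, the health check feature is a likely culprit.)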

 

This is what this feature does.

Here's the calculation from VMware (the article is for vSphere 5.5, but seems accurate for 6.0 and 6.5 as well; we're running 6.5):

 

  • There are some scaling limitations to the network health check. The distributed switch network health check generates one MAC address for each uplink on a distributed switch, for each VLAN, multiplied by the number of hosts in the distributed switch, all of which are added to the upstream physical switch MAC table. For example, for a DVS with 2 uplinks and 35 VLANs across 60 hosts, the calculation is 2 * 35 * 60 = 4200 MAC table entries on the upstream physical switch.

 

https://kb.vmware.com/kb/2034795
