Has anyone tried to connect a large number of hosts and virtual machines to a Nexus 9k, specifically the EX and FX series?
The scalability guide lists a fairly small adjacency capacity (32k or 48k, shared between IPv4, IPv6, etc.).
I'm looking to see if anyone has had over 100k ARP or 100k IPv6 ND entries on one of these devices.
For example: a few hundred customers, each with their own SVI, and about 500-1000 adjacencies per SVI.
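As a rough back-of-envelope (the customer count and per-SVI figures below are just the ranges quoted above, not measurements):

```python
# Back-of-envelope total adjacency count for the scenario described above.
# All figures are illustrative assumptions from the post, not measured values.
customers = 300            # "a few hundred" customer SVIs
adj_per_svi = (500, 1000)  # stated per-SVI adjacency range

low = customers * adj_per_svi[0]
high = customers * adj_per_svi[1]
print(f"total adjacencies: {low:,} - {high:,}")  # 150,000 - 300,000
```

Either end of that range is well past the 32k/48k figures in the scalability guide, which is why the question matters.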
The Cisco Live docs suggest that the flex tiles on these platforms hold the adjacency info, so it seems likely they could support this scale, but I have not yet tested it.
If anyone has lab tested or real world tested this I'd like to hear it.
As a point of comparison, I have tested a 6509 (Sup720) with over 100k ARP/ND entries and it handles it, although the CPU gets pretty unhappy with the churn since it is rather slow. That platform does have a 1 million entry adjacency table, though, whereas none of the Nexus documentation, whitepapers, or Cisco Live presentations mention a total maximum adjacency count.
I have also tested this on the 4500 platform and some others, where the maximum seems to be 48k no matter what you do.
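To put the CPU-churn comment in perspective, here is a minimal sketch of the steady-state refresh load alone, assuming the common 4-hour (14400 s) IOS ARP timeout default; real churn from host moves and flaps is bursty and can be far higher:

```python
# Rough steady-state ARP refresh rate for a full adjacency table.
# Assumes a 4-hour ARP timeout (the common IOS default); the entry
# count is from the 6509 test described above.
entries = 100_000
arp_timeout_s = 4 * 60 * 60  # 14400 s

refreshes_per_sec = entries / arp_timeout_s
print(f"~{refreshes_per_sec:.1f} ARP refreshes/sec on average")
```

Even a single-digit average per second has to be punted to a slow control-plane CPU, and any topology event can multiply it.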
As far as I can see on my -FX model, the IP adj limit is:
N9K# show ip adjacency summary
IP AM Table - Adjacency Summary
SDB Limit : 174080
However, I would still not go over the documented scalability limits for ARP/ND, because there are so many things that could go wrong. For example: CoPP kicking in and dropping ARPs/NDs (affecting all your customers), hardware programming issues, and many others.
Yes, I noticed that too, and it's configurable:
<1-614400> Maximum number of ARP entries
*Default value is 174080
But that doesn't mean the hardware can do it. The maximum I've seen in the scalability guide is 64,000, and it doesn't mention whether that's shared between IPv4/IPv6 or whether the device can do 64k of each at the same time. The hardware (especially EX/FX) suggests it is certainly possible to go over this number, possibly into the 200k-each range.
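A quick comparison of the two numbers quoted in this thread (the guide's figure and the default SDB limit seen on the -FX box above) shows how much headroom the default implies, though of course that says nothing about what the ASIC is verified to handle:

```python
# Comparing the documented scalability figure with the default SDB limit
# observed on the -FX model. Both numbers are quoted from the thread.
documented_limit = 64_000   # scalability guide maximum
sdb_default = 174_080       # default from "show ip adjacency summary"

print(f"headroom factor: {sdb_default / documented_limit:.2f}x")  # 2.72x
```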
I was really wondering if anyone has actually tried to go over this number to see what happens. I will eventually test it myself, but I figured I'd reach out to the community as well.