Large # routes in table after 6500>7k core migration

Hi everyone,

We recently migrated our L3 6500 core switches (two at data center A, two at DC-B); the sites are interconnected with DWDM links. Everything is Layer 3, no Layer 2.

The migration to Nexus 7Ks went very smoothly, as expected, but for some reason we are now seeing roughly six times as many routes in the core routing tables under NX-OS. We are unsure whether this is related to how NX-OS summarizes or counts blocks of routes. Basically we went from 662 routes to almost 4,000. We have checked route filtering and everything seems to be in place (we use a distribute-list under the interfaces via EIGRP, which calls a route-map that uses prefix lists to permit/deny routes).

One thing that really sticks out is that NX-OS now sees over 900 /32 host routes, which seems very strange to us. Appreciate any feedback or opinions.
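For reference, the filtering arrangement described above would look roughly like this on NX-OS (the interface, route-map, and prefix-list names here are placeholders, not our actual config, and exact syntax can vary by release):

```
! Inbound EIGRP filter applied per interface, as described above
interface Ethernet1/1
  ip distribute-list eigrp 90 route-map EIGRP-FILTER-IN in

! Route-map that calls a prefix list
route-map EIGRP-FILTER-IN permit 10
  match ip address prefix-list EIGRP-ALLOWED

! Example prefix list: deny all /32 host routes, allow everything else
ip prefix-list EIGRP-ALLOWED seq 5 deny 0.0.0.0/0 ge 32
ip prefix-list EIGRP-ALLOWED seq 10 permit 0.0.0.0/0 le 32
```

The last two prefix-list lines show one way to catch stray /32s: "0.0.0.0/0 ge 32" matches any prefix of length 32, and the final "le 32" entry permits all remaining prefixes.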

Here is the BEFORE show ip route summary (on the 6500 cores):

Route Source    Networks    Subnets     Overhead    Memory (bytes)
connected       0           15          1080        2160
static          1           2           216         432
eigrp 90        343         3353        281088      552664
internal        318                                 695784
Total           662         3370        282384      1251040

Here is the show ip route summary AFTER (on the 7K running NX-OS):

IP Route Table for VRF "default"
Total number of routes: 3756
Total number of paths:  3970

Best paths per protocol:      Backup paths per protocol:
  am             : 12           static         : 1
  local          : 14           eigrp-90       : 2
  direct         : 14
  static         : 3
  broadcast      : 27
  eigrp-90       : 3897

Number of routes per mask-length:
  /0 : 1       /8 : 1       /15: 1       /16: 40      /17: 2
  /18: 1       /19: 1       /20: 30      /21: 33      /22: 94
  /23: 276     /24: 600     /25: 264     /26: 310     /27: 399
  /28: 131     /29: 84      /30: 574     /31: 3       /32: 911

CCIE RS 34827
1 ACCEPTED SOLUTION

Engager
Hi Dan,

I think you may have misread the route count on the Catalyst 6500. The figure to compare against the Nexus 7000 total is the "Subnets" column, where you have 3370 routes on the Catalyst switches.

Regards


2 REPLIES
Ah, OK, that makes things a lot closer. Comparing against the Subnets total, we went from 3370 routes to 3756, so we are really only seeing a few hundred more. Even so, the increase pushed a few of our 3750 L3 switches over their route-table limit until we updated the SDM template to allocate more memory for L3 resources.
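For anyone else who hits the same limit, the SDM change on the 3750s was along these lines (a reload is required for the new template to take effect):

```
3750# show sdm prefer              ! view the current template and table sizes
3750# configure terminal
3750(config)# sdm prefer routing   ! template that favors unicast routing entries
3750(config)# end
3750# reload
```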

Still, a few hundred extra routes seems like quite a bit. I wonder whether it's enough to be a concern, or whether we should keep searching for the cause of the increase.

I'm going to check the filtering again and review the routes coming in on every interface to see if I can determine why the increase happened.
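A few NX-OS show commands that should help narrow down where the extra routes, especially the /32s, are coming from (assuming the default VRF and EIGRP AS 90 from the outputs above; exact keywords can vary by release):

```
show ip route | include /32     ! list the /32 entries in the routing table
show ip eigrp topology          ! see which prefixes EIGRP learned, and from where
show ip eigrp neighbors         ! confirm the expected set of EIGRP peers
```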

Thanks for the quick reply.

Dan

CCIE RS 34827
