03-02-2010 01:59 PM - edited 03-06-2019 09:57 AM
I have a couple of core 6509s that link to a couple of distribution 6509s with individual L3 /30 Gigabit Ethernet connections.
Core-A connects to Dist-A and Dist-B, and Core-B connects to Dist-A and Dist-B -- typical redundant Cisco Core/Distribution/Access architecture.
The /30 L3 gig uplinks are being over-utilized at times causing packet drops. So I need more uplink bandwidth.
I can easily add another L3 /30 gig connection from each Core 6509 to each Dist 6509, or I can just as easily add the connection and bundle the two links into an L3 /30 etherchannel. Which would give me the best overall throughput? We're running EIGRP. Is there any advantage one way or the other?
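For reference, the two options can be sketched roughly as follows (interface numbers, channel-group number, and addresses are hypothetical, not taken from the poster's network):

Option 1 - a second, independent L3 /30 link:

interface GigabitEthernet2/1
 description Second L3 uplink Core-A to Dist-A
 no switchport
 ip address 10.0.0.5 255.255.255.252

Option 2 - the two links bundled into one L3 etherchannel (on the 6500, the member ports must be made routed with "no switchport" before joining the channel):

interface Port-channel1
 no switchport
 ip address 10.0.0.1 255.255.255.252
!
interface range GigabitEthernet1/1, GigabitEthernet2/1
 no switchport
 no ip address
 channel-group 1 mode active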
03-02-2010 02:07 PM
jkeeffe wrote:
I have a couple of core 6509s that link to a couple of distribution 6509s with individual L3 /30 Gigabit Ethernet connections.
Core-A connects to Dist-A and Dist-B, and Core-B connects to Dist-A and Dist-B -- typical redundant Cisco Core/Distribution/Access architecture.
The /30 L3 gig uplinks are being over-utilized at times causing packet drops. So I need more uplink bandwidth.
I can easily add another L3 /30 gig connection from each Core 6509 to each Dist 6509, or I can just as easily add the connection and bundle the two links into an L3 /30 etherchannel. Which would give me the best overall throughput? We're running EIGRP. Is there any advantage one way or the other?
If you are seeing equal-cost paths on the two uplinks in the routing table, then using an L3 etherchannel won't give you any more throughput as far as I can see. So it's really just a question of which one you prefer to use.
That is, as long as you are seeing equal-cost paths presently.
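You can verify that with something like the following (the prefix and host addresses are placeholders):

show ip route 10.1.1.0
show ip cef exact-route 10.1.1.10 10.2.2.20

The first should list both /30 next hops for the same prefix; the second shows which of the two paths CEF picks for a given source/destination pair. EIGRP installs up to four equal-cost paths by default (the "maximum-paths" setting under the EIGRP process).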
Jon
03-02-2010 02:10 PM
Hello Jkeeffe,
Adding the new uplinks as separate L3 point-to-point interfaces has the advantage that you are not touching the existing links.
About load-balancing capabilities:
etherchannel hashing uses an XOR of the IP source address (SA) and destination address (DA);
CEF flow-based load sharing uses an XOR of IP SA, IP DA, and a hash seed that changes only at the next reload;
both are flow-based, not per-packet.
I would say you can expect similar results, with one difference:
separate links carry their own EIGRP hello messages and, potentially, their own BFD messages;
with an L3 etherchannel, EIGRP messages can travel over a single member link on each side (they may be hashed like user traffic), so unless you use LACP or PAgP you cannot detect a single member-link failure quickly.
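If you do go the etherchannel route, running LACP on the bundle and choosing an IP-based hash addresses both points above (interface and channel-group numbers are hypothetical):

port-channel load-balance src-dst-ip
!
interface range GigabitEthernet1/1, GigabitEthernet2/1
 no switchport
 channel-group 1 mode active

"mode active" makes the bundle negotiate with LACP, so a failed member link is detected and removed from the channel quickly; "port-channel load-balance src-dst-ip" selects the SA/DA XOR hash mentioned above (this is a global setting on the 6500 and affects all channels on the box).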
That said, you will find several colleagues who prefer an L3 etherchannel over two parallel L3 links.
So it is more a question of choice.
Hope to help
Giuseppe