1232 Views | 0 Helpful | 9 Replies

Network Best Practice

soginy
Level 1

Hi, just looking for some opinions on a subject for research purposes. Is there any advantage to sharing load across multiple links if using a single link won't result in congestion? I have carried out a test using a traffic generator: I sent 100% load on a link and this did not result in any performance degradation. So is there any network best practice that says you should not use up the capacity on a link, or is there any disadvantage to doing so?

9 Replies

balaji.bandi
Hall of Fame

In short:

1. For resilience: you have two links so you can load-balance across them.

2. Second, you can do traffic engineering between the two links.

3. With a single link, the failure risk is high (so prefer dual links if the business cannot tolerate downtime).

 

 

BB

***** Rate All Helpful Responses *****

How to Ask The Cisco Community for Help

Thank you for the quick response. I understand the advantage of having multiple links. The question is: is there any advantage to sending 100% of traffic on only one link versus sharing the load equally, 50/50, across both links?

Take an example:

If you have a 1 Gb link now and you need to increase to 2 Gb, you can use 2 x 1 Gb: add another link and bundle the two to load-share and get 2 Gb of capacity.

If you do not have a requirement for more than a 1 Gb link, and you do not want link resilience, then I guess you would not need it here.

 

Hope this makes sense? (In today's world we look more at high availability, except where the end device does not support dual links, like a laptop.)

 

BB

***** Rate All Helpful Responses *****

How to Ask The Cisco Community for Help

If you would not exceed 100% of one link then, excluding redundancy, there's little (although some) advantage in splitting traffic 50/50.

As an example of such a very slight advantage, consider having ten 100 Mbps ingress links and a gig egress link. It's possible for ingress traffic on the different ingress links to arrive at exactly the same time. With one egress link, if more than one ingress frame/packet arrives at the same time, all but one of the received frames/packets will be delayed. With a second link, in theory, you could transmit two of the received frames/packets at exactly the same time.
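
To put rough numbers on that, below is a minimal back-of-the-envelope sketch in Python (my own illustration, assuming 1500-byte frames all arriving at the same instant; nothing vendor-specific) of ten simultaneous arrivals being serialized onto one gig egress link versus two:

```python
# Sketch (illustrative assumptions: 1500-byte frames, all arriving at
# t=0): how long each frame waits for the wire with 1 vs. 2 egress links.

FRAME_BITS = 1500 * 8              # one full-size Ethernet frame
EGRESS_BPS = 1_000_000_000         # gig egress link
TX_TIME = FRAME_BITS / EGRESS_BPS  # ~12 microseconds to serialize a frame

def queueing_delays(num_frames: int, num_links: int) -> list[float]:
    """Seconds each frame waits before its transmission starts, filling
    links round-robin; frame i waits behind i // num_links others."""
    return [(i // num_links) * TX_TIME for i in range(num_frames)]

print([f"{d * 1e6:.0f}us" for d in queueing_delays(10, 1)])
# ['0us', '12us', ..., '108us']  -> all but one frame is delayed
print([f"{d * 1e6:.0f}us" for d in queueing_delays(10, 2)])
# ['0us', '0us', '12us', '12us', ..., '48us']  -> worst case roughly halved
```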

Thanks for the reply. I thought the characteristic of non-blocking switches was that they are able to handle all ingress traffic simultaneously. I would have thought that if packets arrived at the same time on a non-blocking switch, it would have the capacity to transmit them simultaneously.

What you're referring to, I believe, is a non-blocking switch fabric, which can, in theory, carry all the ingress traffic to all the egress ports without needing to queue any packets. For example, a switch with four gig ports could simultaneously carry (ingress>egress) 1>2, 2>3, 3>4 and 4>1.

(NB: "non-blocking" may also refer to a switch not doing head-of-line blocking, but that should be true of almost any switch built this century. So, again, "non-blocking" today generally means the switch's fabric has the bandwidth to support all the switch's ports.)

That, though, is not what I described. I described ten 100 Mbps ingress ports going to one gig egress port. The egress port will not queue packets due to insufficient bandwidth, but it might due to the possibility of the ingress ports delivering their traffic at exactly the same time. Physically, although the gig port transmits 10x faster, it cannot transmit more than one frame at a time. It can, though, transmit ten received frames before any more arrive from the 100 Mbps ingress ports.
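
A quick arithmetic check of that last point (my numbers, assuming 1500-byte frames):

```python
# At 100 Mbps a 1500-byte frame takes 120 us just to *arrive*; at
# 1 Gbps it takes 12 us to *send*. So the gig egress can drain ten
# queued frames in the time one more frame trickles in per ingress port.

FRAME_BITS = 1500 * 8
arrival_time = FRAME_BITS / 100_000_000      # 120 us per ingress frame
drain_ten = 10 * FRAME_BITS / 1_000_000_000  # 120 us for ten egress frames
print(f"{arrival_time * 1e6:.0f} us vs {drain_ten * 1e6:.0f} us")  # 120 vs 120
```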

If this is still unclear, please let me know what is unclear and I'll try explaining in another way.

Leo Laohoo
Hall of Fame

@soginy wrote:

I have carried out a test using a traffic generator: I sent 100% load on a link and this did not result in any performance degradation.


If there is only one link and that link is congested, everyone will agree that performance will be impacted, guaranteed. I don't know what methodology was used, but I don't agree with the statement above.

Giuseppe Larosa
Hall of Fame

Hello Soginy,

 

>> I have carried out a test using a traffic generator: I sent 100% load on a link and this did not result in any performance degradation. So is there any network best practice that says you should not use up the capacity on a link, or is there any disadvantage to doing so?

 

Traffic generators transmit traffic at a constant rate; real traffic is bursty and comes from many different devices.

Have you ever heard of microbursts?

You have probably not reached exactly 100% load on the WAN link.

However, when you go near 100% load, before you start to see packet loss you can experience an increase in latency (delay) and in delay variation (jitter), which can be an impairment for some types of traffic, such as VoIP.
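
To illustrate that latency point with a toy model: the numbers below come from a simple M/M/1 queueing approximation (my assumption, not a measurement of any real router), using the ~12 us it takes to serialize a 1500-byte frame at 1 Gbps as the service time:

```python
# M/M/1 toy model: mean time in system = service_time / (1 - utilization).
# Shows delay (and hence jitter) climbing steeply near 100% load, well
# before the link actually drops packets.

SERVICE_US = 12.0  # ~1500-byte frame at 1 Gbps

for util in (0.50, 0.80, 0.90, 0.95, 0.99):
    wait_us = SERVICE_US / (1.0 - util)
    print(f"{util:4.0%} load -> ~{wait_us:7.1f} us average per-packet delay")
# 50% -> 24 us, 90% -> 120 us, 95% -> 240 us, 99% -> 1200 us
```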

 

Also, when using multiple links in parallel, devices are not able to split traffic 50%/50% because they work on flows.

A flow is defined by considering, at minimum, the source IP address and destination IP address.

Load balancing works by sending all packets of a single IP flow over the same link. As a result, traffic is split between the available links, but the distribution is not uniform.

To give an example (see the sketch below): if a database replication is started between two servers, the resulting flow will travel over a single link, and it can use more bandwidth and last much longer than other types of traffic.
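
Here is a minimal sketch of that per-flow behavior (the flows, addresses, and CRC32 hash are my stand-ins; real routers use their own hash functions and often include ports too):

```python
import zlib

# Per-flow load balancing: every packet of a flow hashes to the same
# link, so one long-lived "elephant" flow (e.g. a database replication)
# pins an entire link regardless of how busy that link already is.

def pick_link(src_ip: str, dst_ip: str, num_links: int = 2) -> int:
    """Choose an egress link from the (source, destination) pair."""
    return zlib.crc32(f"{src_ip}->{dst_ip}".encode()) % num_links

flows_mbps = {
    ("10.0.0.1", "10.0.1.1"): 900,  # hypothetical database replication
    ("10.0.0.2", "10.0.1.2"): 50,   # hypothetical small flows
    ("10.0.0.3", "10.0.1.3"): 50,
}
load = [0, 0]
for (src, dst), mbps in flows_mbps.items():
    load[pick_link(src, dst)] += mbps
print(load)  # nothing close to 500/500: the 900 Mbps flow stays on one link
```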

 

As explained by Balaji, one advantage of multiple links is that a single link fault cannot cause a total loss of connectivity.

 

Several testing methodologies have been developed over time:

 

RFC 2544

https://tools.ietf.org/html/rfc2544

 

TCP testing

https://tools.ietf.org/html/rfc6349

 

And others more focused on Metro Ethernet, like

https://en.wikipedia.org/wiki/Y.1564

 

Hope to help

Giuseppe

 

To clarify a point made by Giuseppe, and perhaps Leo too: you cannot have egress congestion unless the aggregate ingress bandwidth exceeds the egress bandwidth. As an example, my ten 100 Mbps Ethernet links cannot overdrive a gig egress.

In the case of two gig ingress to one gig egress, you will not have congestion if the ingress traffic is all CBR (constant bit rate) with the aggregate not exceeding the egress capacity.

What I believe both Giuseppe and Leo have in mind are "real world" networks where ingress bandwidth can exceed egress bandwidth and the traffic is VBR (variable bit rate). Even when the "average" VBR aggregate is less than the egress bandwidth, in the short term it's not uncommon for the egress interface to have "too much" traffic (i.e. a microburst). A real-world example would be all VoIP traffic using the G.711 codec (CBR) vs. VoIP traffic using the G.729 codec (VBR).
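
As a final illustration of CBR vs. VBR (made-up traffic values, just to show the mechanism): two sources that each average 400 Mbps never overrun a gig egress if they are CBR, but can momentarily overrun it if they are VBR, even though the long-term average still fits:

```python
import random

EGRESS_MBPS = 1000
random.seed(1)  # repeatable illustration

# Two CBR sources: a steady 400 Mbps each (800 aggregate, always fits).
cbr = [(400, 400) for _ in range(10)]
# Two VBR sources: 100 or 700 Mbps per interval (still 400 Mbps average).
vbr = [(random.choice((100, 700)), random.choice((100, 700)))
       for _ in range(10)]

for label, samples in (("CBR", cbr), ("VBR", vbr)):
    bursts = sum(1 for a, b in samples if a + b > EGRESS_MBPS)
    print(f"{label}: {bursts}/{len(samples)} intervals exceeded egress")
# CBR never bursts; VBR does whenever both sources hit 700 at once --
# that short-term overload is the microburst.
```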
