08-30-2019 09:44 AM
Hi, I'm just looking for some opinions on a subject for research purposes. Is there any advantage to sharing load across multiple links if using a single link won't result in congestion? I have carried out a test using a traffic generator: I sent 100% load on a link and this did not result in any performance degradation. So is there any network best practice that says you should not use up the full capacity of a link, or is there any disadvantage to doing so?
08-30-2019 09:48 AM
In short:
1. For resilience, you have two links and load balance across them.
2. Second, you can do traffic engineering between the two links.
3. With a single link, the risk from failure scenarios is high, so prefer dual links if the business cannot tolerate downtime.
08-30-2019 09:52 AM
Thank you for the quick response. I understand the advantage of having multiple links. The question is: is there any advantage to sending 100% of the traffic over only one link versus sharing the load 50/50 across both links?
08-30-2019 09:59 AM
Take an example:
If you have a 1 Gb link today and need to increase to 2 Gb, you can add another link (2 x 1 Gb), bundle the links, and load share across them to get 2 Gb.
If you do not need more than 1 Gb and do not want link resilience, then you would not require the second link, I guess.
Hope this makes sense? (These days we aim for high availability almost everywhere, except where the end device does not support dual links, like a laptop.)
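One caveat worth adding, as a rough sketch with made-up numbers (the link speeds below are assumptions, not from this thread): bundling 2 x 1 Gb gives 2 Gb of aggregate capacity across many flows, but any single flow stays on one member link, so it still tops out at 1 Gb.

    # Minimal sketch, with assumed (not measured) link speeds: a 2 x 1 Gb/s
    # bundle doubles aggregate capacity, but one flow stays on one member.

    MEMBER_LINKS_GBPS = [1.0, 1.0]                 # two 1 Gb/s members

    aggregate_gbps = sum(MEMBER_LINKS_GBPS)        # capacity across many flows
    single_flow_cap_gbps = max(MEMBER_LINKS_GBPS)  # one flow uses one member

    print(f"Aggregate bundle capacity:   {aggregate_gbps:.1f} Gb/s")
    print(f"Ceiling for any single flow: {single_flow_cap_gbps:.1f} Gb/s")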
08-31-2019 12:38 AM
Thanks for the reply. I thought the defining characteristic of a non-blocking switch was that it can handle all ingress traffic simultaneously. I would have thought that if packets arrived at the same time on a non-blocking switch, it would have the capacity to transmit them simultaneously.
08-31-2019 10:41 AM - edited 08-31-2019 10:42 AM
What you're referring to, I believe, is a non-blocking switch fabric, which can, in theory, carry all the ingress traffic to all the egress ports without needing to queue any packets. For example, if a switch has four gig ports, traffic can flow (ingress>egress) 1>2, 2>3, 3>4 and 4>1 simultaneously at full rate.
(NB: "non-blocking" may also refer to a switch that avoids head-of-line blocking, but that should be true of almost any switch built this century. So, today, "non-blocking" generally means the switch's fabric has the bandwidth to support all of the switch's ports at once.)
That, though, is not what I described. I described ten 100 Mbps ingress ports going to one gig egress port. The egress port will not queue packets due to insufficient bandwidth, but it might queue them because the ingress ports can deliver their traffic at exactly the same time. Physically, although the gig port transmits 10x faster, it cannot transmit more than one frame at a time. It can, though, transmit all ten received frames before any more arrive from the 100 Mbps ingress ports.
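To put rough numbers on that (the frame size below is an illustrative assumption; the port speeds are the ones from the scenario above):

    # Minimal sketch of the worst case above: ten 100 Mb/s ingress ports
    # each deliver one 1500-byte frame at the same instant to a single
    # 1 Gb/s egress port.

    FRAME_BITS = 1500 * 8
    INGRESS_BPS = 100e6   # each ingress port
    EGRESS_BPS = 1e9      # the one egress port
    N_FRAMES = 10         # one frame per ingress port, arriving together

    tx_egress = FRAME_BITS / EGRESS_BPS    # ~12 us to serialize one frame out
    tx_ingress = FRAME_BITS / INGRESS_BPS  # ~120 us per frame on a 100 Mb/s port

    worst_wait = (N_FRAMES - 1) * tx_egress  # last frame queues behind 9 others
    drain_time = N_FRAMES * tx_egress        # time to send all 10 back-to-back

    print(f"egress serialization per frame: {tx_egress * 1e6:.0f} us")
    print(f"worst-case queueing delay:      {worst_wait * 1e6:.0f} us")
    print(f"drain all ten frames:           {drain_time * 1e6:.0f} us")
    print(f"ingress inter-frame time:       {tx_ingress * 1e6:.0f} us")
    # drain_time equals the ingress inter-frame time, which is why the gig
    # port can send all ten frames before any port delivers another one.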
If this is still unclear, please let me know what is unclear and I'll try explaining in another way.
08-30-2019 06:07 PM - edited 08-30-2019 06:08 PM
@soginy wrote:
I have carried out a test using a traffic generator: I sent 100% load on a link and this did not result in any performance degradation.
If there is only one link and that link is congested, everyone will agree that performance will definitely be impacted. I don't know what methodology was used, but I don't agree with the statement above.
09-01-2019 09:11 AM
Hello Soginy,
>> I have carried out a test using a traffic generator: I sent 100% load on a link and this did not result in any performance degradation. So is there any network best practice that says you should not use up the full capacity of a link, or is there any disadvantage to doing so?
Traffic generators transmit traffic at a constant rate; real traffic is bursty and comes from many different devices.
Have you ever heard of microbursts?
You probably did not actually reach exactly 100% load on the WAN link.
However, as you get near 100% load, even before packet loss starts, you can experience an increase in latency (delay) and in delay variation (jitter), which can be an impairment for some types of traffic, like VoIP.
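As a rough illustration of that latency growth, here is a minimal sketch using the textbook M/M/1 queue as a stand-in for an interface. This model and its 12 us service time are assumptions for illustration; real, bursty traffic usually behaves worse.

    # Minimal sketch of why delay climbs before loss appears, using the
    # textbook M/M/1 queue as a rough stand-in for a router interface.
    # The 12 us service time is an assumption (roughly one 1500-byte
    # frame at 1 Gb/s).

    SERVICE_US = 12.0

    for load in (0.50, 0.80, 0.90, 0.95, 0.99):
        # Mean M/M/1 queueing delay: W_q = rho / (1 - rho) * service time
        wait_us = load / (1.0 - load) * SERVICE_US
        print(f"utilization {load:4.0%}: mean queueing delay ~ {wait_us:7.1f} us")

The point is the shape of the curve: the delay term rho / (1 - rho) grows without bound as utilization approaches 100%, long before the link drops a single packet.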
Also, when using multiple links in parallel, devices are not able to split traffic exactly 50/50, because they balance on flows.
A flow is defined by considering the IP source address and the IP destination address.
Load balancing sends all packets of a single IP flow over the same link. As a result, traffic is split between the available links, but the distribution is not uniform.
For example, if a database replication is started between two servers, the resulting flow travels over a single link and can use more bandwidth, and last much longer, than other types of traffic.
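Here is a minimal sketch of that behavior. The addresses, byte counts, and the CRC32 hash are all invented for illustration; real devices use their own hash functions and often include more header fields.

    # Minimal sketch of per-flow load sharing over two links: every packet
    # of a flow hashes (here on source/destination IP) to the same member
    # link, so one big flow skews the split.

    import zlib

    link_bytes = [0, 0]  # bytes carried per member link

    flows = {
        ("10.0.0.1", "10.0.1.1"): 50_000_000_000,  # database replication (elephant flow)
        ("10.0.0.2", "10.0.1.2"): 200_000_000,
        ("10.0.0.3", "10.0.1.3"): 150_000_000,
        ("10.0.0.4", "10.0.1.4"): 300_000_000,
    }

    for (src, dst), nbytes in flows.items():
        link = zlib.crc32(f"{src}->{dst}".encode()) % len(link_bytes)
        link_bytes[link] += nbytes

    total = sum(link_bytes)
    for i, nbytes in enumerate(link_bytes):
        print(f"link {i}: {100 * nbytes / total:5.1f}% of all bytes")

However the small flows hash, the replication flow dominates whichever link it lands on, so the split is nowhere near 50/50.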
As explained by Balaji, one advantage of multiple links is that a single link fault cannot cause a total loss of connectivity.
Various testing methodologies have been developed over time:
RFC 2544: https://tools.ietf.org/html/rfc2544
TCP throughput testing (RFC 6349): https://tools.ietf.org/html/rfc6349
And others more focused on Metro Ethernet, such as ITU-T Y.1564: https://en.wikipedia.org/wiki/Y.1564
Hope to help
Giuseppe