
Tunnel Interface "bandwidth" setting on Endpoints with Different Speeds

I understand that when building a GRE tunnel interface, you need to apply the "bandwidth" command, set to the physical interface speed.  This is used for:

  • Routing protocols that operate over the tunnel interface, and take interface bandwidth into account (for example: OSPF)
  • "show interface" counters to show the correct data
  • remote monitoring tools to report correct utilization

 

I also understand that when using a LAG (aka "Etherchannel"), you need to specify the bandwidth of the combined rates of all active links.

 

However, it's unclear to me what setting to use when one end of the tunnel is at one rate (two 1G links) and the other end is at a different rate (two 10G links).  In this case, do I set both ends:

  • to the higher rate?
  • to the lower rate?
  • to their respective local rates?

 

Appreciate any insight anyone has.


5 Replies

Peter Paluch
Cisco Employee

Hello Weylin,

  • Routing protocols that operate over the tunnel interface, and take interface bandwidth into account (for example: OSPF)
  • "show interface" counters to show the correct data
  • remote monitoring tools to report correct utilization

Very nicely summed up!

I also understand that when using a LAG (aka "Etherchannel"), you need to specify the bandwidth of the combined rates of all active links.

This is somewhat debatable. On LAGs (EtherChannels, Port-channels), if you don't specify the bandwidth manually, its value will be automatically derived from the combined speed of all active physical member links. If you configure the bandwidth statically, the LAG will no longer reflect its true aggregate speed and will instead always use the value you have configured. This may or may not be what you desire; there is no universal answer to this.
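To illustrate the difference, here is a rough sketch in IOS terms (the Port-channel number and values are placeholders, not taken from this thread):

! Option A: no bandwidth statement - the value is derived from the active members,
! e.g. 2 x 1 Gbps members -> 2000000 kbps, and it tracks members going up or down.
interface Port-channel1
 no bandwidth
!
! Option B: pin the value statically - it then stays put even if a member link fails.
interface Port-channel1
 bandwidth 2000000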

However, it's unclear to me what setting to use when one end of the tunnel is at one rate (two 1G links) and the other end is at a different rate (two 10G links).  In this case, do I set both ends:

  • to the higher rate?
  • to the lower rate?
  • to their respective local rates?

As a basic rule, the bandwidth on an interface should be set to a realistic value expressing the real throughput you can attain between the endpoints reachable directly through this interface.
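To see why the configured value matters, consider how OSPF turns it into a cost; this is a hedged sketch, and the reference-bandwidth change below is only an illustration, not something this particular network necessarily needs:

! OSPF cost = reference bandwidth / interface bandwidth, clamped to a minimum of 1.
! With the IOS default reference bandwidth of 100 Mbps (100000 kbps):
!   bandwidth 100     (a common Tunnel default) -> cost = 100000 / 100     = 1000
!   bandwidth 2000000 (2 Gbps)                  -> cost = 100000 / 2000000 -> clamped to 1
! Raising the reference bandwidth keeps multi-gigabit interfaces distinguishable
! (the value is entered in Mbps, so 100000 below means a 100 Gbps reference):
router ospf 1
 auto-cost reference-bandwidth 100000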

With tunnels anchored at devices with different uplink speeds, it makes most sense to set both ends to the lower rate. Think of this: a tunnel interface, being just a virtual construct, does not have a "bandwidth capacity" per se; instead, its true capacity is limited, among other things, by the attainable bandwidth in the underlying physical connectivity between the tunnel endpoints. If one tunnel endpoint can carry at most 2 Gbps while the other can carry at most 20 Gbps, you are still limited to a maximum of 2 Gbps between these two particular tunnel endpoints - you cannot ever pass more than that; the physical connectivity does not allow it. So I would be looking at configuring the bandwidth on both ends to 2 Gbps.
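In configuration terms that would look roughly like this on both routers (a sketch only; the tunnel number, source interface and destination address are placeholders):

! bandwidth is entered in kilobits per second, so 2 Gbps = 2000000.
interface Tunnel0
 bandwidth 2000000
 tunnel source Port-channel1
 tunnel destination 198.51.100.1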

In addition, if the tunnel endpoints are using LAGs for mutual connectivity (you have said that one of them is using 2x1Gbps, the other 2x10Gbps), then it might actually be possible that the true bandwidth of their interconnection is only 1 Gbps. Consider this: the outer IP header of tunneled packets always contains the same SourceIP/DestinationIP combination. With the common src-dst-ip mechanism of load-balancing packets across physical LAG links, it is possible that all tunneled packets land on a single physical link, limiting the true tunnel throughput to just 1 Gbps. Different platforms differ in their intelligence and might be able to reach beyond the GRE header and tap into the inner Layer3/Layer4 headers to allow for more flexible load balancing, but this should not be taken as guaranteed.
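You can check, and on many platforms influence, the hashing method with commands along these lines (keywords and availability vary by platform, so treat them as assumptions to verify):

! Show the hash method currently in use for Port-channel member selection.
show etherchannel load-balance
! On many Catalyst switches the method is a global setting, for example:
port-channel load-balance src-dst-ip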

Please feel welcome to ask further!

Best regards,
Peter

 

Thanks Peter.  Interesting comment; I hadn't considered the rate limit applied to an individual flow.  However, in my case it's immaterial, since there is a >99.999% chance there will be a plethora of flows.

 

Given your response, that means 2G on all tunnel interfaces.

 

Two separate but related follow-up questions, more theoretical than the specific issue I need to resolve - but I'm also trying to craft the architecture document and defend it to my team, so these are still pertinent.

  1. How does the device know that the GRE is traversing a LAG?  Especially if there's a number of available paths, only some of which would go over a LAG?  The routing table would know this, but is a routing table lookup performed to determine this?
  2. Does the "tunnel interface automatically knows underlying hardware rate" if the two links were not actually LAG'd, but rather just ECMP?

 

weylin

Hi Weylin,

You are very much welcome.

As for looking at individual flows, it is probably useful to keep in mind that to a LAG, all GRE-tunneled traffic is a single flow, no matter how many internal flows it carries. This is because the outer IP header of the outgoing GRE-encapsulated traffic is always the same as far as addressing is concerned - the same source IP, the same destination IP for each and every outgoing packet. Such traffic would land on a single physical link on a LAG.

How does the device know that the GRE is traversing a LAG?  Especially if there's a number of available paths, only some of which would go over a LAG?  The routing table would know this, but is a routing table lookup performed to determine this?

Yes, it is. Essentially, forwarding a packet through a Tunnel interface means, literally, "prepend the tunnel headers to the existing packet, and perform a new routing lookup on the new resulting packet as if it was just received". So, after a packet is prepended with the outer IP header and a GRE header, the new packet is then routed based on the routing table contents, and if the path to the destination (in this case, to the tunnel tailend) resolves through a LAG, then the physical link will be chosen based on the current load-balancing scheme configured for the LAG.
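If you want to see which egress interface and physical link the encapsulated packets will actually take, you can look at the forwarding entries for the tunnel destination; the addresses below are placeholders and the exact output varies between platforms:

! Route and CEF entry toward the tunnel destination (placeholder address).
show ip route 198.51.100.1
show ip cef 198.51.100.1
! Which of several equal paths a particular outer source/destination pair hashes to.
show ip cef exact-route 192.0.2.10 198.51.100.1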

Does the "tunnel interface automatically knows underlying hardware rate" if the two links were not actually LAG'd, but rather just ECMP?

To the best of my knowledge, no - a tunnel interface never inherits the bandwidth value of the underlying physical infrastructure. You always have to set it manually to something sensible. By default, older IOSes set the bandwidth of a Tunnel interface to 9 Kbps; on a 15.5 IOS I've just tested, the default bandwidth of a Tunnel interface is 100 Kbps. Neither of them reflects the bandwidth of the physical egress interface the tunnel actually goes out of.
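A quick way to check and correct this on a given box (the BW figure in the comment simply mirrors the 100 Kbps default mentioned above):

! The BW field shows the configured or default value, never one inherited from the physical path.
show interfaces Tunnel0 | include BW
!   ... BW 100 Kbit/sec ...
! Override it manually with something sensible, in kbps:
interface Tunnel0
 bandwidth 2000000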

As always, feel welcome to ask further!

Best regards,
Peter

Just to clarify a point Peter has been making: all flows within a tunnel will be seen as one flow by Etherchannel or ECMP; multiple tunnels, though, would be seen as different flows and may be distributed across Etherchannel or ECMP links.

As Peter has also noted, generally you set a tunnel's bandwidth to the amount of bandwidth that particular tunnel might obtain as its maximum. Besides the tunnel's endpoint physical egress interface bandwidths, a tunnel's bandwidth might be constrained by something in the "middle" or the media. For example, you might reduce a tunnel's bandwidth to account for GRE (and IPSec) overhead, as the tunnel will never obtain the egress interface bandwidth.
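As a rough, hedged illustration of that overhead adjustment for plain GRE over IPv4 (IPsec would add considerably more):

! GRE over IPv4 adds roughly 24 bytes per packet (20-byte outer IP + 4-byte GRE header).
! For 1500-byte packets that is about 24/1500 = ~1.6% overhead, so a 2 Gbps path carries
! roughly 2000000 * (1 - 0.016) = about 1968000 kbps of tunnel payload.
interface Tunnel0
 bandwidth 1968000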

Thanks Peter and Joseph for your replies! I've grown very fond of the Cisco support community; I find it more helpful than 90% of the communities "out there," with discussions such as this one being a stellar example. I appreciate your time in reading my question, as well as the time it must have taken you to write up these kinds of in-depth replies.

Thank you very much!