xthuijs
Cisco Employee

Introduction

In this article we'll explain how DHCP relay works and how broadcasts are forwarded in IOS-XR, because the DHCP relay functionality is configured and handled differently in XR than in regular IOS.

Generally, broadcasts are not forwarded by routers, as they typically only live on the local subnet. However, it is sometimes required to forward these broadcasts, and this article details how that configuration works and what the limitations are.

 

Core Issue

Because broadcasts are handled locally on the subnet, a router will not forward these broadcasts but consume them locally. This presents a problem when a particular interface is servicing a subnet with clients that want to use DHCP to obtain their address information. At the time of writing, IOS-XR does not have a DHCP server, which means that clients cannot obtain their address from an XR router.

Even if the OS supported a DHCP server, as IOS does, you may want to centralize your DHCP servers in a designated area of your network, which would require you to forward the clients' DHCP requests to those servers.

 

IOS employs the concept of an ip helper-address to take these broadcasts and send them out as a directed broadcast or unicast to one or multiple servers.

 

Note that by default broadcasts are NOT forwarded by a router.

Also, only UDP broadcasts can be forwarded; DHCP (which is UDP based) can therefore benefit from this.

 

Resolution

 

Broadcast forwarding

 

In order to forward UDP broadcasts, IOS uses the ip helper-address configuration under the interface that is expected to receive the broadcasts to be forwarded. This applies to the UDP broadcast ports IOS forwards by default (BOOTP/DHCP, DNS, TFTP, NetBIOS and a few others); the list can be adjusted with ip forward-protocol udp.

 

fp7200-6(config-if)#ip helper-address ?
  A.B.C.D  IP destination address
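
If you need to forward a UDP port that is not in the IOS default list, you can add it globally with ip forward-protocol. A minimal sketch (the port number here is just an example, not taken from the setup in this article):

fp7200-6(config)#ip forward-protocol udp 5000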

 

In IOS-XR, DHCP broadcasts are handled and configured differently than other UDP-based broadcasts.

 

For IOS-XR to forward (non-DHCP) UDP broadcasts, the following configuration is required:

 

RP/0/RSP1/CPU0:A9K-BOTTOM(config)#forward-protocol udp ?
  <1-65535>    Port number

 

Some well-known port numbers are converted into their names, such as DNS and NBNS.

 

The interface configuration for IOS-XR looks like this:

 

RP/0/RSP1/CPU0:A9K-BOTTOM(config)#int g0/1/0/14
RP/0/RSP1/CPU0:A9K-BOTTOM(config-if)#ipv4 helper-address ?
  A.B.C.D  IPv4 destination address

 

Note again that this configuration will not forward DHCP packets.
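
Putting the two commands together, a rough sketch of forwarding a non-DHCP UDP broadcast. The port (37, time service) is just a placeholder, the interface and helper address are simply reused from the lab setup for illustration, and exact forward-protocol behavior for a given port can vary by release:

forward-protocol udp 37
!
interface GigabitEthernet0/1/0/14
 ipv4 helper-address 40.1.1.1
!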

 

DHCP

There are two key ways to handle DHCP: one is DHCP RELAY and the other is DHCP PROXY.

 

Relay has been supported for a long time. XR 4.2.1 adds the full capability of proxy (in combination with IP subscribers).

 

Configuration, in its simplest form:

 

dhcp ipv4
profile MYGROUP relay
  helper-address vrf default 40.1.1.1
  helper-address vrf default 55.1.1.2
!
interface GigabitEthernet0/1/0/14 relay profile MYGROUP
!

 

There is no further interface-specific configuration required: as you're used to from IOS-XR, all protocol-specific configuration is done under the protocol definition and not split across different sections.
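
Once the profile is attached to the interface, relay operation can be verified from the relay statistics; a quick sketch (exact counters and output format vary by release):

RP/0/RSP1/CPU0:A9K-BOTTOM#show dhcp ipv4 relay statistics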

Relay

Relay is the "dumber" of the two in its handling of DHCP broadcasts: we effectively take the DHCP broadcast messages, convert them into unicasts destined to the configured helper addresses, and send the messages back and forth; in effect we are relaying them between the client and the helper addresses. We maintain no state pertaining to DHCP, as we are simply passing the messages across. The return packets from the helpers are sent back as broadcasts onto the subnet we are serving.

 

In DHCP relay mode (or broadcast forwarding, for that matter) packets are sent to all helper addresses.

 

In XR DHCP relay prior to XR 4.1, as described by CSCto39520, we only forward one offer back to the client if we receive multiple offers from the various helper addresses. While this still provides redundancy, in certain cases it is not desirable to have only one offer handed to the client.

The rationale in XR was that we maintain temporary state that links the request ID to the interface; once the first offer comes in, that state is removed, which prevents the forwarding of subsequent offers.

Normally this state and interface linkage is not needed, because we could map the giaddr back to the interface and use that; but in the scenario where we are unnumbered to a loopback serving multiple interfaces, this state is useful so that we send the offer out only on the interested interface.

 

Post XR 4.1 we'll leave that state in place for 1 minute so all offers received in that time period are forwarded.

Memory is not an issue; we could theoretically serve 1M requests per minute.

However things are bound by the LPTS punt policer and the process prioritization.

 

For some actual numbers: DHCPv4 relay over BVI can accommodate 200K sessions, and the qualified CPS rate is 150 on RSP440 and 250 on RSP880. DHCPv4 relay itself is stateless and no sessions are created.

 

The 200K limitation is enforced because of adjacency creation, as the adjacency is built through ARP learning for the end user while gateway IP addresses are resolved.

 

Consider 250 CPS (calls per second). Since the ASR9K is acting as relay, there are 4 message transactions on the access side (Discover, Offer, Request, Ack) and another 4 on the server side.

So, overall, 8 message transactions per session.

That means 250*8 = 2,000 message transactions per second, or 250*8*60 = 120K message transactions per minute.

 

 

Currently, DHCP replies are sent as broadcast to the client; after enabling the CLI "broadcast-flag policy check", the relay sends a unicast reply to the client (when the client has not set the broadcast flag in its request).
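
A sketch of where this knob is configured, assuming it sits under the relay profile (MYGROUP from the earlier example) as in current releases:

dhcp ipv4
 profile MYGROUP relay
  broadcast-flag policy check
 !
!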

 

Proxy

In DHCP PROXY mode we do maintain state, as we are handling the DHCP request on behalf of the requesting client. To the DHCP servers it looks as if we are the originator of the request. Proxy, for that matter, is stateful.

Proxy also stores the binding on the proxy agent locally, which relay does not.
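
For comparison, a proxy profile is configured the same way as the relay profile shown earlier, just with the proxy keyword. A minimal sketch; the profile name and giaddr value are illustrative assumptions, so check your release for the exact options:

dhcp ipv4
 profile MYPROXY proxy
  helper-address vrf default 40.1.1.1 giaddr 39.1.1.2
 !
 interface GigabitEthernet0/1/0/14 proxy profile MYPROXY
!

The resulting bindings can typically be inspected with show dhcp ipv4 proxy binding.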

 

 

Setup configuration, sample operation and debug verification

Setup used:

Slide1.JPG

Note: one commonly forgotten thing is that both DHCP servers need to have a route back to the giaddr (39.1.1.2 in this example, i.e. to the 39.1.1.0/24 subnet).

So make sure there is a static route, or, when running an IGP, that this interface/address is included (passively).
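
For example, on SERVER 1 (IOS) a static route pointing the client subnet back towards the relay could look like the line below; the next hop is an assumed address for the ASR9000's server-facing interface and is not taken from the lab setup:

ip route 39.1.1.0 255.255.255.0 40.1.1.254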


The ASR9000-related configuration was provided above.

The IOS DHCP server configuration is as follows:

 

SERVER 1

ip dhcp excluded-address 39.1.1.1 39.1.1.100

 

ip dhcp pool SERVER1
   network 39.1.1.0 255.255.255.0
   default-router 39.1.1.1
   dns-server 1.2.3.4

 

SERVER 2

ip dhcp excluded-address 39.1.1.1 39.1.1.200

 

ip dhcp pool SERVER2
   network 39.1.1.0 255.255.255.0
   default-router 39.1.1.1
   dns-server 6.7.8.9
!

 

No specific configuration is required on the interfaces other than IP addresses and proper routing.

 

DEBUGS

 

dhcpd relay all flag is ON
dhcpd relay errors flag is ON
dhcpd relay events flag is ON
dhcpd relay internals flag is ON
dhcpd errors flag is ON
dhcpd events flag is ON
dhcpd packet flag is ON


RP/0/RSP1/CPU0:A9K-BOTTOM(config-if)#RP/0/RSP1/CPU0:Mar 29 13:23:27.980 : dhcpd[1064]: DHCPD PACKET: TP564: L3 Packet RX from addr = 0.0.0.0, port = 68, application len 576, vrf 0x60000000 (1610612736), tbl 0xe0000000 (3758096384)
RP/0/RSP1/CPU0:Mar 29 13:23:27.980 : dhcpd[1064]: DHCPD PACKET:  >dhcpd_iox_l3_conn_hlr: recv, src 0.0.0.0 dst 255.255.255.255, L3I GigabitEthernet0_1_0_14, Null output,
RP/0/RSP1/CPU0:Mar 29 13:23:27.981 : dhcpd[1064]: DHCPD PACKET: TP608: BOOTREQUEST Rx, chaddr 0003.a0fd.28a8, interface GigabitEthernet0_1_0_14
RP/0/RSP1/CPU0:Mar 29 13:23:27.981 : dhcpd[1064]: DHCPD PACKET: TP982: DISCOVER Rx, base mode, interface GigabitEthernet0_1_0_14, chaddr 0003.a0fd.28a8

 

DHCP Discover received from the client on the interface.


RP/0/RSP1/CPU0:Mar 29 13:23:27.981 : dhcpd[1064]: DHCPD PACKET: TP615: DISCOVER Rx, Relay chaddr 0003.a0fd.28a8
RP/0/RSP1/CPU0:Mar 29 13:23:27.981 : dhcpd[1064]: DHCPD PACKET: TP790: Client request opt 82 not untrust, chaddr 0003.a0fd.28a8
RP/0/RSP1/CPU0:Mar 29 13:23:27.982 : dhcpd[1064]: DHCPD PACKET: TP791: Giaddr not present, Set giaddr 39.1.1.2, chaddr 0003.a0fd.28a8

 

The GIADDR is set to the address of the interface receiving the packet; this GIADDR is used to locate the right pool on the servers. So the servers need to have a pool for the 39.1.1.0/24 subnet.


RP/0/RSP1/CPU0:Mar 29 13:23:27.982 : dhcpd[1064]: DHCPD PACKET: TP792: Client request opt 82 nothing to add, chaddr 0003.a0fd.28a8
RP/0/RSP1/CPU0:Mar 29 13:23:27.982 : dhcpd[1064]: DHCPD PACKET: TP571: L3 packet TX unicast to dest 40.1.1.1, port 67, source 39.1.1.2, vrf 0x60000000 (1610612736), tbl 0xe0000000 (3758096384)

 

Unicast packet forwarded to the first helper


RP/0/RSP1/CPU0:Mar 29 13:23:27.982 : dhcpd[1064]: DHCPD PACKET:  >dhcpd_iox_l3_unicast_packet: xmit, src 0.0.0.0 dst 255.255.255.255, L3I GigabitEthernet0_1_0_14, Null output, FROM L3
RP/0/RSP1/CPU0:Mar 29 13:23:27.982 : dhcpd[1064]: DHCPD PACKET: TP571: L3 packet TX unicast to dest 55.1.1.2, port 67, source 39.1.1.2, vrf 0x60000000 (1610612736), tbl 0xe0000000 (3758096384)

 

Unicast packet forwarded to the second helper

They are sent in the order of configuration, and the configuration is maintained in order from low to high IP address.


RP/0/RSP1/CPU0:Mar 29 13:23:27.983 : dhcpd[1064]: DHCPD PACKET:  >dhcpd_iox_l3_unicast_packet: xmit, src 0.0.0.0 dst 255.255.255.255, L3I GigabitEthernet0_1_0_14, Null output, FROM L3
RP/0/RSP1/CPU0:Mar 29 13:23:27.983 : dhcpd[1064]: DHCPD PACKET: TP835: DISCOVER Tx, chaddr 0003.a0fd.28a8

 


RP/0/RSP1/CPU0:Mar 29 13:23:27.984 : dhcpd[1064]: DHCPD PACKET: TP564: L3 Packet RX from addr = 40.1.1.1, port = 67, application len 300, vrf 0x60000000 (1610612736), tbl 0xe0000000 (3758096384)
RP/0/RSP1/CPU0:Mar 29 13:23:27.984 : dhcpd[1064]: DHCPD PACKET:  >dhcpd_iox_l3_conn_hlr: recv, src 40.1.1.1 dst 39.1.1.2, L3I GigabitEthernet0_1_0_1, Null output,
RP/0/RSP1/CPU0:Mar 29 13:23:27.984 : dhcpd[1064]: DHCPD PACKET: TP609: BOOTREPLY Rx, chaddr 0003.a0fd.28a8, interface GigabitEthernet0_1_0_1
RP/0/RSP1/CPU0:Mar 29 13:23:27.985 : dhcpd[1064]: DHCPD PACKET: TP616: OFFER Rx, Relay chaddr 0003.a0fd.28a8, yiaddr 39.1.1.216

 

Offer received and IP address offered to us


RP/0/RSP1/CPU0:Mar 29 13:23:27.985 : dhcpd[1064]: DHCPD PACKET: TP799: Server reply opt 82 not present, chaddr 0003.a0fd.28a8
RP/0/RSP1/CPU0:Mar 29 13:23:27.985 : dhcpd[1064]: DHCPD PACKET: TP892: Broadcast-flag policy Ignore, chaddr 0003.a0fd.28a8
RP/0/RSP1/CPU0:Mar 29 13:23:27.985 : dhcpd[1064]: DHCPD PACKET: TP572: L3 packet TX bcast on intf GigabitEthernet0/1/0/14 to port 68, source 39.1.1.2, vrf 0x60000000 (1610612736), tbl 0xe0000000 (3758096384)
RP/0/RSP1/CPU0:Mar 29 13:23:27.985 : dhcpd[1064]: DHCPD PACKET:  >dhcpd_iox_l3_broadcast_packet: xmit, src 40.1.1.1 dst 39.1.1.2, L3I GigabitEthernet0_1_0_1, Null output, FROM L3
RP/0/RSP1/CPU0:Mar 29 13:23:27.985 : dhcpd[1064]: DHCPD PACKET: TP838: OFFER Tx, chaddr 0003.a0fd.28a8, yiaddr 39.1.1.216
RP/0/RSP1/CPU0:Mar 29 13:23:27.991 : dhcpd[1064]: DHCPD PACKET: TP564: L3 Packet RX from addr = 0.0.0.0, port = 68, application len 576, vrf 0x60000000 (1610612736), tbl 0xe0000000 (3758096384)
RP/0/RSP1/CPU0:Mar 29 13:23:27.991 : dhcpd[1064]: DHCPD PACKET:  >dhcpd_iox_l3_conn_hlr: recv, src 0.0.0.0 dst 255.255.255.255, L3I GigabitEthernet0_1_0_14, Null output,
RP/0/RSP1/CPU0:Mar 29 13:23:27.991 : dhcpd[1064]: DHCPD PACKET: TP608: BOOTREQUEST Rx, chaddr 0003.a0fd.28a8, interface GigabitEthernet0_1_0_14
RP/0/RSP1/CPU0:Mar 29 13:23:27.991 : dhcpd[1064]: DHCPD PACKET: TP617: REQUEST Rx, Relay chaddr 0003.a0fd.28a8
RP/0/RSP1/CPU0:Mar 29 13:23:27.991 : dhcpd[1064]: DHCPD PACKET: TP790: Client request opt 82 not untrust, chaddr 0003.a0fd.28a8
RP/0/RSP1/CPU0:Mar 29 13:23:27.992 : dhcpd[1064]: DHCPD PACKET: TP791: Giaddr not present, Set giaddr 39.1.1.2, chaddr 0003.a0fd.28a8
RP/0/RSP1/CPU0:Mar 29 13:23:27.992 : dhcpd[1064]: DHCPD PACKET: TP792: Client request opt 82 nothing to add, chaddr 0003.a0fd.28a8

 

Request packet received from client.


RP/0/RSP1/CPU0:Mar 29 13:23:27.992 : dhcpd[1064]: DHCPD PACKET: TP571: L3 packet TX unicast to dest 40.1.1.1, port 67, source 39.1.1.2, vrf 0x60000000 (1610612736), tbl 0xe0000000 (3758096384)
RP/0/RSP1/CPU0:Mar 29 13:23:27.992 : dhcpd[1064]: DHCPD PACKET:  >dhcpd_iox_l3_unicast_packet: xmit, src 0.0.0.0 dst 255.255.255.255, L3I GigabitEthernet0_1_0_14, Null output, FROM L3

 

Forwarded to helper 1


RP/0/RSP1/CPU0:Mar 29 13:23:27.992 : dhcpd[1064]: DHCPD PACKET: TP571: L3 packet TX unicast to dest 55.1.1.2, port 67, source 39.1.1.2, vrf 0x60000000 (1610612736), tbl 0xe0000000 (3758096384)
RP/0/RSP1/CPU0:Mar 29 13:23:27.993 : dhcpd[1064]: DHCPD PACKET:  >dhcpd_iox_l3_unicast_packet: xmit, src 0.0.0.0 dst 255.255.255.255, L3I GigabitEthernet0_1_0_14, Null output, FROM L3

 

Forwarded to helper 2; the client's Request is relayed to both servers.


RP/0/RSP1/CPU0:Mar 29 13:23:27.993 : dhcpd[1064]: DHCPD PACKET: TP841: REQUEST Tx, chaddr 0003.a0fd.28a8
RP/0/RSP1/CPU0:Mar 29 13:23:27.994 : dhcpd[1064]: DHCPD PACKET: TP564: L3 Packet RX from addr = 40.1.1.1, port = 67, application len 300, vrf 0x60000000 (1610612736), tbl 0xe0000000 (3758096384)
RP/0/RSP1/CPU0:Mar 29 13:23:27.994 : dhcpd[1064]: DHCPD PACKET:  >dhcpd_iox_l3_conn_hlr: recv, src 40.1.1.1 dst 39.1.1.2, L3I GigabitEthernet0_1_0_1, Null output,
RP/0/RSP1/CPU0:Mar 29 13:23:27.994 : dhcpd[1064]: DHCPD PACKET: TP609: BOOTREPLY Rx, chaddr 0003.a0fd.28a8, interface GigabitEthernet0_1_0_1
RP/0/RSP1/CPU0:Mar 29 13:23:27.994 : dhcpd[1064]: DHCPD PACKET: TP619: ACK Rx, Relay chaddr 0003.a0fd.28a8, yiaddr 39.1.1.216
RP/0/RSP1/CPU0:Mar 29 13:23:27.994 : dhcpd[1064]: DHCPD PACKET: TP799: Server reply opt 82 not present, chaddr 0003.a0fd.28a8
RP/0/RSP1/CPU0:Mar 29 13:23:27.994 : dhcpd[1064]: DHCPD PACKET: TP892: Broadcast-flag policy Ignore, chaddr 0003.a0fd.28a8

 

ACK received from our server


RP/0/RSP1/CPU0:Mar 29 13:23:27.995 : dhcpd[1064]: DHCPD PACKET: TP572: L3 packet TX bcast on intf GigabitEthernet0/1/0/14 to port 68, source 39.1.1.2, vrf 0x60000000 (1610612736), tbl 0xe0000000 (3758096384)
RP/0/RSP1/CPU0:Mar 29 13:23:27.995 : dhcpd[1064]: DHCPD PACKET:  >dhcpd_iox_l3_broadcast_packet: xmit, src 40.1.1.1 dst 39.1.1.2, L3I GigabitEthernet0_1_0_1, Null output, FROM L3
RP/0/RSP1/CPU0:Mar 29 13:23:27.995 : dhcpd[1064]: DHCPD PACKET: TP847: ACK Tx, chaddr 0003.a0fd.28a8, yiaddr 39.1.1.216
RP/0/RSP1/CPU0:Mar 29 13:23:29.985 : dhcpd[1064]: DHCPD PACKET: TP564: L3 Packet RX from addr = 55.1.1.2, port = 67, application len 300, vrf 0x60000000 (1610612736), tbl 0xe0000000 (3758096384)
RP/0/RSP1/CPU0:Mar 29 13:23:29.985 : dhcpd[1064]: DHCPD PACKET:  >dhcpd_iox_l3_conn_hlr: recv, src 55.1.1.2 dst 39.1.1.2, L3I GigabitEthernet0_1_0_0, Null output,
RP/0/RSP1/CPU0:Mar 29 13:23:29.986 : dhcpd[1064]: DHCPD PACKET: TP609: BOOTREPLY Rx, chaddr 0003.a0fd.28a8, interface GigabitEthernet0_1_0_0
RP/0/RSP1/CPU0:Mar 29 13:23:29.986 : dhcpd[1064]: DHCPD PACKET: TP259: BOOTREPLY Drop, Client not found, interface GigabitEthernet0_1_0_0, chaddr 0003.a0fd.28a8

 

This is the symptom described: the 2nd offer, pre XR 4.1, is dropped by the DHCP relay agent in IOS-XR. Post 4.1 we will forward all offers to the client.

 

Slide2.JPG

This picture illustrates that behavior whereby the 2nd offer is not forwarded: the dashed red line indicates the offer that made it through, while the solid green line is not forwarded and is dropped by the ASR9000 DHCP relay agent.

 

How RELAY, SNOOPING and PROXY scale, how they come together, and their relation to BNG

 

Relay

If you have relay, we punt the DHCP packets from the client, hand them over to the DHCP process, convert them into a directed broadcast or unicast towards the helper, and really only relay the messages back and forth. Other than CPU there is no operational overhead. No state is maintained, except for a few-second "hole" between server and client for when the offer comes back.

Relay scale

There is no scale limit associated with this per se, other than CPU (relay runs off the LC CPU for physical/sub-interfaces and off the RP for bundle/BVI interfaces).

 

Proxy

If you have proxy, the transaction is controlled by the DHCP process: the client talks to the A9K, the A9K talks to the DHCP server, and it maintains a binding that expires on lease expiry in the absence of a renew.

 

Proxy Scale

The scale factors here are CPU for the message transactions, ARP interaction (since DHCP proxy will also install an ARP forwarding adjacency), and memory for the binding table itself. Another scale factor is how far the TX adjacency table can grow from ARP; we tested up to 128K, but if you have enough shared memory this can go further than that, depending on how far the LC CPU (physical sub-interfaces) or RP (BVI/bundle) can increase its shmem window (this is really like a RAM disk and can grow dynamically as needed). The other part is the TX adjacency table of the NPU, which has a defined limit of about 128K, I believe (memory size in resolve).

Snooping

Then there is snooping; this is really the same as proxy, but in an L2 environment AND with the notion that the binding table that is built is installed in the hardware forwarding. This is capped at 32K per NPU. The advantage of snooping is that, with the binding in hardware, we can do extra verification checks on ingress to make sure the MAC/IP/port matches what is in the table. For egress traffic, the installed TX adjacency is used.
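
For reference, DHCP snooping on the ASR9000 is attached to a bridge domain rather than to an L3 interface. A rough sketch, assuming the snoop profile and bridge-domain attach syntax of recent XR releases; names are illustrative, and trust settings for server-facing ports are configured separately:

dhcp ipv4
 profile SNOOPGRP snoop
 !
!
l2vpn
 bridge group BG1
  bridge-domain BD1
   dhcp ipv4 snoop profile SNOOPGRP
  !
 !
!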

 

Bng and Proxy

Now when you do BNG with proxy, it combines the operation of proxy and snooping: a binding created by proxy is now also installed in hardware, with the checks mentioned above in effect. This caps BNG/proxy at 32K per NPU.

Summary

For proxy, snoop and BNG/proxy, the 128K limit is really a testing limit defined by:
- (LC/RP) CPU capacity
- (LC/RP) CPU memory
- TX adjacency table size

For snoop and BNG/proxy, since the binding is installed in hardware, the cap is 32K per NPU, which is a hardware limit.

For those sessions not needing BNG but requiring DHCP services, it might make more sense to do relay here, to allow for better memory/CPU control and scale, but there is nothing against the use of proxy per se.

 

 

Related Information

All broadcasts are handled on the RP. In the ASR9000, or any XR platform for that matter, we use LPTS (Local Packet Transport Services), similar to what IOS calls CoPP (control plane policing), to police these punted packets.

The policing is on a per-NPU basis. See "Packet trace and troubleshooting in the ASR9000" for more information on packet troubleshooting on the ASR9000.
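
To check how the DHCP punt policer is doing on a given line card (and whether punted packets are being dropped), the LPTS hardware policer counters can be inspected; a sketch, with the location being an example:

RP/0/RSP1/CPU0:A9K-BOTTOM#show lpts pifib hardware police location 0/1/CPU0 | include DHCP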

Comments
gogie
Level 1
We are having a strange issue with BNG. We need to run DHCP in server mode, and in this case some old CPEs are not working.
Things boil down to the fact that when BNG is acting as a DHCP server, it places DHCP option 53 (message type) at the end of the packet, and some dumb CPEs are unable to parse it there.
However, when BNG is used as a DHCP relay or proxy, option 53 is placed at the beginning of the packet, and the dumb CPEs have no problems.
What can we do about it? Is it possible to somehow influence BNG DHCP to place option 53 at the beginning of the message?
xthuijs
Cisco Employee

What type of CPE and firmware is it that has a problem with this (just out of interest)?

There is no way to customize the order of options in the packet; the only thing I can offer is to file a TAC case to request a SW patch to have the message type created as the first option in the packet.

xander

gogie
Level 1

Thanks Xander,

A number of older Planet and Zyxel ADSL modems in routed mode, that I know for sure. Will have a look whether newer firmware on those helps, and also see what TAC will say.

gogie
Level 1

We have deployed 5.3.4 EFT in the lab and the order of DHCP options has changed, that is, option 53 is at the beginning. Most likely it will solve the problem; we will try with the bad CPEs.

xthuijs
Cisco Employee

Hi gogie, thanks for the follow-up, good to hear that there is a change that might solve this problem with this CPE type.

xander

Hi,

I need to enable a simple DHCP relay function on an interface. Basically just receiving the broadcast DHCP Discover and unicasting it to the DHCP server in the global vrf.

This is the config I used, as per above, which works fine:

dhcp ipv4
profile TEST relay
  helper-address vrf default 10.0.0.1
!
interface Bundle-Ether401.2 relay profile TEST

I'd like to ask some clarifications:

Q1. As soon as I enabled the DHCP profile I got this log:

SUBSCRIBER-SUB_UTIL-5-SESSION_THROTTLE : Subscriber Infra is not ready. Reason: [BNG is deactivated]

Do I actually need a BNG license just for enabling the DHCP relay feature?


Q2. Since the DHCP relay process seems to be taken care of by the RP CPU (RSP440 in my case), are there any guidelines for scale, in terms of the number of DHCP clients and the number of simultaneously relayed messages that the box can comfortably handle?

thanks

Mark

xthuijs
Cisco Employee

hi mark,

Yeah, you know DHCP is closely tied to BNG; there is some functionality used by DHCP that is part of the iedge daemon, which is in the BNG pie.

For standard proxy, snoop or relay, there is no need for the BNG pie (in general; however, we had some releases, I thought it was XR 5.1.x, whereby the BNG pie was needed just to provide the hooks for DHCP; still no license is needed until you start to terminate subscribers a la BNG).

So you can just take the message as informational; in this case the per-subscriber throttling is not available, as that requires the BNG functionality.

Because you are using a bundle access interface, the DHCP process runs on the RSP.

Relay is very simple: message in, convert, message out. This goes as fast as the CPU can drain the receive queue and inject the packets; 1000 pps would not worry this boat.

LPTS, however, limits the number of DHCP discovers that the LC side can receive and punt for processing. This LPTS setting is on a per-LC, per-NPU basis, meaning that if the value is 1000 pps, an LC with say 2 NPUs can punt 2000 pps, if BOTH are enabled for this function obviously.

xander

hi xander,

thanks for the reply.

just realised i forgot the details of the platform ;)

The bundle is shared across two MOD80 10GE ports, a 4x10GE MPA on each MOD80, running RSP440 with 5.3.3.

So in essence I don't need the BNG pie or license for simple relay, as I will not be doing any specific subscriber termination, just traffic flow (termination will happen elsewhere on a wireless controller and ISG).

Being MOD80 with one NPU per MPA, and the bundle shared across both MOD80s, LPTS will allow for a total of 2000 pps. So if we assume 10 pps per DHCP transaction (which is a huge margin?), we can safely do 200 DHCP transactions per second before LPTS kicks in.

cheers

Mark

xthuijs
Cisco Employee

hi mark,

OK, perfect. In XR 5.3 the LPTS policer for DHCP is 4K pps. You'll get that number per NP, so a bundle with 2 members on 2 different NPUs (or LCs) will get 8K in total towards the RSP.

This is doable. One and other is dependent on what other activities the handler/RSP has; if you have it to spare, you can use it for this purpose!

In XR 5.3 you are OK and don't need the BNG pie.

cheers

xander

Great thanks

wilsonribeiro
Level 1

IOS-XR 6.0.2,

I have:

interface Bundle-Ether1
 ipv4 address 177.54.x.x 255.255.255.248
 ipv4 address 177.54.y.y 255.255.255.0 secondary
!
interface Bundle-Ether1.1349
 ipv4 address 177.54.z.z 255.255.255.248
 encapsulation dot1q 1349
!

I can see arp requests on bundle-ether1.1349, which I would like to stop.

18:07:40.707816 00:c1:64:44:7e:73 > ff:ff:ff:ff:ff:ff, ethertype ARP (0x0806), length 60: Request who-has 177.54.x.x tell 177.54.x.x, length 46 
Aleksandar Vidakovic
Cisco Employee

hi Wilson,

this seems to be a bug. Although we do support an L3 main BE interface with sub-interfaces, it is a somewhat unusual design solution. Is this something that you are just trying out in the lab or do you have a real-life scenario where this is causing problems?

regards,

/Aleksandar

wilsonribeiro
Level 1

Hi,

real-life scenario causing problems.

Aleksandar Vidakovic
Cisco Employee

hi Wilson,

can you please open a TAC SR for this? That way we will be able to track the issue properly in our databases.

regards,

/Aleksandar

Your posts never cease to amaze me.  Great stuff.  I know this is old but I am hoping that you (or anyone) could assist.  I posted this on another part of the forums but didn't get a response.  Could be in the wrong location.

So we have an IP helper issue that I cannot figure out for the life of me. From source to destination is 5 hops; they are a mixture of IOS, IOS-XR and NX-OS. I will try to explain it as best as possible without confusing everyone.

Source lives off of a 4948E (IOS). Source can reach destination and vice versa. The helper is configured as I normally do (I have done this many times), WITH the exception that the destination lives in a VM environment. I think this is what is throwing me off, possibly.

The next three hops are all IOS-XR boxes (ASR9010), with the destination living off of a Nexus 6k1.

The configuration all looks good. Do I have to do anything on the transit path, on those routers? I had a sniffer on the destination switch but didn't see any DHCP traffic come in. I then started to run debugs (had to limit them) and saw it leave the source and hit the next hop, but that was it. I didn't see any more traffic on that third hop, so now my thinking is something may need to be done there.

I'm completely stuck.  If anyone needs me to post configurations or do some checks, I certainly will.  Just to recap:

Source lives on 4948e (IOS)

Next 3 hops are IOS-XR devices

Destination lives on Nexus 6k1

Hope someone could assist.  

Thanks.

-Michael
