xthuijs
Cisco Employee

Introduction

In this article we'll explain how DHCP relay works and how broadcasts are forwarded within IOS-XR, because the DHCP relay functionality is configured and handled differently in XR than in classic IOS.

Generally, broadcasts are not forwarded by routers, as they typically live only on the local subnet. However, it is sometimes required to forward these broadcasts, and this article details how that configuration works and what the limitations are.

 

Core Issue

Because broadcasts are handled locally on the subnet, a router will not forward them but will consume them locally. This presents a problem when a particular interface services a subnet with clients that want to use DHCP to obtain their address information. IOS-XR at the time of writing does not have a DHCP server, which means that clients cannot obtain their address from an XR router.

Even if the OS supported a DHCP server, as IOS does, you may want to centralize your DHCP servers in a designated area of your network, which would require you to forward the clients' DHCP requests to that server.

 

IOS employs the concept of an ip helper-address to take these broadcasts and send them out as a directed broadcast or unicast to one or more servers.

 

Note that by default broadcasts are NOT forwarded by a router.

Also, only UDP broadcasts can be forwarded; DHCP, which is UDP based, can therefore benefit from this.

 

Resolution

 

Broadcast forwarding

 

In order to forward UDP broadcasts, IOS uses the ip helper-address configuration under the interface that is expected to receive the broadcasts to be forwarded. This then pertains to ALL UDP broadcasts that are received.

 

fp7200-6(config-if)#ip helper-address ?
  A.B.C.D  IP destination address
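For illustration, a minimal IOS interface configuration might look like the following; the interface name and addresses are only examples, borrowed from the topology used later in this article:

interface GigabitEthernet0/1
 ip address 39.1.1.1 255.255.255.0
 ip helper-address 40.1.1.1
 ip helper-address 55.1.1.2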

 

In IOS-XR, DHCP broadcasts are handled and configured differently than other UDP-based broadcasts.

 

For IOS-XR to forward (non-DHCP) broadcasts, the following configuration is required:

 

RP/0/RSP1/CPU0:A9K-BOTTOM(config)#forward-protocol udp ?
  <1-65535>    Port number

 

Some well-known port numbers are converted into their names, such as DNS and NBNS.

 

The interface configuration for IOS-XR looks like this:

 

RP/0/RSP1/CPU0:A9K-BOTTOM(config)#int g0/1/0/14
RP/0/RSP1/CPU0:A9K-BOTTOM(config-if)#ipv4 helper-address ?
  A.B.C.D  IPv4 destination address

 

Note again that this configuration will not forward DHCP packets.
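Putting the two pieces together, a sketch of a complete non-DHCP UDP forwarding configuration on XR could look like the example below. The port (69/TFTP) and the helper address are only illustrative, and the exact forward-protocol semantics (which ports are forwarded by default and which need to be enabled or disabled) should be verified against the documentation for your release:

forward-protocol udp 69
!
interface GigabitEthernet0/1/0/14
 ipv4 helper-address 40.1.1.1
!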

 

DHCP

There are two key ways to handle DHCP: one is via DHCP RELAY and the other is via DHCP PROXY.

 

Relay has been supported for a long time. XR 4.2.1 adds the full capability of proxy (in combination with IP subscribers).

 

Configuration, in its simplest form:

 

dhcp ipv4
profile MYGROUP relay
  helper-address vrf default 40.1.1.1
  helper-address vrf default 55.1.1.2
!
interface GigabitEthernet0/1/0/14 relay profile MYGROUP
!

 

There is no further interface-specific configuration required; as you are used to from IOS-XR, all protocol-specific configuration is done under the protocol definition and is not split across different sections.
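Once the profile is attached, relay operation can be verified with the DHCP show commands, for example (output format varies by release):

RP/0/RSP1/CPU0:A9K-BOTTOM#show dhcp ipv4 relay statistics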

Relay

Relay is the simpler of the two in its handling of DHCP broadcasts: we effectively take the DHCP broadcast messages, convert them into unicasts destined to the configured helper addresses, and send the messages back and forth; in fact we are relaying them between the client and the helper addresses. We maintain no state pertaining to DHCP, as we are simply passing the messages across. The return packets from the helpers are sent back as broadcasts onto the subnet we are serving.

 

In dhcp relay mode (or broadcast forwarding for that matter) packets are sent to all helper addresses.

 

In XR DHCP relay prior to XR 4.1, as described by CSCto39520, we only forward one offer back to the client if we receive multiple offers from the various helper addresses. While this still provides redundancy, in certain cases it is not desirable to have only one offer handed to the client.

The rationale in XR was that we maintain temporary state that links the request ID to the interface; once the first offer comes in, that state is removed. This prevents the forwarding of subsequent offers.

While normally this state and interface linkage is not needed, because we could map the giaddr back to the interface and use that, in the scenario where we run unnumbered to a loopback serving multiple interfaces this state is useful, so we send the offer out only on the interested interface.

 

Post XR 4.1 we leave that state in place for 1 minute, so all offers received in that time period are forwarded.

Memory is not an issue; we could theoretically serve 1M requests per minute.

However, things are bounded by the LPTS punt policer and the process prioritization.

 

For some actual numbers, DHCPv4 relay over BVI can accommodate 200K sessions, and the qualified CPS rate is 150 on RSP440 and 250 on RSP880. DHCPv4 relay is stateless, so no sessions are created.

 

The 200K limitation is enforced because of adjacency creation: the adjacency is built through ARP learning for the end user while resolving gateway IP addresses.

 

Consider 250 CPS (calls per second): since the ASR9K is acting as relay, there are 4 message transactions on the access side (Discover, Offer, Request, Ack) and also 4 message transactions on the server side.

So, overall, there are 8 message transactions per session.

That means 250*8 = 2,000 message transactions per second, or 250*8*60 = 120K message transactions per minute.

 

 

Currently DHCP replies are sent as broadcasts to the client; after enabling the CLI "broadcast-flag policy check", the relay will send the reply to the client as unicast (based on the broadcast flag in the client's request).
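Assuming the command sits under the relay profile, the configuration sketch for the example profile used above would be:

dhcp ipv4
 profile MYGROUP relay
  broadcast-flag policy check
  helper-address vrf default 40.1.1.1
  helper-address vrf default 55.1.1.2
 !
!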

 

Proxy

In DHCP PROXY mode we do maintain state, as we are handling the DHCP request on behalf of the requesting client. It looks as if we are the originator of the request to the DHCP servers. Proxy, for that matter, is stateful.

Proxy also stores the binding locally on the proxy agent, which relay does not.
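By analogy with the relay configuration shown earlier, a minimal proxy configuration sketch could look like the following; the profile name and addresses are illustrative, and exact options (such as the giaddr keyword on the helper-address) vary by release:

dhcp ipv4
 profile MYPROXY proxy
  helper-address vrf default 40.1.1.1 giaddr 39.1.1.2
 !
 interface GigabitEthernet0/1/0/14 proxy profile MYPROXY
!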

 

 

Setup configuration, sample operation and debug verification

Setup used:

Slide1.JPG

Note: one commonly forgotten thing is that both DHCP servers need to have a route back to the relay's giaddr (the 39.1.1.0/24 subnet, giaddr 39.1.1.2, in this example).

So make sure there is a static route, or, when running an IGP, that this interface/address is included (passively).
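On an IOS-based DHCP server this can be as simple as a static route pointing back towards the ASR9000; the next hop below is only an assumed example, since the server-side addressing is not fully shown in this article:

ip route 39.1.1.0 255.255.255.0 40.1.1.2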


The ASR9000-related configuration was provided above.

The IOS DHCP server configuration is as follows:

 

SERVER 1

ip dhcp excluded-address 39.1.1.1 39.1.1.100

 

ip dhcp pool SERVER1
   network 39.1.1.0 255.255.255.0
   default-router 39.1.1.1
   dns-server 1.2.3.4

 

SERVER 2

ip dhcp excluded-address 39.1.1.1 39.1.1.200

 

ip dhcp pool SERVER2
   network 39.1.1.0 255.255.255.0
   default-router 39.1.1.1
   dns-server 6.7.8.9
!

 

No specific configuration is required on the interfaces other than IP addresses and proper routing.

 

DEBUGS

 

dhcpd relay all flag is ON
dhcpd relay errors flag is ON
dhcpd relay events flag is ON
dhcpd relay internals flag is ON
dhcpd errors flag is ON
dhcpd events flag is ON
dhcpd packet flag is ON


RP/0/RSP1/CPU0:A9K-BOTTOM(config-if)#RP/0/RSP1/CPU0:Mar 29 13:23:27.980 : dhcpd[1064]: DHCPD PACKET: TP564: L3 Packet RX from addr = 0.0.0.0, port = 68, application len 576, vrf 0x60000000 (1610612736), tbl 0xe0000000 (3758096384)
RP/0/RSP1/CPU0:Mar 29 13:23:27.980 : dhcpd[1064]: DHCPD PACKET:  >dhcpd_iox_l3_conn_hlr: recv, src 0.0.0.0 dst 255.255.255.255, L3I GigabitEthernet0_1_0_14, Null output,
RP/0/RSP1/CPU0:Mar 29 13:23:27.981 : dhcpd[1064]: DHCPD PACKET: TP608: BOOTREQUEST Rx, chaddr 0003.a0fd.28a8, interface GigabitEthernet0_1_0_14
RP/0/RSP1/CPU0:Mar 29 13:23:27.981 : dhcpd[1064]: DHCPD PACKET: TP982: DISCOVER Rx, base mode, interface GigabitEthernet0_1_0_14, chaddr 0003.a0fd.28a8

 

DHCP discover (BOOTREQUEST) received on the interface.


RP/0/RSP1/CPU0:Mar 29 13:23:27.981 : dhcpd[1064]: DHCPD PACKET: TP615: DISCOVER Rx, Relay chaddr 0003.a0fd.28a8
RP/0/RSP1/CPU0:Mar 29 13:23:27.981 : dhcpd[1064]: DHCPD PACKET: TP790: Client request opt 82 not untrust, chaddr 0003.a0fd.28a8
RP/0/RSP1/CPU0:Mar 29 13:23:27.982 : dhcpd[1064]: DHCPD PACKET: TP791: Giaddr not present, Set giaddr 39.1.1.2, chaddr 0003.a0fd.28a8

 

Setting the GIADDR to the address of the interface receiving the packet; this GIADDR is used to locate the right pool on the servers. So the servers need to have a pool for the 39.1.1.0/24 subnet.


RP/0/RSP1/CPU0:Mar 29 13:23:27.982 : dhcpd[1064]: DHCPD PACKET: TP792: Client request opt 82 nothing to add, chaddr 0003.a0fd.28a8
RP/0/RSP1/CPU0:Mar 29 13:23:27.982 : dhcpd[1064]: DHCPD PACKET: TP571: L3 packet TX unicast to dest 40.1.1.1, port 67, source 39.1.1.2, vrf 0x60000000 (1610612736), tbl 0xe0000000 (3758096384)

 

Unicast packet forwarded to the first helper


RP/0/RSP1/CPU0:Mar 29 13:23:27.982 : dhcpd[1064]: DHCPD PACKET:  >dhcpd_iox_l3_unicast_packet: xmit, src 0.0.0.0 dst 255.255.255.255, L3I GigabitEthernet0_1_0_14, Null output, FROM L3
RP/0/RSP1/CPU0:Mar 29 13:23:27.982 : dhcpd[1064]: DHCPD PACKET: TP571: L3 packet TX unicast to dest 55.1.1.2, port 67, source 39.1.1.2, vrf 0x60000000 (1610612736), tbl 0xe0000000 (3758096384)

 

Unicast packet forwarded to the second helper.

They are sent in the order of configuration, and the configuration is maintained in order from low to high IP address.


RP/0/RSP1/CPU0:Mar 29 13:23:27.983 : dhcpd[1064]: DHCPD PACKET:  >dhcpd_iox_l3_unicast_packet: xmit, src 0.0.0.0 dst 255.255.255.255, L3I GigabitEthernet0_1_0_14, Null output, FROM L3
RP/0/RSP1/CPU0:Mar 29 13:23:27.983 : dhcpd[1064]: DHCPD PACKET: TP835: DISCOVER Tx, chaddr 0003.a0fd.28a8

 


RP/0/RSP1/CPU0:Mar 29 13:23:27.984 : dhcpd[1064]: DHCPD PACKET: TP564: L3 Packet RX from addr = 40.1.1.1, port = 67, application len 300, vrf 0x60000000 (1610612736), tbl 0xe0000000 (3758096384)
RP/0/RSP1/CPU0:Mar 29 13:23:27.984 : dhcpd[1064]: DHCPD PACKET:  >dhcpd_iox_l3_conn_hlr: recv, src 40.1.1.1 dst 39.1.1.2, L3I GigabitEthernet0_1_0_1, Null output,
RP/0/RSP1/CPU0:Mar 29 13:23:27.984 : dhcpd[1064]: DHCPD PACKET: TP609: BOOTREPLY Rx, chaddr 0003.a0fd.28a8, interface GigabitEthernet0_1_0_1
RP/0/RSP1/CPU0:Mar 29 13:23:27.985 : dhcpd[1064]: DHCPD PACKET: TP616: OFFER Rx, Relay chaddr 0003.a0fd.28a8, yiaddr 39.1.1.216

 

Offer received and IP address offered to us


RP/0/RSP1/CPU0:Mar 29 13:23:27.985 : dhcpd[1064]: DHCPD PACKET: TP799: Server reply opt 82 not present, chaddr 0003.a0fd.28a8
RP/0/RSP1/CPU0:Mar 29 13:23:27.985 : dhcpd[1064]: DHCPD PACKET: TP892: Broadcast-flag policy Ignore, chaddr 0003.a0fd.28a8
RP/0/RSP1/CPU0:Mar 29 13:23:27.985 : dhcpd[1064]: DHCPD PACKET: TP572: L3 packet TX bcast on intf GigabitEthernet0/1/0/14 to port 68, source 39.1.1.2, vrf 0x60000000 (1610612736), tbl 0xe0000000 (3758096384)
RP/0/RSP1/CPU0:Mar 29 13:23:27.985 : dhcpd[1064]: DHCPD PACKET:  >dhcpd_iox_l3_broadcast_packet: xmit, src 40.1.1.1 dst 39.1.1.2, L3I GigabitEthernet0_1_0_1, Null output, FROM L3
RP/0/RSP1/CPU0:Mar 29 13:23:27.985 : dhcpd[1064]: DHCPD PACKET: TP838: OFFER Tx, chaddr 0003.a0fd.28a8, yiaddr 39.1.1.216
RP/0/RSP1/CPU0:Mar 29 13:23:27.991 : dhcpd[1064]: DHCPD PACKET: TP564: L3 Packet RX from addr = 0.0.0.0, port = 68, application len 576, vrf 0x60000000 (1610612736), tbl 0xe0000000 (3758096384)
RP/0/RSP1/CPU0:Mar 29 13:23:27.991 : dhcpd[1064]: DHCPD PACKET:  >dhcpd_iox_l3_conn_hlr: recv, src 0.0.0.0 dst 255.255.255.255, L3I GigabitEthernet0_1_0_14, Null output,
RP/0/RSP1/CPU0:Mar 29 13:23:27.991 : dhcpd[1064]: DHCPD PACKET: TP608: BOOTREQUEST Rx, chaddr 0003.a0fd.28a8, interface GigabitEthernet0_1_0_14
RP/0/RSP1/CPU0:Mar 29 13:23:27.991 : dhcpd[1064]: DHCPD PACKET: TP617: REQUEST Rx, Relay chaddr 0003.a0fd.28a8
RP/0/RSP1/CPU0:Mar 29 13:23:27.991 : dhcpd[1064]: DHCPD PACKET: TP790: Client request opt 82 not untrust, chaddr 0003.a0fd.28a8
RP/0/RSP1/CPU0:Mar 29 13:23:27.992 : dhcpd[1064]: DHCPD PACKET: TP791: Giaddr not present, Set giaddr 39.1.1.2, chaddr 0003.a0fd.28a8
RP/0/RSP1/CPU0:Mar 29 13:23:27.992 : dhcpd[1064]: DHCPD PACKET: TP792: Client request opt 82 nothing to add, chaddr 0003.a0fd.28a8

 

Request packet received from client.


RP/0/RSP1/CPU0:Mar 29 13:23:27.992 : dhcpd[1064]: DHCPD PACKET: TP571: L3 packet TX unicast to dest 40.1.1.1, port 67, source 39.1.1.2, vrf 0x60000000 (1610612736), tbl 0xe0000000 (3758096384)
RP/0/RSP1/CPU0:Mar 29 13:23:27.992 : dhcpd[1064]: DHCPD PACKET:  >dhcpd_iox_l3_unicast_packet: xmit, src 0.0.0.0 dst 255.255.255.255, L3I GigabitEthernet0_1_0_14, Null output, FROM L3

 

Forwarded to helper 1


RP/0/RSP1/CPU0:Mar 29 13:23:27.992 : dhcpd[1064]: DHCPD PACKET: TP571: L3 packet TX unicast to dest 55.1.1.2, port 67, source 39.1.1.2, vrf 0x60000000 (1610612736), tbl 0xe0000000 (3758096384)
RP/0/RSP1/CPU0:Mar 29 13:23:27.993 : dhcpd[1064]: DHCPD PACKET:  >dhcpd_iox_l3_unicast_packet: xmit, src 0.0.0.0 dst 255.255.255.255, L3I GigabitEthernet0_1_0_14, Null output, FROM L3

 

Forwarded to helper 2; the client's request is relayed to both servers.


RP/0/RSP1/CPU0:Mar 29 13:23:27.993 : dhcpd[1064]: DHCPD PACKET: TP841: REQUEST Tx, chaddr 0003.a0fd.28a8
RP/0/RSP1/CPU0:Mar 29 13:23:27.994 : dhcpd[1064]: DHCPD PACKET: TP564: L3 Packet RX from addr = 40.1.1.1, port = 67, application len 300, vrf 0x60000000 (1610612736), tbl 0xe0000000 (3758096384)
RP/0/RSP1/CPU0:Mar 29 13:23:27.994 : dhcpd[1064]: DHCPD PACKET:  >dhcpd_iox_l3_conn_hlr: recv, src 40.1.1.1 dst 39.1.1.2, L3I GigabitEthernet0_1_0_1, Null output,
RP/0/RSP1/CPU0:Mar 29 13:23:27.994 : dhcpd[1064]: DHCPD PACKET: TP609: BOOTREPLY Rx, chaddr 0003.a0fd.28a8, interface GigabitEthernet0_1_0_1
RP/0/RSP1/CPU0:Mar 29 13:23:27.994 : dhcpd[1064]: DHCPD PACKET: TP619: ACK Rx, Relay chaddr 0003.a0fd.28a8, yiaddr 39.1.1.216
RP/0/RSP1/CPU0:Mar 29 13:23:27.994 : dhcpd[1064]: DHCPD PACKET: TP799: Server reply opt 82 not present, chaddr 0003.a0fd.28a8
RP/0/RSP1/CPU0:Mar 29 13:23:27.994 : dhcpd[1064]: DHCPD PACKET: TP892: Broadcast-flag policy Ignore, chaddr 0003.a0fd.28a8

 

ACK received from our server


RP/0/RSP1/CPU0:Mar 29 13:23:27.995 : dhcpd[1064]: DHCPD PACKET: TP572: L3 packet TX bcast on intf GigabitEthernet0/1/0/14 to port 68, source 39.1.1.2, vrf 0x60000000 (1610612736), tbl 0xe0000000 (3758096384)
RP/0/RSP1/CPU0:Mar 29 13:23:27.995 : dhcpd[1064]: DHCPD PACKET:  >dhcpd_iox_l3_broadcast_packet: xmit, src 40.1.1.1 dst 39.1.1.2, L3I GigabitEthernet0_1_0_1, Null output, FROM L3
RP/0/RSP1/CPU0:Mar 29 13:23:27.995 : dhcpd[1064]: DHCPD PACKET: TP847: ACK Tx, chaddr 0003.a0fd.28a8, yiaddr 39.1.1.216
RP/0/RSP1/CPU0:Mar 29 13:23:29.985 : dhcpd[1064]: DHCPD PACKET: TP564: L3 Packet RX from addr = 55.1.1.2, port = 67, application len 300, vrf 0x60000000 (1610612736), tbl 0xe0000000 (3758096384)
RP/0/RSP1/CPU0:Mar 29 13:23:29.985 : dhcpd[1064]: DHCPD PACKET:  >dhcpd_iox_l3_conn_hlr: recv, src 55.1.1.2 dst 39.1.1.2, L3I GigabitEthernet0_1_0_0, Null output,
RP/0/RSP1/CPU0:Mar 29 13:23:29.986 : dhcpd[1064]: DHCPD PACKET: TP609: BOOTREPLY Rx, chaddr 0003.a0fd.28a8, interface GigabitEthernet0_1_0_0
RP/0/RSP1/CPU0:Mar 29 13:23:29.986 : dhcpd[1064]: DHCPD PACKET: TP259: BOOTREPLY Drop, Client not found, interface GigabitEthernet0_1_0_0, chaddr 0003.a0fd.28a8

 

This is the symptom described: pre XR 4.1, the 2nd offer is dropped by the DHCP relay agent in IOS-XR. Post 4.1 we forward all offers to the client.

 

Slide2.JPG

This picture illustrates that behavior, whereby the 2nd offer is not forwarded. In this case the dashed red line made it through; the solid green line is not forwarded and is dropped by the ASR9000 DHCP relay agent.

 

How RELAY, SNOOPING and PROXY scale, how they come together, and their relation to BNG

 

Relay

If you have relay, we punt the DHCP packets from the client, send them over to the DHCP process, convert them into a directed broadcast or unicast towards the helper, and really only relay the messages back and forth. Other than CPU there is no operational overhead. No state is maintained, except for a few-second "hole" between server and client for when the offer comes back.

Relay scale

There is no scale limit associated with this per se other than CPU (run off the LC CPU for physical/subinterfaces and the RP for bundle/BVI).

 

Proxy

If you have proxy, the transaction is controlled by the DHCP process: the client talks to the A9K, the A9K talks to the DHCP server, and it maintains a binding that expires on lease expiry in the absence of a renew.

 

Proxy Scale

The scale factors here are CPU for the message transactions, ARP interaction (since DHCP proxy also installs an ARP forwarding adjacency), and memory for the binding table itself.
Another scale factor is how far the TX adjacency table can grow from ARP; we tested up to 128k, but if you have enough shared memory this can go further than that, since the LC CPU (physical sub/interfaces) or RP (BVI/bundle) can increase its shmem window (this is really like a RAM disk and can grow dynamically as needed). The other part is the TX adjacency table of the NPU, which has a defined limit of about 128k (memory size in resolve).

Snooping

Then there is snooping; this is really the same as proxy, but in an L2 environment AND with the notion that the binding table that is built is installed in the hardware forwarding. This is capped at 32k per NPU. The advantage of snooping is that, with the binding in hardware, we can do extra verification checks on ingress to make sure the MAC/IP/port matches what is in the table. For egress, the installed TX adjacency is used.

 

Bng and Proxy

Now when you do BNG with proxy, it combines the operation of proxy and snooping:
a binding created by proxy is now also installed in hardware, with the checks mentioned above in effect. This caps BNG/proxy to 32k per NPU.

Summary

For proxy, snooping and BNG/proxy,
the 128k limit is really a testing limit defined by:
- (LC/RP) CPU capability
- (LC/RP) CPU memory
- TX adjacency table

For snooping and BNG/proxy, since the binding is installed in hardware, it is capped at 32k per NPU, which is a hardware limit.

For those sessions that do not need BNG but do require DHCP services, it might make more sense to use relay, to allow for better memory/CPU control and scale; but there is nothing against the use of proxy per se.

 

 

Related Information

All broadcasts are handled on the RP. In the ASR9000, or any XR platform for that matter, we use LPTS (Local Packet Transport Services), similar to what IOS calls CoPP (control plane policing).

The policing is done on a per-NPU basis. See Packet trace and troubleshooting in the ASR9000 for more information on packet troubleshooting in the ASR9000.
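To see the per-NPU LPTS policer values that apply to punted DHCP and other control-plane traffic, a command along these lines can be used (the location is just an example):

RP/0/RSP1/CPU0:A9K-BOTTOM#show lpts pifib hardware police location 0/1/CPU0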

Comments
smailmilak
Level 4

A license is not needed.

You should do some debugging for mcast routing. If it is working on ipsubscriber interface then it should work on regular L3 interface.

Check the IPv4 and mcast routing again.

tomislavcer
Level 1

Hi Xander,

I'm migrating from IOS to IOS XR. On IOS we use DHCP relay on VLAN interface in VRF. Idea was to migrate VLAN interfaces to BVI interfaces but VRF-aware DHCP relay is not supported on BVI interface, right ? What is proposed solution for that situation - to use dot1q tagged IP subinterfaces instead or is there some change about that feature ?

Thanx, Tom

xthuijs
Cisco Employee

hi tom,

you surely can use vrf's and dhcp with helpers in vrfs also on bvi.

for performance, pps, reasons, I would use l3 subinterfaces as much as possible, especially when you only have one EFP (l2transport subintf) per BD.

if you have 2 interfaces serving the same vlan and they share the same subnet, then yes you'd need a BD with a BVI.

cheers!

xander

tomislavcer
Level 1

Thank you Xander.

Can you explain a little bit this performance influence when using BVI - what is the difference when using BVI or subif related to pps ? Did you mean related to DHCP relay or are you talking in general ?

regards

xthuijs
Cisco Employee

yeah sorry tom for not being clear about that: the performance of BVI is irrespective of dhcp, so in general.

the reason is that in BVI, the packet receives an L2 pass and an L3 pass, so sort of a recirculation in 7600 terminology. so while you can still do linerate, the minimum packet size to reach that is larger than when using a plain l3 subinterface.

on a plain l3 subinterface, there is only a single pass which means more pps performance in the npu.

cheers!

xander

tomislavcer
Level 1

OK Xander, thank you very much for explanation.

Last question :-) - do you have some numbers related to that or some testing experience - how much does that influence pps in real life ? For example, for same traffic with BVI's compared to traffic with subifs we have lets say 10% less pps or something like that ?

xthuijs
Cisco Employee

yeah it is a little more than that, bvi costs about 30-40% in pps performance.

this only means that if you want to reach line rate at 10G, instead of say 70 bytes minimal packet size to reach 10G linerate, you now need 130 bytes when using bvi.

cheers!

xander

aweaver7
Level 1

Xander, I have a server admin telling me that he is getting requests from both the active and standby routers. Do you know if anything changed recently in how DHCP requests are processed, hw vs. sw, using HSRP?

Aleksandar Vidakovic
Cisco Employee

I presume you are referring to a scenario where HSRP runs on the access-facing side. As DHCP requests are broadcast through the access network, both the active and standby HSRP peer will receive the DHCP request and act on it. If the topology is such as I presumed, have you considered MCLAG instead of HSRP?

If I got it wrong, can you please explain the topology better?

Regards,

Aleksandar

jielai001
Community Member

Hello Alexander,

It's a great post. I have a question about DHCPv6 relay and proxy.

I want to use the ASR9000 as a DHCPv6 relay agent. From what I have found on the net, it seems I must use proxy if I want to have PD assigned to subscribers.

Will it be possible to use DHCPv6 relay instead of proxy, and yet have subscribers still able to get an IPv6 PD?

xthuijs
Cisco Employee

ah thanks! and yes we do have that capability in XR 5.2.2

check here and some more config detail here

cheers!

xander

Hello

For multicast on the ASR9001, when I do a "show pim interface", I have the following state:

RP/0/RSP0/CPU0:BNG#sh pim interface
Mon Mar 14 22:27:13.614 NCT

PIM interfaces in VRF default
Address               Interface                     PIM  Nbr   Hello  DR    DR
                                                         Count Intvl  Prior
192.168.20.100          Bundle-Ether10.550            on   1     30     1     this system
192.168.20.100          Bundle-Ether10.502            on   1     30     1     this system
192.168.100.100        Bundle-Ether10.3644           on   1     30     1     this system
192.168.100.100        Bundle-Ether10.3658           on   1     30     1     this system
192.168.100.100        Bundle-Ether10.3672           on   1     30     1     this system
192.168.100.100        Bundle-Ether10.550.ip71       off  0     30     1     not elected
192.168.100.100        Bundle-Ether10.550.ip73       off  0     30     1     not elected
192.168.100.100        Bundle-Ether10.502.ip173      off  0     30     1     not elected
192.168.100.100        Bundle-Ether10.502.ip181      off  0     30     1     not elected
192.168.20.100         Bundle-Ether10.456.pppoe899   on   1     30     1     this system
192.168.20.100         Bundle-Ether10.456.pppoe902   on   1     30     1     this system
192.168.20.100         Bundle-Ether10.456.pppoe903   on   1     30     1     this system

On all sub-interfaces with PIM=on, multicast works well (PPPoE sessions).

On all sub-interfaces with PIM=off, multicast is not working (IPoE sessions).

So my silly question is: how do I activate PIM to "on" on IPoE sessions?

Thanks for any help with my problem.

Jean

xthuijs
Cisco Employee

hi jean,

yah the thing is we can replicate for pppoe subs, since they have their own p2p encapsulation.

for subscribers off a bundle-ether with ip directly, if 2 subs ask for the same stream, and I had mcast enabled, you'd technically get 2 copies of the same ether header

so for ipsubs, we generally do a mcast vlan and pull streams towards the dslam/gpon and we replicate from there to the actual subs.

this q has little to do with dhcp proxy and forwarding :) it is possibly better to open a new discussion on the main page, so more people can find it and it's easier to contribute, just a thought...

cheers!

xander

smailmilak
Level 4

Hi Jean,

do you have this config?

multicast-routing
address-family ipv4
interface all enable

I have never tested mcast for IPoE but this should help.

We had a very similar question a few weeks ago. Let me try to find it.

Edit: It was you :D

Hi Smail

Yes, already done but no change on that. PIM state is still off.

It would be great if you found the solution.

Thanks

Jean
