
Troubleshooting ARP on ASR9000 Routers




Introduction

This document provides an overview of ARP operation and troubleshooting on IOS XR platforms, using the ASR9000 as an example.


ARP Architecture

IOS XR is a fully distributed operating system, designed for a two-stage forwarding operation. Two stage forwarding allows for a clear separation of ingress and egress feature processing. As the imposition of a Layer 2 header on the packet is an egress operation, only the line card that hosts the egress interface needs to know the Layer 2 encapsulation for packets that have to be forwarded out of that interface. As a consequence, ARP and adjacency tables are local to a line card.

L3 control plane overview

The special cases are Bundle-Ethernet interfaces and Bridged Virtual Interfaces (BVI).

In the case of Layer 3 Bundle-Ethernet interfaces, ARP and adjacency information is present on all line cards that host bundle members. That means that ARP information must be kept in sync across these line cards.

In the case of a BVI, all Layer 3 processing is performed on the ingress line card. That means that ARP information must be kept in sync across all line cards in the system.


ARP Data Plane

Incoming/Outgoing ARP Packets Path

Incoming ARP Packet Path

Incoming ARP packets pass through the following processing steps:

  1. Ingress NP classifies the packet to an interface and detects that it's an ARP packet.
  2. Packet is subjected to Local Packet Transport Services (LPTS) processing:
    1. the packet goes through the dedicated policer for ARP packets (all ARP traffic is subject to this policer; the NP ucode doesn't validate the request and punts ARP requests directly to the LC CPU), and
    2. if not dropped by the policer, packet is punted to the local line card CPU
  3. Packet is received by SPP and passed on to the ARP process via spio.
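The classification in step 1 keys off the Ethernet type field (0x0806 for ARP). As an illustrative sketch only (the real classification happens in NP microcode, not on a CPU), a minimal parser that recognises ARP frames and their opcode could look like this:

```python
import struct

ETHERTYPE_ARP = 0x0806

def classify_arp(frame: bytes):
    """Return the ARP opcode name if the frame is ARP, else None.

    Sketch for illustration; NP microcode performs this in hardware.
    """
    if len(frame) < 22:
        return None
    (ethertype,) = struct.unpack_from("!H", frame, 12)   # bytes 12-13 of the Ethernet header
    if ethertype != ETHERTYPE_ARP:
        return None
    (opcode,) = struct.unpack_from("!H", frame, 20)      # ARP opcode at offset 20
    return {1: "request", 2: "reply"}.get(opcode, "other")

# Build a minimal ARP request frame: MACs, ethertype, then the ARP header
arp_req = (
    b"\xff" * 6                                    # broadcast destination MAC
    + b"\x00\x11\x22\x33\x44\x55"                  # source MAC (made up for the example)
    + struct.pack("!H", ETHERTYPE_ARP)             # ethertype = ARP
    + struct.pack("!HHBBH", 1, 0x0800, 6, 4, 1)    # hw/proto type, sizes, opcode=1 (request)
)
print(classify_arp(arp_req))  # request
```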

Outgoing ARP packets are injected from the RP CPU to the NP that hosts the egress interface. In the case of bundles, the RP performs the load-balancing hash calculation and injects the packet toward the selected bundle member. In the case of BVI, the RP sends the ARP packet toward NP0 of the LC dedicated to handling packets injected into the BVI.

Tracing the Path of Incoming ARP Packets

Check the NP counters to see the number/rate of ARP packets punted to LC CPU and the number/rate of ARP packets dropped by the LPTS ARP policer. 

RP/0/RSP0/CPU0:ASR9006-H#sh controllers np counters all location 0/1/CPU0 | i "for NP|ARP"
Show global stats counters for NP0, revision v2
Show global stats counters for NP1, revision v2
  776  ARP                    71670147         500
  777  ARP_EXCD                 125143           0
Show global stats counters for NP2, revision v2
  776  ARP                           4           0
Show global stats counters for NP3, revision v2
  776  ARP                          19           0
Show global stats counters for NP4, revision v2
Show global stats counters for NP5, revision v2
  776  ARP                   130760261         929
  777  ARP_EXCD              613523651        4422
Show global stats counters for NP6, revision v2
Show global stats counters for NP7, revision v2

In the above example NP5 is receiving excessive ARP packets. We can determine that from the punt policer "ARP_EXCD" counter, which shows a high rate in the second column (the first column is the packet count; the second column is the rate computed from the difference between two successive runs of the show command and the time elapsed between them).
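When you monitor many NPs, the EXCD check above can be scripted. A minimal sketch, assuming the output format shown above (the parsing regex and threshold are illustrative assumptions, not a Cisco tool):

```python
import re

# Sample lines in the format of "sh controllers np counters" output:
# counter offset, counter name, packet count, computed rate in pps
sample = """\
Show global stats counters for NP1, revision v2
 776  ARP        71670147   500
 777  ARP_EXCD     125143     0
Show global stats counters for NP5, revision v2
 776  ARP       130760261   929
 777  ARP_EXCD  613523651  4422
"""

def find_hot_nps(output: str, threshold_pps: int = 100):
    """Return (NP, rate) pairs whose ARP_EXCD (LPTS ARP policer drop) rate exceeds the threshold."""
    hot, current_np = [], None
    for line in output.splitlines():
        m = re.search(r"counters for (NP\d+)", line)
        if m:
            current_np = m.group(1)
            continue
        m = re.match(r"\s*\d+\s+ARP_EXCD\s+(\d+)\s+(\d+)", line)
        if m and int(m.group(2)) > threshold_pps:
            hot.append((current_np, int(m.group(2))))
    return hot

print(find_hot_nps(sample))  # [('NP5', 4422)]
```

A hit means that NP's interfaces deserve a closer look with "sh controllers np ports", as shown next.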

Check which interfaces are hosted by this NP:

RP/0/RSP0/CPU0:ASR9006-H#sh controllers np ports all location 0/1/CPU0
		Node: 0/1/CPU0:
----------------------------------------------------------------
NP Bridge Fia                       Ports
-- ------ --- ---------------------------------------------------
[..output omitted...]
5  --     2   TenGigE0/1/0/15, TenGigE0/1/0/16, TenGigE0/1/0/17

The LPTS ARP policer rate can be adjusted per line card. You can configure different rates on different line cards. All NPs on a line card are programmed with the same rate.

!
lpts punt police location 0/1/CPU0 protocol arp rate 400
!

It is important to understand that this per-line-card rate is programmed PER NPU. That means that if you have, say, a 24x10G Typhoon line card, which has 8 NPUs, the possible aggregate rate is 8 x 400 pps toward the LC CPU.
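Because the rate is applied per NPU, the worst-case punt load on the LC CPU is the configured rate multiplied by the NPU count. A back-of-the-envelope helper (the line-card-to-NPU-count mapping here is an assumption for illustration):

```python
def aggregate_arp_punt_rate(per_np_rate_pps: int, np_count: int) -> int:
    """Worst-case ARP punt rate toward the LC CPU for one line card."""
    return per_np_rate_pps * np_count

# Illustrative NPU count for the line card type mentioned above
np_counts = {"24x10G Typhoon": 8}

rate = aggregate_arp_punt_rate(400, np_counts["24x10G Typhoon"])
print(rate)  # 3200 pps worst case toward the LC CPU
```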

 

The next step is to check the SPP statistics for ARP process. To do that you need to find first the ARP job ID:

RP/0/RSP0/CPU0:ASR9006-H#sh processes arp location 0/1/CPU0
Fri Jan 29 06:43:27.532 MyZone
                  Job Id: 115
                     PID: 520329
         Executable path: /iosxr-fwding-4.3.2/bin/arp

Then check the SPP stats for that job. Keep in mind that the ARP process may have a different job ID on different line cards.

RP/0/RSP0/CPU0:ASR9006-H#show spp client punt status location 0/1/CPU0
Fri Jan 29 06:42:16.482 MyZone
Clients
=======
[..output omitted...]
ASR9K SPIO client stream ID 32, JID 115
  Punt Queue key 0x02000020
    20703822 messages sent, 0 dropped, 9 on queue
    *** No messages read since last pulse - client may not be responding ***
    SPP last pulsed at      06:42:15.034 Jan 29 16 MyZone
    Pulse last delivered at 06:42:15.033 Jan 29 16 MyZone
    Message last read at    06:42:15.033 Jan 29 16 MyZone
    Pulse retries succeeded after error: 0
    Pulse retries failed after error:    0
    Retry attempts for current pulse:    0

And finally check the ARP traffic statistics and cache (i.e. ARP table) status:

RP/0/RSP0/CPU0:ASR9006-H#show arp traffic location 0/1/CPU0
Fri Jan 29 06:47:49.574 MyZone

ARP statistics:
  Recv: 68472610 requests, 2244197 replies
  Sent: 600365116 requests, 68493360 replies (0 proxy, 0 local proxy, 22552 gratuitous)
  Resolve requests rcvd: 119846142
  Resolve requests dropped: 1000
  Errors: 0 out of memory, 0 no buffers

ARP cache:
  Total ARP entries in cache: 211801
  Dynamic: 65002, Interface: 31, Standby: 0
  Alias: 0,   Static: 0,    DHCP: 0

  IP Packet drop count for node 0/1/CPU0: 119844149

  Total ARP-IDB:64

The above command output can be narrowed down to display the stats per VRF or even per interface:

RP/0/RSP0/CPU0:ASR9006-H#sh arp vrf VPN_IPTV traffic BVI1051 location 0/1/CPU0
Fri Jan 29 06:50:44.765 MyZone

ARP statistics:
  Recv: 68389227 requests, 1434757 replies
  Sent: 600977715 requests, 68399409 replies (0 proxy, 0 local proxy, 12527 gratuitous)
  Resolve requests rcvd: 120004816
  Resolve requests dropped: 1000
  Errors: 0 out of memory, 0 no buffers

ARP cache:
  Total ARP entries in cache: 196216
  Dynamic: 65001, Interface: 11, Standby: 0
  Alias: 0,   Static: 0,    DHCP: 0

  IP Packet drop count for BVI1051: 120002816

Tracing the Path of Outgoing ARP Packets

In case of physical (sub)interfaces the RP will inject the packet towards the NP that hosts the interface.

In case of BVI run the following command to determine the NP to which the ARP packets will be injected:

RP/0/RSP0/CPU0:ASR9006-H#sh adjacency BVI-fia
Fri Jan 29 19:10:47.819 CET
Service card for BVI interfaces on node 0/RSP0/CPU0
                MPLS packets: Slot 0/0/CPU0 VQI 0x58
                Non-MPLS packets: Slot 0/0/CPU0 VQI 0x58


                        ----- Service Card List -----

                        0/RSP0/CPU0                     VQI: Not Applicable
                        0/RSP1/CPU0                     VQI: Not Applicable
                        0/0/CPU0                        VQI: 0x58
                        0/FT0/SP                        VQI: Not Applicable

Data Packets That Require ARP Resolution

When the line card that performs the egress IPv4 processing receives a packet for an IPv4 address for which there is no adjacency information, the packet must be punted to slow path for ARP resolution.

In case of L3 physical or L3 bundle (sub)interfaces, packets received from the fabric are punted for ARP resolution if there is no adjacency for the given prefix. This is because the egress line card performs the L3 egress processing.

In case of BVI, it's the ingress LC that performs the L3 egress processing. In that case the packet received from the PHY is punted for ARP resolution if there is no adjacency for the given prefix.

ARP: data packets that require ARP resolution

Such packets pass through the following processing steps:

  1. Packet is subjected to Local Packet Transport Services (LPTS) processing:
    1. the packet goes through the dedicated policer for unresolved adjacencies and
    2. if not dropped by the policer, packet is punted to the local line card CPU
  2. The packet is received by NetIO. NetIO has a dedicated queue for packets that require ARP resolution.
  3. NetIO triggers an ARP resolution request for the given IPv4 address.
  4. The original IPv4 packet sits in NetIO queue until ARP resolution is completed and adjacency created, after which it's injected towards the NP.

Check the PUNT_ADJ and PUNT_ADJ_EXCD  NP counters and the LPTS static policer for PUNT_ADJ:

RP/0/RSP0/CPU0:ASR9006-H#sh controllers np counters np0 location 0/1/CPU0 | i "FrameValue|PUNT_ADJ"
Offset  Counter              FrameValue  Rate (pps)
   850  PUNT_ADJ                  17459          89
RP/0/RSP0/CPU0:ASR9006-H#sh lpts pifib hardware static-police location 0/1/CPU0 | i "Dropped|PUNT_ADJ"
Punt Reason  SID        Flow Rate  Burst Rate  Accepted  Dropped  Destination
PUNT_ADJ     NETIO_LOW        100         100     29828        0  Local
RP/0/RSP0/CPU0:ASR9006-H#

 

Next, check the NetIO statistics and look at the row for the ARP queue.

RP/0/RSP0/CPU0:ASR9006-H#sh netio clients location 0/1/CPU0
Fri Jan 29 07:24:09.054 MyZone

Counters                  Errors/Total
----------------------------------------
Output                        0/286038
Input                        0/139239231
Puntback                      0/0
Jump                          0/0
Driver Output                 0/139400

XIPC queues              Dropped/Queued    Cur/High/Max
------------------------------------------------------------
OutputL                       0/146665        0/1/7000
OutputH                       0/0             0/0/3500
Puntback                      0/0             0/0/7000

                   Input              Punt         XIPC InputQ      XIPC PuntQ
ClientID         Drop/Total        Drop/Total      Cur/High/Max    Cur/High/Max
--------------------------------------------------------------------------------
ipv6_icmp           0/0               0/0             0/0/1000        0/0/1000
icmp                0/0               0/363608        0/0/1000        0/2/1000
clns              L 0/0               0/0           L 0/0/1000        0/0/0
                  H 0/0                             H 0/0/1000
ipv6_io             0/0               0/0             0/0/1000        0/0/1000
ipv6_nd             0/0               0/0             0/0/1500        0/0/1000
l2snoop             0/0               0/0             0/0/1000        0/0/0
ether_sock        0/3469148416        0/0
tp_oam              0/0               0/0             0/0/1000        0/0/1000
icmpv6_unreach_jump        0/0               0/0
udp               L 0/0               0/0           L 0/0/4000        0/0/0
                  H 0/0                             H 0/0/4000
arp                 0/0        17025280/138794437      0/0/1000    293/1000/1000
mpls_io             0/0               0/0             0/0/1000        0/0/1000
ipv4                0/0               0/0             0/0/1000        0/0/1000
ipv6                0/0               0/0             0/0/1000        0/0/1000

Key:
  L = queue for lower priority packets
  H = queue for higher priority packets

If you want to look into the contents of packets punted for ARP resolution check the ARP job ID on the line card in question and use it in this command:

RP/0/RSP0/CPU0:ASR9006-H#sh packet-memory job 115 data 128 location 0/1/CPU0
Fri Jan 29 07:26:03.500 MyZone
Pakhandle   Job Id  Ifinput          Ifoutput  dll/pc      Size  NW Offset
0xdd8e18f8  115     TenGigE0/1/0/17  BVI1051   0x4e78bd78  60    14

  0000000: 00082C00 1058C800 00000000 04DA4500 002E0000
  0000020: 00003F3D 8BFA1600 00160D00 CC830001 02030405
  0000040: 06070809 0A0B0C0D 0E0F1011 12131415 16171819
-----------------------------------------------------------------

The IPv4 packet header starts at offset 14 (0xE). You can use any packet decoder application to decode it offline.
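As a sketch of such offline decoding, the dump above can be parsed with a few lines of Python. The offsets follow the "NW Offset 14" shown in the command output:

```python
# Hex bytes copied from the packet-memory dump above
raw = bytes.fromhex(
    "00082C001058C8000000000004DA"   # 14 bytes preceding the IPv4 header
    "4500002E000000003F3D8BFA"       # IPv4 header, first 12 bytes
    "160000160D00CC83"               # source and destination addresses
)

off = 14                                        # NW Offset from the dump
version = raw[off] >> 4                         # IP version
ihl = (raw[off] & 0x0F) * 4                     # header length in bytes
total_len = int.from_bytes(raw[off+2:off+4], "big")
ttl = raw[off+8]
proto = raw[off+9]
src = ".".join(str(b) for b in raw[off+12:off+16])
dst = ".".join(str(b) for b in raw[off+16:off+20])

print(version, ihl, total_len, ttl, proto, src, dst)
# 4 20 46 63 61 22.0.0.22 13.0.204.131
```

This confirms the punted packet is an IPv4 packet toward 13.0.204.131, the address for which ARP resolution was triggered.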

If the NetIO ARP queue is getting overloaded, the following error is reported in the syslog:

LC/0/1/CPU0:Jan 27 06:38:12.445 : netio[270]: %PKT_INFRA-PQMON-6-QUEUE_DROP : Taildrop on XIPC queue 2 owned by arp (jid=115)

 


ARP Synchronisation

As explained in the introduction to this document, two types of virtual interfaces require ARP to be synchronised across line cards.

ARP entries associated with a Bundle-Ethernet (sub)interface are synchronised between line cards that host members of the bundle.  

ARP entries associated with a BVI interface are synchronised between all line cards in the system.

Interface type                                       ARP entries location                                      Synchronisation
Layer 3 physical interface or sub-interface          Local to the line card                                    No
Layer 3 Bundle-Ethernet interface or sub-interface   Local to all line cards that host members of the bundle   Between line cards that host the bundle members
Bridged Virtual Interface (BVI)                      All line cards                                            Between all line cards in the system

 

 

If any ARP synchronisation attempt fails, a message like the one below appears in the ARP trace. Intermittent sync failures may happen during bursts of ARP resolution requests. Persistent GSP failures indicate that the deployed ARP scale is likely too high.

RP/0/RSP0/CPU0:ASR9006-H#sh arp trace location 0/1/CPU0 | i GSP send failed
Fri Jan 29 08:29:24.420 MyZone
Jan 28 00:18:59.839 ipv4_arp/slow 0/1/CPU0 104# t1  ERR:   GSP send failed for 128 arp entries, message ID: 5140529: No space left on device

You can also check the GSP stats for ARP job to see the synchronisation traffic info:

RP/0/RSP0/CPU0:ASR9006-H#sh gsp stats client job 115 brief location 0/1/CPU0
Fri Jan 29 08:03:13.265 MyZone
Group stats collected during last 1156962 seconds:

115   arp                            Lookup        2     Create        7
                                       Sent 13863998    ms/call        0
                                        Rcv 13870904     BError      160
                                 RcvFragMsg        7   RcvFrags     3549
                                    BEagain        0      Etime        0
                                      Calls 13878446       Prod 13878340
                                        EOK 13878340 Stat-OK/ERR 13856824/0
                                      Bytes 27500275     Pulsed 13815236
                                    NbError      106   NbEagain        0
                                     MTries    57170     PTries        0
                                     PulErr        0 RcvFragErr        0
------------------------------------------------------------------
                                     Lookup        2     Create        7
                                       Sent 13863998    ms/call        0
                                        Rcv 13870904      Error      160
                                  Frag-Msgs        7      Frags     3549
                                     Eagain        0      Etime        0
                                      Calls 13878446       Prod 13878340
                                        EOK 13878340 Stat-OK/ERR 13856824/0
                                      Bytes 27500275     Pulsed 13815236
                                      Error      106     Eagain        0
                                     MTries    57170     PTries        0
                                     PulErr        0

High Scale ARP Deployment Considerations

There is no hard limit imposed on the ARP table size. The ARP table can grow until the ARP process reaches the dynamic memory limit.

RP/0/RSP0/CPU0:ASR9006-H#sh processes memory detail location 0/1/CPU0
Fri Jan 29 08:06:54.272 MyZone
JID    Text       Data       Stack      Dynamic    Dyn-Limit  Shm-Tot    Phy-Tot    Process
------ ---------- ---------- ---------- ---------- ---------- ---------- ---------- -------
115          264K       516K        68K        45M       300M        21M        45M  arp

ARP scale tested at Cisco in all deployment scenarios is 128k entries per LC. It doesn't mean you can't deploy a higher ARP scale, but it does mean that dedicated testing of your deployment scenario is highly recommended, especially if your design solution requires ARP synchronisation. 

 

If the ARP scale is high, the execution of the "show arp" command may slow down the ARP process and prevent it from responding in time to ARP requests or resolution requests. To work around that, you can check the adjacency entries instead of the ARP entries.

Remember that a show command with a pipe include (e.g. show arp | i 0000.1111.2222) still processes the whole table and only *displays* the lines matching the regex. In other words, pipe include doesn't improve the scalability of the command; it merely suppresses the display of non-matching data.

 

ARP command                              Equivalent adjacency command
sh arp traffic location all              sh adjacency summary location all
sh arp <interface> location <location>   sh adjacency <interface> detail location <location>

 

RP/0/RSP0/CPU0:ASR9006-H#sh adjacency summary location all
Fri Jan 29 10:28:48.717 MyZone
-------------------------------------------------------------------------------
1/3/CPU0
-------------------------------------------------------------------------------

Adjacency table (version 3380891) has 182716 adjacencies:
    182595 complete adjacencies
    121 incomplete adjacencies
    382694 deleted adjacencies in quarantine list
    182390 adjacencies of type IPv4
        182390 complete adjacencies of type IPv4
          0 incomplete adjacencies of type IPv4
        382694 deleted adjacencies of type IPv4 in quarantine list
          0 interface adjacencies of type IPv4
          6 multicast adjacencies of type IPv4

What does this tell you:

  • A high version number indicates heavy ARP activity, causing frequent changes in the adjacency table.
  • Incomplete adjacencies indicate traffic received from northbound interfaces toward non-existent or disconnected devices in the attached network.
  • Deleted adjacencies indicate ARP entries that timed out and could not be refreshed because no device responded to our ARP request.

If you want to check the MAC address associated with the IPv4 address, check the details of the adjacency:

RP/0/RSP0/CPU0:ASR9006-H#sh adjacency BVI1051 detail location 1/3/CPU0
Fri Jan 29 10:33:28.527 MyZone
Interface                   Address                  Version  Refcount Protocol
BV1051                      11.0.253.150             2437180         1(    0) ipv4
                           c4229400fd930000011101110800
                           mtu: 9180, flags 1 0 0
                           0 packets, 0 bytes

The first 6 octets in the adjacency dump are the destination MAC address, i.e. the MAC address of the device that owns the displayed IPv4 address.
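For example, the encapsulation string shown above can be split programmatically. A small sketch, assuming the standard Ethernet layout of destination MAC, source MAC, then ethertype:

```python
def decode_adj_encap(encap: str):
    """Split an Ethernet adjacency encapsulation dump into its fields.

    Layout assumed: 6-byte destination MAC, 6-byte source MAC, 2-byte ethertype.
    """
    dot = lambda h: ".".join(h[i:i+4] for i in range(0, 12, 4))  # Cisco dotted format
    return {
        "dst_mac": dot(encap[0:12]),     # MAC of the device owning the IPv4 address
        "src_mac": dot(encap[12:24]),    # the router's own MAC on that interface
        "ethertype": encap[24:28],       # 0800 = IPv4
    }

print(decode_adj_encap("c4229400fd930000011101110800"))
# {'dst_mac': 'c422.9400.fd93', 'src_mac': '0000.0111.0111', 'ethertype': '0800'}
```

Here c422.9400.fd93 is the MAC that ARP resolved for 11.0.253.150 in the adjacency output above.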

 

In high scale ARP deployments with HSRP/VRRP, you may want to synchronise the ARP entries between the HSRP/VRRP peers. Starting with XR release 5.3.2, you can use the ARP Geographical redundancy (ARP sync) feature for this purpose:

RP/0/RSP0/CPU0:ASR9006-H(config)#arp redundancy group 1 ?
  interface-list    List of Interfaces for this Group
  peer              Peer Address for this Group
  source-interface  Source interface for Redundancy Peer Communication
  

In a BNG deployment scenario with secondary IPv4 addresses on subscriber access interfaces (or on the Loopback associated with them), the ARP table will grow quickly. The good thing about subscriber interfaces is that the MAC<->IPv4 mapping is known through either DHCP or PPPoE, so if you are running IOS XR release 5.3.3 or later, you can prevent the creation of ARP entries associated with subscriber interfaces by configuring the command below.

This command doesn't impair ARP operation (query, reply, proxy); it only prevents the creation of ARP cache entries. Committing this command to the configuration doesn't clear the existing ARP entries, so you have to clear the subscriber sessions before the commit.

!
subscriber arp scale-mode-enable
!  

 

 

 

As a recap, in high scale environments we recommend the following:

  • Tighten the LPTS policer rate. A ballpark number is 1000 pps per LC; for example, if the LC has 4 NPs, set the LPTS ARP policer to 250 pps.
  • Monitor the ARP counters at the NP.
  • Monitor the NetIO ARP resolution queue.
  • Monitor the adjacency summary info.
  • If using BVI, look for GSP failures in the ARP traces.
  • In BNG deployments on IOS XR release 5.3.3 or later, configure "subscriber arp scale-mode-enable".

 

 


Related Information

For more info on route scale, refer to this excellent document:

ASR9000/XR Understanding Route scale.

 

Comments
steinarrimestad
Beginner

Hi Aleksander.

Thanks for this detailed writeup

Do you have any SNMP OIDs that can be used for this monitoring you recommend such as

- netio arp clients
- xipc InputQ and PuntQ cur/high/max and dropped for arp
- drop counters for ARP and ARP_EXCD
- arp adjacency summary information

Regards,

Aleksandar Vidakovic
Cisco Employee

hi Steinar,

The ipNetToPhysicalTable can be used to poll the ARP table content. At high ARP scale we don't actually recommend that, but with the ARP scale enhancement in BNG that you are using, the ARP scale won't be that high any more.

There is no SNMP OID for the summary information. Would it help if we evaluated implementing an XML schema?

Regards,

Aleksandar

Alexey Stytsenko
Beginner

Hi Aleksandar,

All ports belonging to NP3 of an A9K-8X100GE-L-SE have the same issue with ARP (and with all other packets too, but let's consider ARP). On the port (TenGigE0/1/0/6/0) I see ARP requests from the connected device to the IP address assigned to this port itself:

RP/0/RP0/CPU0:ASR9k-1#monitor np counter DBG_RSV_EP_L_RSV_ING_PUNT  np3 location 0/1/CPU0
Thu May 12 23:09:04.414 GMT

Note: Original packet is not dropped.


Warning: A mandatory NP reset will be done after monitor to clean up.
         This will cause ~150ms traffic outage. Links will stay Up.
 Proceed y/n [y] > y
 Monitor DBG_RSV_EP_L_RSV_ING_PUNT on NP3 ... (Ctrl-C to quit)

Thu May 12 23:09:06 2016 -- NP3 packet

 From TenGigE0_1_0_6_0: 60 byte packet
0000: ff ff ff ff ff ff 00 fe c8 34 85 3f 08 06 00 01   .......~H4.?....
0010: 08 00 06 04 00 01 00 fe c8 34 85 3f c0 a8 b1 23   .......~H4.?@(1#
0020: 00 00 00 00 00 00 c0 a8 b1 22 00 00 00 00 00 00   ......@(1"......
0030: 00 00 00 00 00 00 00 00 00 00 00 00               ............    

                (count 1 of 1)


Cleanup: Confirm NP reset now (~150ms traffic outage).
 Ready? [y] > y

But the device doesn't send responses. I tried the command given in this article to check the number of ARP packets received on the interface and it shows 0, despite hundreds of ARP requests received on the port:

RP/0/RP0/CPU0:ASR9k-1#show arp traffic tenGigE 0/1/0/6/0 location 0/1/CPU0
Thu May 12 23:12:48.147 GMT

ARP statistics:
  Recv: 0 requests, 0 replies
  Sent: 0 requests, 4 replies (0 proxy, 0 local proxy, 4 gratuitous)
  Resolve requests rcvd: 0
  Resolve requests dropped: 0
  Errors: 0 out of memory, 0 no buffers, 0 out of subnet

ARP cache:
  Total ARP entries in cache: 1
  Dynamic: 0, Interface: 1, Standby: 0
  Alias: 0,   Static: 0,    DHCP: 0

  IP Packet drop count for TenGigE0_1_0_6_0: 0

The port itself doesn't show any errors/drops. The NP counters don't show any drops either.

Ports belonging to other NPs on the same line card don't have this issue. Can you please suggest where to look next, and how? The NP3 - LC CPU connection?

Aleksandar Vidakovic
Cisco Employee

hi Alexey,

do you see the ARP punt counter increment on NP3 of 0/1?

If the counter doesn't increment and the router is running 5.3.3, run "sh controller np capture" to see why these packets are dropped by the NP. There is another document on supportforums that explains how to use this command.

Did you open a TAC SR already?

regards,

Aleksandar

Alexey Stytsenko
Beginner

Hi Aleksandar,

Thanks for the answer! No, there are no ARP punt or drop counters on NP3; I mean they are zero. And the code, unfortunately, is 5.3.1.

Yes, SR opened.

memphis003
Beginner

Hi Aleksandar,

In case we have L2-connected subscribers and both DHCP and unknown-source FSOLs are configured under the access interface, can "subscriber arp scale-mode-enable" prevent subscriber ARP entry creation (in case of DHCP binding absence)?

Also, are the "arp learning disable" and "subscriber arp scale-mode-enable" commands mutually exclusive? Thanks in advance.

Tom

Aleksandar Vidakovic
Cisco Employee

hi Tom,

when subscriber interfaces are IPv4 unnumbered, they inherit all attributes from the associated Loopback. As a consequence we create a local ARP entry for every secondary IPv4 address configured on the Loopback interface. With 10k subscribers and 19 secondary IPv4 addresses on Loopback, this explodes the local ARP entries to 200k.

The "subscriber arp scale-mode-enable" command prevents the creation of any ARP entry against subscriber interfaces. The "arp learning disable" command prevents the learning, but doesn't prevent the local ARP entries.

I hope this answers the question.

/Aleksandar

Serge Krier
Cisco Employee

In addition, in recent versions (5.3.3) the following command is useful to display the ARP resolution history.

RP/0/RSP0/CPU0:ASR9K#sh arp resolution history location 0/1/cPU0
Wed Nov 9 13:03:55.026 CEST
Timestamp Interface Address Event
--------- --------- ------- -----
Nov 8 18:08:41.742 UNKNOWN intf 0x00000420 10.0.28.1 Res Ok G ARP 5087.893c.8f20
Nov 8 18:09:13.521 Gi0/1/1/2 10.0.48.70 Res Req
Nov 8 18:09:18.061 Gi0/1/1/2 10.0.48.70 Res Fail
Nov 8 18:09:19.290 Gi0/1/1/2 10.0.48.70 Res Req #2
Nov 8 18:09:23.901 Gi0/1/1/2 10.0.48.70 Res Fail
Nov 8 18:09:25.342 Gi0/1/1/2 10.0.48.70 Res Req #3
Nov 8 18:09:29.949 Gi0/1/1/2 10.0.48.70 Res Fail
Nov 8 18:09:31.464 Gi0/1/1/2 10.0.48.70 Res Req #4
Nov 8 18:09:36.061 Gi0/1/1/2 10.0.48.70 Res Fail
Nov 8 20:14:24.881 UNKNOWN intf 0x000fc4a0 10.0.28.1 Res Req
Nov 8 20:14:29.331 UNKNOWN intf 0x000fc4a0 10.0.28.1 Res Fail
Nov 8 20:14:29.886 UNKNOWN intf 0x000fc4a0 10.0.28.1 Res Req #2
Nov 8 20:14:34.401 UNKNOWN intf 0x000fc4a0 10.0.28.1 Res Fail
Nov 8 20:14:34.891 UNKNOWN intf 0x000fc4a0 10.0.28.1 Res Req #3
Nov 8 20:14:39.434 UNKNOWN intf 0x000fc4a0 10.0.28.1 Res Fail
Nov 8 20:14:39.896 UNKNOWN intf 0x000fc4a0 10.0.28.1 Res Req #4
Nov 8 20:14:44.647 UNKNOWN intf 0x000fc4a0 10.0.28.1 Res Fail
Nov 8 20:14:44.901 UNKNOWN intf 0x000fc4a0 10.0.28.1 Res Req #5
Nov 8 20:14:48.616 UNKNOWN intf 0x000fc4a0 10.0.28.1 Res Ok Reply 5087.893c.8f20