Detection timer for Async mode: negotiated Rx interval * received multiplier
Detection timer for Demand mode: configured Tx interval * configured multiplier
The above formulas are from the BFD RFC (RFC 5880).
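A minimal sketch of those two formulas in Python (the function names are mine, and the sample values are illustrative, not from any configuration below):

```python
def detection_time_async(negotiated_rx_ms: int, received_multiplier: int) -> int:
    """Async mode: negotiated Rx interval * multiplier received from the peer."""
    return negotiated_rx_ms * received_multiplier

def detection_time_demand(configured_tx_ms: int, configured_multiplier: int) -> int:
    """Demand mode: locally configured Tx interval * locally configured multiplier."""
    return configured_tx_ms * configured_multiplier

print(detection_time_async(500, 3))   # 1500 ms
print(detection_time_demand(400, 5))  # 2000 ms
```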
What if BFD echo is enabled? Let's say the slow timer is configured and full detection control is given to the echo packets. I can see only a minimum echo interval field in the packet, as well as in the command (it is not defined as tx/rx). So if I'm sending echo packets at some frequency, on what basis is the detection time calculated?
R1(config-if)#bfd interval 400 minrx 400 multiplier 5
R1(config-if)#bfd echo interval 1500
R1(config)#bfd slow-timer 30000
R2(config-if)#bfd interval 500 minrx 300 multiplier 7
R2(config-if)#bfd echo interval 1000
R2(config)#bfd slow-timer 30000
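For reference, the RFC 5880 negotiation rules can be applied to the R1/R2 async values above (a sketch of the control-packet session only, not the echo side; the helper name is mine):

```python
def negotiated_interval_ms(sender_tx: int, receiver_min_rx: int) -> int:
    # Per RFC 5880, the actual interval between control packets is the greater
    # of the sender's desired tx interval and the receiver's required min rx.
    return max(sender_tx, receiver_min_rx)

r1 = {"tx": 400, "min_rx": 400, "mult": 5}
r2 = {"tx": 500, "min_rx": 300, "mult": 7}

# Interval at which each side actually transmits control packets:
r1_tx = negotiated_interval_ms(r1["tx"], r2["min_rx"])  # max(400, 300) = 400 ms
r2_tx = negotiated_interval_ms(r2["tx"], r1["min_rx"])  # max(500, 400) = 500 ms

# Detection time = remote system's multiplier * interval of the packets it sends us:
r1_detect = r2["mult"] * r2_tx  # 7 * 500 = 3500 ms
r2_detect = r1["mult"] * r1_tx  # 5 * 400 = 2000 ms
print(r1_detect, r2_detect)
```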
The detection time calculation is based on the value of the "Detect Mult" field received from the remote system and the negotiated transmit interval.
Detection Time = Detection_Multiplier x Negotiated_Transmit_Interval
If the Detection Time passes without receiving a control packet, the session is declared down.
Local negotiated async tx interval: 2 s
Remote negotiated async tx interval: 2 s
Desired echo tx interval: 100 ms; local negotiated echo tx interval: 100 ms
Echo detection time: 300 ms (100 ms * 3); async detection time: 6 s (2 s * 3)
In this case the echo detection time comes from the 100 ms min rx interval and the x3 multiplier.
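Those numbers can be checked directly (a sketch; the x3 multiplier and the intervals are the ones quoted in the example above):

```python
MULT = 3  # multiplier assumed from the example above

echo_interval_ms = 100    # negotiated echo tx interval
async_interval_ms = 2000  # negotiated async tx interval (2 s)

echo_detect_ms = echo_interval_ms * MULT    # 300 ms
async_detect_ms = async_interval_ms * MULT  # 6000 ms = 6 s
print(echo_detect_ms, async_detect_ms)
```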
Thank you for the reply.
As per your statement, the detection timer when echo mode is enabled is "received multiplier * negotiated tx".
If possible, can you paste the detailed output from a Cisco device?
I have observed different behavior on another vendor's device, where it was "locally configured echo interval * locally configured multiplier".
Hi @sivam siva,
Let me clarify a bit.
With echo mode you always have two kinds of BFD packets: echo packets and control packets.
On Cisco devices, in particular the ASR9k, for locally sourced echo packets the detection time is calculated as locally configured multiplier * local interval.
The locally configured multiplier is instead used by the remote peer to calculate the detection time for BFD control packets sent by the local peer.
By default, the control interval is set to 2 seconds when you enable echo mode.
However, for echo packets the detection time may be based on the higher of the two values negotiated between the local router (configured minimum interval or default value) and the remote router, if the remote counterpart uses a higher value than the local one.
For example, let's say we have two nodes configured with a 250 ms interval and multiplier x4:
-1 IOS XR device
-2 IOS XE device
If you increase the interval on the IOS XE device to 800 ms, you will see the following output on the IOS XR device:
RP/0/RP0/CPU0:ios#sh bfd session
Wed Nov 11 23:31:47.408 UTC
Interface Dest Addr Local det time(int*mult) State
Echo Async H/W NPU
------------------- --------------- ---------------- ---------------- ----------
Gi0/0/0/2 10.0.0.2 3200ms(800ms*4) 8s(2s*4) UP
Between 250 ms and 800 ms, the larger value is preferred.
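The 3200 ms echo detection time in that output follows directly from taking the larger of the two configured intervals (a sketch using the values from the example above):

```python
local_echo_ms = 250   # IOS XR configured minimum interval
remote_echo_ms = 800  # IOS XE interval after the increase
multiplier = 4

negotiated_echo_ms = max(local_echo_ms, remote_echo_ms)  # the larger value, 800 ms, wins
echo_detect_ms = negotiated_echo_ms * multiplier         # 3200 ms, matching the CLI output
print(echo_detect_ms)
```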
If instead you decrease the IOS XR interval to 100 ms and set a 50 ms interval on the IOS XE device, you will see that the XE node still uses a 50 ms echo interval for its BFD packets, while IOS XR keeps using its local value (100 ms).
In the case mentioned in my previous post, a 100 ms rx interval was negotiated per the statements and rules above.
So, basically, the echo detection time is the result of the negotiated echo tx interval and the locally configured multiplier, assuming we are speaking about the ASR9k (not all negotiations lead to the use of the locally configured tx interval). These concepts differ slightly on IOS/IOS XE platforms, as we saw above.
Hope I was able to clarify this point.