02-26-2019 06:26 AM
We have performance issues between storage devices (IBM V9000) that terminate on the 10G interfaces below.
This is part of our DR solution on this system. The slightest replication delay places the business at RISK if the DR copy is not kept up to date.
TE1/3/2 Redditch
TE1/3/4 Redditch
TE2/3/2 Redditch
TE2/3/4 Redditch
TE1/3/2 Woking
TE1/3/4 Woking
TE2/3/2 Woking
TE2/3/4 Woking
These devices/interfaces are in VRFs; the details are below:
WOK_VSS_WAN#
ip vrf HEM9A/HEMTST_v9000
rd 172.29.116.1:3
interface Vlan811
description HEM9A/HEMTST VRF Replication
mtu 9216
ip vrf forwarding HEM9A/HEMTST_v9000
ip address 172.18.251.85 255.255.255.248
ip flow monitor MONITOR1 input
interface TenGigabitEthernet1/3/2
description HEM9A v9000 Replication
switchport
switchport mode access
switchport access vlan 811
mtu 9216
interface TenGigabitEthernet2/3/2
description HEMTST v9000 Replication
switchport
switchport mode access
switchport access vlan 811
mtu 9216
RED_VSS_WAN#
ip vrf HEM9A/HEMTST_v9000
rd 172.29.116.1:3
interface Vlan811
description HEM9A/HEMTST VRF Replication
mtu 9216
ip vrf forwarding HEM9A/HEMTST_v9000
ip address 172.18.251.86 255.255.255.248
ip flow monitor MONITOR1 input
interface TenGigabitEthernet1/3/2
description HEM9A v9000 Replication
switchport
switchport mode access
switchport access vlan 811
mtu 9216
interface TenGigabitEthernet2/3/2
description HEMTST v9000 Replication
switchport
switchport mode access
switchport access vlan 811
mtu 9216
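As a quick sanity check on the config above: the two SVI addresses (172.18.251.85 at Woking and 172.18.251.86 at Redditch, mask 255.255.255.248 = /29) should sit in the same subnet so the VRF peers are directly adjacent on VLAN 811. A minimal sketch using Python's standard `ipaddress` module:

```python
import ipaddress

# SVI addresses taken from the Vlan811 config on each VSS pair;
# 255.255.255.248 is a /29 prefix.
wok = ipaddress.ip_interface("172.18.251.85/29")
red = ipaddress.ip_interface("172.18.251.86/29")

# Both ends must share the same network for direct L2 adjacency.
assert wok.network == red.network
print(wok.network)  # 172.18.251.80/29
```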
We have 2x 10G DCI links between the data centres, on ports Te1/1/5 and Te2/1/5, bundled as a Layer 2 EtherChannel. For some reason we see a maximum throughput of 1-1.2Gb/s, but never more than this; at those times we also see high retransmit counts. Right now, however, we are seeing no more than 300Mb/s. The system is configured to use anywhere up to 10Gb/s.
We need to verify that there are no throughput issues, either on the ports themselves or in the switch as a whole (backplane etc.), that could slow the system this much - i.e. throughput delays or drops.
We have other replication traffic that traverses the DCI links with no issues. However, the new storage solution will send high volumes of data continuously and may also be sensitive to millisecond-level delays - something else to bear in mind as part of your investigations.
02-26-2019 08:11 AM
At this moment I can only think of a couple of tests.
First, was the link working fine and the performance issue appeared suddenly, or has the issue been there from the beginning? (Just to understand the problem.)
1. Have you raised a ticket with the service provider to test the end-to-end link?
2. Since it is an L2 EtherChannel, for testing you can un-bundle the links and run throughput tests across each member link from both sides using iperf, one link at a time, so you know each link works correctly.
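For the per-link iperf tests, capturing the results as JSON (e.g. `iperf3 -c <far-end> -t 30 --json > result.json`) makes it easy to compare the two member links. A minimal sketch of pulling out the headline numbers - the embedded result below is a hypothetical example, not output from this network:

```python
import json

# Hypothetical iperf3 TCP result, trimmed to the fields read below.
sample = """
{
  "end": {
    "sum_received": {"bits_per_second": 9.41e9},
    "sum_sent": {"retransmits": 12}
  }
}
"""

result = json.loads(sample)
gbps = result["end"]["sum_received"]["bits_per_second"] / 1e9
retrans = result["end"]["sum_sent"]["retransmits"]
print(f"throughput: {gbps:.2f} Gb/s, retransmits: {retrans}")
```

A healthy 10G member link should show close to line rate with low retransmits; a link that tops out around 1Gb/s or shows heavy retransmits points at that member (or the carrier path behind it).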
02-26-2019 08:29 AM
02-26-2019 08:32 AM
Thanks for the feedback
I have uploaded the show tech output in case it assists in troubleshooting the issue.