02-23-2015 02:57 PM - edited 03-07-2019 10:48 PM
We have purchased a Nexus 5548UP switch for our SAN environment. I've only configured it for jumbo frames and am currently testing the performance, as some say to use jumbo frames while others say not to. My question is (and I know this is a loaded question): how do you go about configuring your SAN switches? What exactly do you configure on them, such as for performance tuning or the like? The reason I ask is that the switch is pretty much working, but I'd like to learn more about it, how to tune it, and how to take full advantage of it.
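For anyone following along, here is a hedged sketch of how a jumbo-frame setup is typically verified on a Nexus 5000 (the interface number and host IP are placeholders; on this platform the interface MTU is applied through the system network-qos policy rather than per interface, so the check is done through the queuing output):

```
! Verify the jumbo MTU actually took effect (NX-OS 5000 applies it via the
! system network-qos policy, so "show interface" may still report 1500)
show queuing interface ethernet 1/1

! End-to-end check from a Windows host: 9000-byte frame payload minus
! 28 bytes of IP/ICMP headers = 8972, with the don't-fragment bit set
ping -f -l 8972 <remote host IP>
```

If the ping succeeds without fragmentation, jumbo frames are working across the whole path, not just on the switch.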
02-23-2015 03:05 PM
There is no need for performance tuning. Out of the box, the switches are capable of 10Gig line-rate speed.
HTH
02-26-2015 12:33 PM
Question for you Reza
I'm running iPerf with the following command iperf.exe -c <iperf server IP> -u -P 2 -i 1 -p 5001 -f m -b 15.0G -n 1000000000 -T 1 -w 128kb
I changed the window size little by little, starting at 128kb. The speeds didn't change much; here are the results:
Transfer       Bandwidth
52.1 MBytes    437 Mbits/sec
52.1 MBytes    437 Mbits/sec
52.1 MBytes    437 Mbits/sec
53.0 MBytes    445 Mbits/sec
51.6 MBytes    433 Mbits/sec
52.1 MBytes    437 Mbits/sec
Even with overhead, that seems a little slow; I was anticipating much faster on that switch. Any ideas? I've asked this question on another forum and nobody can tell me why it's this slow. To be fair, 437 Mbits/sec isn't slow in absolute terms, but I was expecting more from the 5548.
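One thing worth noting about the command above (the alternative flags below are my suggestion, not something from this thread): `-u` makes it a UDP test, so `-b 15.0G` is only a target send rate, `-w` adjusts the socket buffer rather than a TCP window, and a single UDP sender process is often CPU-bound well below 10G. A TCP variant with parallel streams might look like:

```
rem Hypothetical TCP test: drop -u and -b, run 4 parallel
rem streams (-P 4) for 30 seconds (-t 30) with a larger window
iperf.exe -c <iperf server IP> -P 4 -i 1 -p 5001 -f m -t 30 -w 512k
```

With `-P 4` the aggregate bandwidth line at the end is the number to compare, since each stream is reported separately.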
02-26-2015 12:43 PM
So you are using iperf to transfer between 2 PCs/laptops?
I am not saying this is the case for sure, but it could be a limitation on how fast the PCs/laptops can read/write.
Also what is the switch config for these tests?
HTH
02-26-2015 02:26 PM
That is correct, between two Dell Optiplex workstations. We also tested between the server that will go into prod and an Optiplex, but once again the Optiplex could be the limiting factor. You think that could be it? Here's my config, so you know how it's setup right now.
version 5.2(1)N1(7)
hostname N5K
feature telnet
feature lldp
feature vtp
username admin password 5 $1$oZ1234567898765432123456/ role network-admin
banner motd #Nexus 5000 Switch
#
ip domain-lookup
policy-map type network-qos jumbo
  class type network-qos class-default
    mtu 9216
    multicast-optimize
system qos
  service-policy type network-qos jumbo
snmp-server user admin network-admin auth md5 0x422842c9d320064123456789fc703d20 priv d3264f23456789000c703d20 localizedkey
vrf context management
port-profile default max-ports 512
interface Ethernet1/1
  flowcontrol receive on
  flowcontrol send on
interface Ethernet1/2
interface Ethernet1/3
interface Ethernet1/4
interface Ethernet1/5
  flowcontrol receive on
  flowcontrol send on
interface Ethernet1/6
interface Ethernet1/7
interface Ethernet1/8
interface Ethernet1/9
  flowcontrol receive on
  flowcontrol send on
interface Ethernet1/10
interface Ethernet1/11
interface Ethernet1/12
interface Ethernet1/13
  flowcontrol receive on
  flowcontrol send on
interface Ethernet1/14
interface Ethernet1/15
interface Ethernet1/16
interface Ethernet1/17
interface Ethernet1/18
interface Ethernet1/19
interface Ethernet1/20
interface Ethernet1/21
interface Ethernet1/22
interface Ethernet1/23
interface Ethernet1/24
interface Ethernet1/25
interface Ethernet1/26
interface Ethernet1/27
interface Ethernet1/28
interface Ethernet1/29
interface Ethernet1/30
interface Ethernet1/31
interface Ethernet1/32
interface mgmt0
  ip address 11.11.11.11/24
line console
  exec-timeout 720
line vty
boot kickstart bootflash:/n5000-uk9-kickstart.5.2.1.N1.7.bin
boot system bootflash:/n5000-uk9.5.2.1.N1.7.bin
02-26-2015 02:31 PM
Can you turn off
flowcontrol receive on
flowcontrol send on
and try again?
02-26-2015 05:26 PM
With both flowcontrol options off, I'm getting a consistent 525 Mbits/sec throughput.
What exactly does flow control do? Another colleague recommended enabling it.
02-26-2015 06:47 PM
There is no need to have flow control on unless you are seeing issues with pause frames from end devices, e.g. servers, VMs, storage, etc.
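For reference, a minimal sketch of how those flow-control settings would be turned off on one of the ports from the config above (assuming Nexus 5000 syntax; the `no` form of each command should work as well):

```
interface Ethernet1/1
  flowcontrol receive off
  flowcontrol send off
```

Repeat for each interface where it was enabled, then re-run the iperf test to compare.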
HTH
02-27-2015 10:24 AM
Got it! Good to know.
So in your opinion, are these speeds acceptable? I know it also comes down to the 10Gb NICs we are using; we currently use Intel X520-DA2 network adapters across the board.
Thanks for the advice and help, you are always so helpful on the forums Reza!
02-28-2015 09:52 AM
Evan,
Glad to help, and you are correct. It depends on the NIC and also the PC/laptop. I tested the same thing on a 6500 with 1Gig interfaces and a couple of laptops with Gig NICs. At about 600M, I could no longer move the mouse and the laptop would freeze.
Please rate and mark the post as answered, so others can benefit from it.
HTH