
iSCSI performance

ryan.lambert
Level 1

Hi everyone,

We've been working through our migration and managed to get our 3 vSphere servers onto the N1KV. What our server team has noticed is that if we restore a 20 gig image of a server from iSCSI with iSCSI traffic on a single NIC (a vSwitch was the test bed for this, not the N1KV), the restore takes 2 minutes, as opposed to 7 minutes when the NICs are teamed together on the vSwitch and/or port-channeled with mac-pinning (N1KV). As an aside, it doesn't matter which of the two NICs we use, as long as it is just one. Two NICs together is where it gets 'slow'.

My network topology is pretty simple: each server has two NICs, each NIC plugged into a separate 5020, and the two 5020s have a 10G back-to-back link between them.

iSCSI and the vSphere hosts both live on the same pair of Nexus 5020s, and there's nothing else going on besides this. I've done all the basic stuff like checking the uplinks for errors, and I'm not exactly sure where to go with performance tuning here on the 1KV.

Any tips? I'm going to have redundant uplinks to my 5020s any way we slice it, and I know we're not saturating the 10G uplinks during this period... so I imagine this problem would persist even if I created a dedicated pair of uplinks for iSCSI traffic only. I feel like I'd be wasting cycles going that route.

Thanks in advance.

Edit: also, FWIW, here's a snapshot of my uplink config and my iSCSI port-profile.

port-profile type ethernet Uplink
  vmware port-group
  switchport mode trunk
  switchport trunk allowed vlan 91-96
  channel-group auto mode on mac-pinning
  no shutdown
  system vlan 91-93,95-96
  state enabled

port-profile type vethernet iSCSI-VLAN93
  vmware port-group
  switchport mode access
  switchport access vlan 93
  capability iscsi-multipath
  no shutdown
  system vlan 93
  state enabled
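
In case it helps, the host-side piece that pairs with the capability iscsi-multipath line is binding the VMkernel NICs to the software iSCSI adapter. This is only a rough sketch for vSphere 4.x; vmhba33 and the vmk numbers are placeholders for whatever your hosts actually report:

# List current bindings for the software iSCSI adapter (vmhba33 is a placeholder)
esxcli swiscsi nic list -d vmhba33

# Bind both iSCSI VMkernel ports (vmk1/vmk2 are placeholders) to the adapter
esxcli swiscsi nic add -n vmk1 -d vmhba33
esxcli swiscsi nic add -n vmk2 -d vmhba33

# Rescan so the new paths show up
esxcfg-rescan vmhba33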

4 Replies

ryan.lambert
Level 1

As part of this I was attempting to make some MTU adjustments on my 1KV and our Nexus 5Ks/EMC. Everything is at 1500 right now, and we wanted to see what kind of performance increase we could gain by jacking it up to 9000.

I was reading the following doc:

http://www.cisco.com/en/US/docs/switches/datacenter/nexus1000/sw/4_0_4_s_v_1_2/interface/configuration/guide/n1000v_if_2basic.html#wp1221750

But... um,

N1KV-VSM1(config)# int eth5/8
N1KV-VSM1(config-if)# mtu 9000
                        ^
% invalid command detected at '^' marker.
N1KV-VSM1(config-if)# mtu ?
                        ^
% invalid command detected at '^' marker.

Is there a different way I should be doing this? On the corresponding port-channel, perhaps? The option is there.

Version: 4.0(4)SV1(2)

TIA,

Ryan

Ryan,

You can configure a jumbo MTU on the 1000v.

This is a two-step process. You must ensure the underlying NICs are configured to
support jumbo frames, and then set up the 1000v to support the larger MTU as well.

There's info on setting the vmnic/vmkernel port properties here:
http://blog.scottlowe.org/2009/05/21/vmware-vsphere-vds-vmkernel-ports-and-jumbo-frames/
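
Roughly, the host-side piece looks like this on a standard vSwitch (a sketch only; the vSwitch name, port-group name, and addressing are placeholders, and on the 1000v the uplink MTU comes from the Ethernet/port-channel config below rather than from esxcfg-vswitch):

# Raise the MTU on the vSwitch carrying the iSCSI VMkernel port (vSwitch1 is a placeholder)
esxcfg-vswitch -m 9000 vSwitch1

# VMkernel ports can't change MTU in place; remove and re-create with -m 9000
esxcfg-vmknic -d "iSCSI-VLAN93"
esxcfg-vmknic -a -i 10.10.93.11 -n 255.255.255.0 -m 9000 "iSCSI-VLAN93"

# Verify the vmknic now shows the 9000-byte MTU
esxcfg-vmknic -l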


As you know, you can only set a jumbo MTU on Ethernet interfaces (or the port-channel interface), not on vEth interfaces.

To configure jumbo frames on the 1000v, you first need to increase the system MTU and then set it on the
port-channel (if you're not using auto channel-groups, perform this on the Ethernet interface directly).

n1000v# config t

n1000v(config)# system jumbomtu 9000

n1000v(config)# interface po1

n1000v(config-if)# mtu 9000

n1000v(config-if)# show int po1

port-channel1 is up
  Hardware: Port-Channel, address: 0050.565a.072f (bia 0050.565a.072f)
  MTU 9000 bytes, BW 10000000 Kbit, DLY 10 usec,
     reliability 255/255, txload 1/255, rxload 1/255
//-SNIP-//

http://www.cisco.com/en/US/partner/docs/switches/datacenter/nexus1000/sw/4_0_4_s_v_1_2/interface/configuration/guide/n1000v_if.html

Regards,
Robert

Robert,

Very good, thanks.

I was a bit confused since I saw that option on the port-channel, but not under the Ethernet interface as the document was suggesting.

We're going to pursue the iSCSI performance issues from this angle first, since it's one of those obvious things we need to address up front. Hopefully this solves everything.

Thanks again.

Just an update in case anyone is interested.

We managed to resolve this, it seems, and it didn't have anything to do with MTU settings. Chad Sakac is my hero, so mucho props:

http://virtualgeek.typepad.com/virtual_geek/2009/08/important-note-for-all-emc-clariion-customers-using-iscsi-and-vsphere.html

This was kind of an oddball one for us that actually looked and felt like a network issue, except for the fact that we knew we weren't exhausting our available resources. Guess it ended up being more of a vSphere/EMC issue, though some adjustments to the layout from a network standpoint were required.

Still, we'll baseline it at 1500 in a working form, and see what increases we get from jumbo frames.
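
For anyone who goes down the jumbo frame path later, a quick end-to-end sanity check once the MTU is raised everywhere is a don't-fragment vmkping from the host (the target IP here is a placeholder; 8972 bytes of payload plus the 28 bytes of IP/ICMP headers works out to a 9000-byte packet):

# From the ESX service console: ping the iSCSI target with don't-fragment set
vmkping -d -s 8972 10.10.93.50

If that fails while a plain vmkping works, something in the path is still sitting at 1500.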