Nexus 5548 - Server Group Complaining of Slow Windows 2012 Copies

TOM FRANCHINA
Level 1

We are new to the Nexus line. Two servers were installed on the Cisco Nexus 5548 with 10 Gb NICs. The server group is saying that some database copies are taking just as long as they did on 1 Gb cards, while some of the SQL database copies were up to four times faster.

Here is the basic config that we are using. All interfaces are configured the same as int e1/1; only the SNMP and username/password lines have been removed.

Any suggestions on a best-practice config for Windows Server 2012 would be appreciated.

Thanks,

Tom

version 5.1(3)N2(1)

hostname Nex-Comp-center-10.23.10.2

feature telnet

no feature http-server

feature interface-vlan

feature lldp

feature vtp

ip domain-lookup

class-map type qos class-fcoe

class-map type queuing class-fcoe

  match qos-group 1

class-map type queuing class-all-flood

  match qos-group 2

class-map type queuing class-ip-multicast

  match qos-group 2

class-map type network-qos class-fcoe

  match qos-group 1

class-map type network-qos class-all-flood

  match qos-group 2

class-map type network-qos class-ip-multicast

  match qos-group 2

ntp server 192.168.23.21 prefer

vrf context management

interface Vlan1

interface Vlan23

  no shutdown

  management

  ip address 10.23.10.2/16

interface Ethernet1/1

  description server port

  switchport access vlan 23

  spanning-tree port type edge

*** All 32 interfaces same as above ***

interface mgmt0

  shutdown force

clock timezone EST -5 0

clock summer-time EST 2 Sunday March 02:00 1 Sunday November 02:00 60

line console

line vty

  session-limit 30

  session-limit 30

boot kickstart bootflash:/n5000-uk9-kickstart.5.1.3.N2.1.bin

boot system bootflash:/n5000-uk9.5.1.3.N2.1.bin

ip route 0.0.0.0/0 10.23.0.1
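One thing worth checking in a config like the one above is the MTU: large file copies over 10GE often benefit from jumbo frames, and on the Nexus 5548 the MTU is set system-wide through a network-qos policy rather than per interface. A sketch only, assuming the default traffic class is used and that the Windows NICs are configured with a matching jumbo MTU:

```
policy-map type network-qos jumbo
  class type network-qos class-default
    mtu 9216
system qos
  service-policy type network-qos jumbo
```

You can verify the MTU took effect with `show queuing interface ethernet 1/1`. If the servers are left at a 1500-byte MTU this change is harmless but won't help.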

21 Replies

Darren... We are using the QLogic 8242 NIC

Steve... These are state-of-the-art servers with 1000 gig of RAM and multiple quad-core CPUs. I will certainly pass along the link you sent me, and I will try to post the NETSH output ASAP.
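For anyone checking the same thing, the NETSH output mentioned above is the usual starting point on the Windows side. The TCP global settings can be dumped and adjusted with commands like these; this is a sketch only, and whether offloads such as Chimney should be on or off depends on the NIC driver, so each change should be tested:

```
netsh int tcp show global
netsh int tcp set global autotuninglevel=normal
netsh int tcp set global rss=enabled
netsh int tcp set global chimney=disabled
```

Receive-window auto-tuning (`autotuninglevel`) in particular has a large effect on single-stream copy throughput over 10GE.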

I would bet on bad firmware and drivers. We saw an issue where we could only push 3Gb on the 8242 running ESX 4.1 VMs, and it required QLogic to make changes in the firmware and driver. Then, once they believed it was fixed, we found we could only push 6Gb if we used port 1, but 9Gb on port 2. QLogic fixed that with a firmware change, but then we upgraded from ESX 4.1 to ESX 5 and could only push 8Gb. It was not an issue on 4.1 or 5.1, only ESX 5, and they have since fixed that for us too.

We have never seen any performance issues on Windows 2008 bare metal, but the issues on ESX indicate there could also be issues on Win2012 bare metal with new drivers. IMO, QLogic seems to be very sloppy with their QA process.

It took several months back in Dec 2011 to convince them of our 3Gb issue, but now I have contacts inside QLogic and they take us seriously, instead of the stupid, time-wasting suggestions you typically get from Tier 1 support.

Here is a Win2008 8242 Tolly test that pretty much matches what we saw in our performance testing:

http://www.qlogic.com/Resources/Documents/MediaCoverage/Tolly211109QLogicCNAPerformanceAndFailover.pdf

Please rate helpful posts.

One note to add: we did have a file copy speed issue around two years ago with the QL 8152 on W2008, and it ended up being the storage driver. We did not see the issue in performance testing, but it resulted in very slow file copies. Also, below are some Cisco docs on 10G Windows setup and VM performance testing.

http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps9670/C07-572828-00_10Gb_Conn_Win_DG.pdf

http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps9670/c07-601040-00_vm_10gbn_dn_v2a.pdf

Please rate helpful posts.

Steve & Darren,

Nice read. 

Steve Fuller
Level 9

Hi Tom,

Did you make any progress with this? I'm intrigued as to what the problem is/was.

Regards

Sent from Cisco Technical Support Android App

Why not run LACP?

Hi Steven,

From what Tom has told us, the throughput being achieved was only around 500 Mbps, so not enough to fill even a single GE link, let alone the 10GE link that was being used. In this case aggregating links would have made no difference.

Also note that link aggregation doesn't actually increase the throughput of a single conversation, as all traffic for a conversation can only be carried across a single link of the aggregate. This is to ensure ordered delivery of packets.

The definition of a conversation can vary, though, with traffic balanced across the links in the aggregate based on a combination of source and/or destination MAC, IP, and TCP/UDP port numbers, depending on what is available and configured on the platform.
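To illustrate, on the Nexus 5000 the fields fed into the port-channel hash can be viewed and changed globally. A sketch; including the Layer 4 ports in the hash helps spread multiple conversations between the same pair of hosts across member links, but still never splits a single flow:

```
show port-channel load-balance
configure terminal
  port-channel load-balance ethernet source-dest-port
```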

Regards
