Slow Network Performance on UCS blade

The setup is 2 UCS blades in 2 datacenters 50 miles apart, with a 40 Gbps backbone between them. The UCS chassis are connected over 10 Gbps fiber to a Cisco 3750 switch, which also has 10 Gbps uplinks. The SAN in one datacenter is an EMC Isilon; the other datacenter uses a Dell EqualLogic. Both SANs have a couple of 1 Gbps connections. Here are my results.

A 3 GB file copies between the 2 datacenters at around 1 Gbps. That holds whether the source is a virtual server's blade drives or the Isilon's NFS share, and also from a physical Server 2008 host on one of the blades that I built just for this testing. I would have expected more than 1 Gbps going from one VM's own drives to the remote VM's own drives.

Now for a couple hundred thousand small files. Copying a test folder of 145,000 files totaling about 100 GB, I get 10-12 MB/s, or roughly 1/10th of a gigabit connection. That happens between the datacenters and also from the Isilon's NFS share to the remote VM server (tested writing to both the VM's local drive and the SAN). Same results going from one VM server's drives to the physical Server 2008 host in the other datacenter that I built for this testing.

It is just as slow (10 MB/s) copying that 100 GB of data from the Isilon NFS share straight to a VM's datastore, which is also on the Isilon.

Any ideas here?

1 Reply

mhansen
Level 4

Since you mentioned in one of your other posts that "I am not a networking person and I usually use Network Assistant to configure my switches", I wanted to give you a good starting point for network performance troubleshooting. I have had to troubleshoot performance issues exactly like you describe about 20 times over the last few years, and what I have learned is that you really need to isolate the components you want to measure, determine the expected performance of each, and then measure them before making any assumptions about the bottleneck.

Because it sounds like there are a lot of devices involved here, I would start your network performance analysis with a tool like iperf. It is really good at measuring the "possible" bandwidth between 2 servers: if you have a gig link, you will see 1 gig speeds from this tool. If the speed it shows does not reflect what your environment should deliver, check for speed/duplex mismatches along the way. Rarely, if ever, will you get line-rate transfer speeds using SMB/CIFS, since the protocol is not very efficient, and transferring a huge number of small files only makes the protocol's (in)efficiencies more apparent. RAID level, cache, and number of disks can also be a factor once you start talking about 1 Gbps+ throughput.
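As a rough sketch of how you would run it (the IP here is just a placeholder, and the exact flags vary slightly between iperf2 and iperf3), put a copy on a server in each datacenter:

  # on the receiving server
  iperf -s

  # on the sending server, push traffic for 30 seconds
  iperf -c 10.1.1.10 -t 30

  # optionally run 4 parallel streams to see if a single stream is the limiter
  iperf -c 10.1.1.10 -t 30 -P 4

The result is the raw TCP throughput the path can sustain with storage and the file-copy protocol taken out of the picture, which gives you a baseline to compare your SMB/NFS copy rates against.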

Also remember how EtherChannel (NIC teaming) works. If you have multiple 1/10 gig links bundled together, a single data stream can only use 1 of those links, so a single file transfer will never exceed the speed of the smallest link along the path. Even splitting your test into multiple parallel transfers might not make a difference until you verify your load-balance method (see the sketch below). Hope this helps your methodology a bit; it's where I would start. These kinds of things can take a while to get to the bottom of without a thorough understanding of all the moving parts.
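If your 3750s do have port channels in the path, this is roughly how you would check and, if needed, change the hashing method (a sketch only; the available load-balance options depend on the platform and IOS version):

  show etherchannel summary
  show etherchannel load-balance

  conf t
   port-channel load-balance src-dst-ip
  end

Hashing on source/destination IP pairs spreads different conversations across the member links, but any single conversation still rides one link, which is why one big file copy tops out at a single member's speed.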
