
Write Acceleration on FCIP tunnels degrading MirrorView performance

usabnis
Level 1

Disaster Recovery (DR) for storage:

Odd fabric: MDS1 (fcip1) ----> MDSDR (fcip1)

Even fabric: MDS1 (fcip3) ----> MDSDR (fcip3)

Link details: bandwidth = 2.0 Gbps, latency = 7 ms.

The FCIP links on the MDS are on a 14/2 (MPS-14/2) card.

IP compression mode 1 is turned on.

Write Acceleration (WA) is turned on.

Without WA enabled, fcip1 traffic is consistently in the MB/s range.

With WA enabled, fcip1 traffic degrades to under 5 KB/s, and lots of small 100-300 byte packets are seen on the source TX and the receive RX of fcip1.
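
For reference, a minimal sketch of the tunnel configuration described above, assuming SAN-OS syntax on the 14/2 card; the profile number, interface number, IP addresses, and bandwidth values are placeholders, not the poster's actual config:

fcip profile 1
  ! placeholder GigE IP for this end of the tunnel
  ip address 10.1.1.1
  ! placeholder split of the 2.0 Gbps across the two tunnels, 7 ms RTT as described
  tcp max-bandwidth-mbps 1000 min-available-bandwidth-mbps 800 round-trip-time-ms 7

interface fcip1
  use-profile 1
  ! placeholder peer (MDSDR) IP
  peer-info ipaddr 10.1.1.2
  ! hardware compression on the 14/2 card
  ip-compression mode1
  ! Write Acceleration, the setting under test
  write-accelerator
  no shutdown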

7 Replies

Arueda
Level 1

Which MirrorView?

If sync, then you should turn WA off, because Cisco's WA engine is not consistent.

If async, I would try putting the compression on auto and see what happens.

EMC MirrorView/A (async), running on CLARiiONs.

Will try compression on auto.
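
For anyone following along, a minimal sketch of that change, with the same placeholder interface number as above:

interface fcip1
  ! let the switch negotiate the compression mode
  ip-compression auto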

Also look at your retransmits and MTU, and set both the CWM (congestion window monitoring) and the send buffer size to 0, and then start looking and playing with it.

I do not get good compression from Cisco either; I do MUCH better with my old CNT boxes.
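
A sketch of those checks and settings, assuming SAN-OS syntax and placeholder numbering; note that a send buffer size of 0 means the switch manages the buffer itself, and disabling CWM uses the no form of the command rather than a literal 0:

! watch the retransmit counters on the GigE port carrying the tunnel
show ips stats tcp interface gigabitethernet 1/1 detail

fcip profile 1
  ! 0 = default, switch-managed send buffer
  tcp send-buffer-size 0
  ! disable congestion window monitoring
  no tcp cwm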

tblancha
Cisco Employee

WA and compression are completely different features. EMC's MirrorView is not an option for WA, so please take it off. WA locally responds to the SCSI write command sent from the array; it is one of the very few times that the MDS switch will involve itself at the SCSI level. But MirrorView doesn't use the normal write and transfer_ready exchange, so WA doesn't work. EMC's SANCopy and SRDF do. Also, WA helps those somewhat, but definitely not as much as it helps synchronous replication.
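
A minimal sketch of checking for and removing WA on a tunnel, again with placeholder numbering; the show output includes the write acceleration state:

! confirm whether WA is currently on for the tunnel
show interface fcip 1

interface fcip1
  ! take WA off for MirrorView traffic
  no write-accelerator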

Thank you for your response. Can you elaborate on "But MirrorView doesn't use the normal write and transfer_ready exchange, so WA doesn't work"?

If this were the case, why does MirrorView not stop working altogether instead of just degrading in performance?

Also, I had tested this in the lab with MirrorView/A between two CLARiiONs and did not see any degradation or enhancement. I attributed the latter to the negligible latency (107 microseconds) in the lab test case.

This afternoon I repeated the FCIP lab test with MirrorView/A running between the CLARiiONs. A 100 GB LUN was set up for MirrorView/A replication. Attached are the results.

With WA on, the performance degraded slightly (it came down to 28-29 MB/s from 30-31 MB/s).

So I would leave WA off if there is no performance gain.

So, you have 2 Gbps total bandwidth for the 2 FCIP links? What are the min/max values on the TCP profile? If that works out to more than 100 MB/sec, I would suggest leaving compression off, as more than likely it takes longer to compress the data than to just send it. Compression is done in hardware in mode 1 on the 14/2 and 9216i cards, and in software on the IPS-4 and IPS-8 cards. Above 100 MB/sec you will see more gain from setting the MTU end to end to 2300.
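
A sketch of those last suggestions, again with placeholder numbering and assuming every hop between the GigE ports can carry the larger MTU:

! check the min/max bandwidth values on the TCP profile
show fcip profile 1

interface fcip1
  ! leave compression off above ~100 MB/sec
  no ip-compression

interface gigabitethernet 1/1
  ! larger MTU, must be supported end to end
  switchport mtu 2300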