
IP module compression

Arueda
Level 1

I am running FCIP, and I get a compression ratio of about 1.7. How can I improve this?

The link is a 6 Mb link with a latency of 3-5 ms under load.

There are three FCIP tunnels between three 9509s and one 9216 on the far side, and we are using EMC MirrorView.


tblancha
Cisco Employee

The compression ratio depends on the dataset being compressed and which mode it is running in. The compression scheme changed from 1.3 to 2.x. In 2.x, at 6 Mb/s, you should be running mode3, which compresses in software. At 3-5 ms RTT, it is probably not preferable to be running compression at all: at these speeds it might be faster to just send the data out uncompressed than to wait for software compression, but this would need to be tested. If this is a 9216i and not an IPS blade, you could try mode1 compression because that is done in hardware, but more than likely at this RTT it will not gain you anything. In general, at 6 Mb/s, run mode3.
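To make the compress-or-not trade-off concrete, here is a rough back-of-the-envelope sketch. The 6 Mb/s link speed and the 1.7 ratio come from this thread; the software compression throughput is purely an illustrative assumption, not a Cisco specification.

```python
# Rough estimate: is software compression worth it on a slow link?
# Assumed numbers are illustrative only; measure your own platform.

LINK_MBPS = 6.0          # WAN link speed from the thread (6 Mb/s)
RATIO = 1.7              # observed compression ratio from the thread
SW_COMPRESS_MBPS = 50.0  # assumed software compression throughput -- not a Cisco spec

def transfer_time(megabits, compress):
    """Seconds to move `megabits` of data across the link."""
    if not compress:
        return megabits / LINK_MBPS
    # compression and transmission are pipelined; the slower stage dominates
    return max(megabits / SW_COMPRESS_MBPS, (megabits / RATIO) / LINK_MBPS)

payload = 100.0  # megabits
print("uncompressed:", round(transfer_time(payload, False), 1), "s")
print("compressed:  ", round(transfer_time(payload, True), 1), "s")
```

The point of the model: as long as the compression engine keeps up with the link, compression still shortens the transfer; it only hurts when software compression itself becomes the bottleneck, which is why it has to be tested rather than assumed.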

Thanks... but it seems that it does not matter which mode I use.

In modes 2-3 I get 1.4-1.7 and 100% of packets compressed.

In mode 1 I get about the same compression, with about 70-80% of packets compressed and the rest sent uncompressed.

Since we are live, I do not know of a way to test which is faster.

Do you have any ideas?

Thanks.

So, at 1.7, it means there is a lot of randomness in your data and it is not very compressible to the algorithm, especially since all modes get about the same compression ratio. Are you having a performance issue?

This is my problem: I have five EMC arrays replicating to one array in our DR site. These five arrays connect to two MDS 9509s (mds1 and mds2), and there is an FCIP link from mds1 to mdsdr and another FCIP link from mds2 to mdsdr.

I have the same number of LUNs replicating through each of the FCIP links. Each of my links is 5 Mb, and I pay the carrier for the 95th percentile of the bandwidth used. Therefore, if I use, for example, 7 Mb for 5% or more of the time throughout the month, I pay for 7 Mb for the entire month, regardless of whether I use that much the rest of the time.
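For readers unfamiliar with 95th-percentile billing, a minimal sketch of the calculation follows. The sampling interval and traffic values are made up for illustration; only the billing method itself comes from the post above.

```python
# 95th-percentile billing sketch: the carrier samples utilization
# (typically every few minutes), sorts the samples, discards the top 5%,
# and bills for the highest remaining sample. Sample data is made up.

import math

def billable_rate(samples_mbps):
    """Return the 95th-percentile rate from a list of utilization samples."""
    ordered = sorted(samples_mbps)
    # index of the 95th percentile (the top 5% of samples are ignored)
    idx = math.ceil(0.95 * len(ordered)) - 1
    return ordered[idx]

# e.g. a month where the link bursts to 7 Mb/s more than 5% of the time
samples = [3.0] * 800 + [5.0] * 100 + [7.0] * 100  # 1000 samples
print(billable_rate(samples), "Mb/s billed")  # -> 7.0
```

This is why a burst above the committed rate for more than 5% of the month sets the bill for the whole month, as described in the post.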

With this in mind, and wanting to distribute the load on the arrays evenly, I do not make good use of all the bandwidth I pay for.

So I guess the first thing I noticed was the low compression ratio I am getting out of these IPS blades compared to the compression ratios I was getting on my CNT boxes (between 8 and 12 times), so I wanted to resolve that problem first if possible. Then I want to figure out how to load balance across these two FCIP links so I do not waste available (and paid for) bandwidth. Sometimes one of the FCIP links is maxed out for a couple of days while the other sits there doing nothing.

Do you have any suggestions?

Thanks!

More than likely, it will not be possible to increase the compression ratio. But, if I understand this correctly, each FCIP tunnel has its own 5 Mb link? So there are five pathways at 5 Mb each? If so, I would look at three things in the design. First, I would run one FCIP tunnel that carries five VSANs instead of one VSAN per FCIP tunnel, and then use IP CEF per-packet load sharing in the routers to round-robin the IP traffic over the five links for an aggregate of 25 Mb. Then any of the five arrays, whether active or not, would have the full 25 Mb available. Second, I would look at write acceleration if it is not already implemented; SRDF and SAN Copy support FCIP-WA, but MirrorView does not. Third, if the network supports it, I would run jumbo frames. That could take 40 bytes of overhead off a frame every once in a while, and while not significant on its own, it might add up over a month's time.
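To put a rough number on the jumbo-frame point: with a standard 1500-byte MTU, a full-size FC frame has to be carried in two TCP segments, each with its own TCP/IP header; with a jumbo MTU it fits in one. The 2148-byte FCIP-encapsulated frame size and the 2300-byte jumbo MTU below are illustrative assumptions; the 40-byte header figure comes from the post above.

```python
# Rough sketch of the jumbo-frame saving. Frame and header sizes are
# simplified assumptions; Ethernet framing overhead is ignored.

import math

FC_FRAME = 2148   # assumed FCIP-encapsulated FC frame size, in bytes
TCPIP_HDR = 40    # TCP + IP header bytes per segment

def wire_bytes(mtu):
    """Bytes sent over the WAN to carry one full-size FC frame."""
    payload_per_segment = mtu - TCPIP_HDR
    segments = math.ceil(FC_FRAME / payload_per_segment)
    return FC_FRAME + segments * TCPIP_HDR

std = wire_bytes(1500)    # standard Ethernet MTU: frame split over 2 segments
jumbo = wire_bytes(2300)  # jumbo MTU: whole frame fits in 1 segment
print(f"standard MTU: {std} bytes, jumbo MTU: {jumbo} bytes")
print(f"saving per full-size frame: {std - jumbo} bytes")
```

The saving is only about 40 bytes per full-size frame, which is why it is framed as something that might add up over a month rather than a major win.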

The actual topology is like this:

mds1-mdsdr uses fcip2 at 5 Mb.

mds2-mdsdr uses fcip3 at 5 Mb.

In mds1 I have the replication port A for all the source arrays.

In mds2 I have the replication port B for all the source arrays.

In mdsdr I have the replication ports A and B for the target array.

The source LUNs are equally distributed between ports A and B on the source arrays.

The target LUNs are in the same DR array, distributed between ports A and B.

I would like to load balance so I can always use the maximum bandwidth possible, i.e. the aggregate throughput of fcip2 and fcip3 for all the LUNs, regardless of whether the source LUN is on mds1 or mds2.

In other words, let us say that I replicate only one LUN, and that LUN is on mds1; I want to be able to use both FCIP links to replicate it.

Does this make any sense to you?

Do you want me to open a case so you get credit for all this advice?

Thanks

Astolfo

I don't need the credit, but it might be worthwhile to open a TAC case to have a vehicle to investigate this more deeply and also get someone else's opinion besides mine.

Since the FCIP tunnels terminate on different endpoints, it will be impossible to aggregate them from an MDS point of view. The only thing I can think of is to use IP CEF per-packet load sharing, with equal-cost paths, to send the IP packets over the two 5 Mb links. At the speeds you are working at, the MDS TCP stack probably will not take a performance hit if there are any out-of-order frames due to asymmetric routing. Then, depending on when and how much traffic there is, you can run both FCIP links at 5 Mb minimum and 10 Mb maximum. This actually creates an oversubscribed IP cloud, so that when only one tunnel is active it can get 10 Mb, and when both are active they should throttle down to 5 Mb each.
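As a toy model of that throttling behavior (this is only the arithmetic being described, not how the MDS TCP shaper is actually implemented), the min/max figures from the post work out like this:

```python
# Two FCIP tunnels share an aggregate 10 Mb/s of WAN capacity
# (2 x 5 Mb/s links with per-packet load sharing). Each tunnel is
# shaped to min 5 / max 10 Mb/s. Figures are from the thread.

AGGREGATE_MBPS = 10.0
TUNNEL_MIN = 5.0
TUNNEL_MAX = 10.0

def per_tunnel_rate(active_tunnels):
    """Approximate steady-state rate each active tunnel can expect."""
    if active_tunnels == 0:
        return 0.0
    fair_share = AGGREGATE_MBPS / active_tunnels
    # each tunnel gets at least its configured min and at most its max
    return min(TUNNEL_MAX, max(TUNNEL_MIN, fair_share))

print(per_tunnel_rate(1))  # one tunnel active  -> 10.0 Mb/s
print(per_tunnel_rate(2))  # both tunnels active -> 5.0 Mb/s each
```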

Try the jumbo frames and FCIP-WA one at a time, if your hardware supports them.

If you create a TAC case, have the owner email me the case number so I can look it up and get in touch with them about it.

What I am thinking of is to trunk a couple of ports between mds1 and mds2 and use FSPF, so there will be two FCIP links set at 10 Mb but only one in use at a time through route costing.

Do you think this would work?

Yes, that would work, since at 10 Mb your replication will probably not notice that there is another hop in the link. FCIP-WA would really help in that situation if the replication scheme allows for it. I would think about running one FCIP link with VRRP between the MDS switches. You can use the port-track feature to provide some redundancy, shutting down the FCIP or Gigabit Ethernet port depending on where you want to build redundancy in. Or you can reverse it and have the local FC port shut down if the FCIP or GigE port goes down.

This is the case number: 603651665.

Thanks!
