IVR (Inter-VSAN Routing) - Best practices questions

mark.j.smith
Level 1

Hi there,

We have a situation where we will have multiple customers hosted on a 9513 and sharing a single storage array.

We want to keep them logically separated in their own VSANs, but of course the storage array will need to be zoned to all the customers' hosts.

Now IVR should be the thing to use, but I'm getting resistance from the local team (screams of "Nooooo!!! They're EVILLLLL!!!") ... so I want to find out whether there are best practices around IVR use.

Should it be used only for light-duty traffic? (Though at present we use it with tape backup, which isn't exactly "light".)

Does it impact performance to a measurable degree?

Is it stable?

What can go wrong with it? And does it happen often?

Thanks!

1 Reply

dmcloon
Level 1

IVR does not impact application I/O performance because all the VSAN rewrite and FCID rewrite actions are done in hardware ASICs. The IVR process on the supervisor is responsible for managing the configuration and ensuring the rewrite tables are programmed in the linecards. The process is stable.
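Since the data path is handled in the ASICs, what you verify from the CLI is the supervisor-side state. A couple of read-only commands worth knowing (standard MDS show commands, output format varies by release):

show ivr vsan-topology
show ivr zoneset active

The first shows which VSANs and switches participate in IVR; the second shows the active IVR zoneset that drives the rewrite entries.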

Most of the issues I have seen are in environments with multiple IVR-enabled MDS switches ISL'd together, or an IVR-enabled MDS switch connected to a McData/Brocade switch in an interop mode.

Like any feature there have been bugs, and it pays to check the SAN-OS release notes when planning installs. For example, a config change on one switch does not get properly pushed to another IVR switch, or a forwarding table for an ISL interface does not get correctly programmed. There has also been a fair share of user misconfigurations which could have been avoided if Cisco Fabric Services (CFS) distribution had been enabled for IVR; this is done with the 'ivr distribute' command. Without it, in large topologies of multiple IVR switches it is very difficult to ensure they all have a consistent IVR config. In other cases there have been problems from a mix of IVR-enabled switches running different releases of SAN-OS, e.g. mixing 3.0 with 3.2.
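As a minimal sketch of that recommendation (SAN-OS 3.x syntax; 'ivr enable' became 'feature ivr' in later NX-OS releases), on each IVR-enabled switch you would do roughly:

ivr enable
ivr distribute
! ...make your IVR config changes...
ivr commit

With 'ivr distribute' on, IVR changes are staged in a CFS session and 'ivr commit' pushes them to all IVR-enabled switches in the fabric, which is what keeps the configs consistent.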

Best practice is to have dual physical fabrics and to upgrade one fabric at a time, ensuring all IVR switches in a fabric run the same SAN-OS release.

A single IVR switch is much easier to implement. The MDS Configuration Guide has a list of best practices for IVR, and one of those is to use the NAT option. Personally I would avoid the NAT option where you can, as NAT makes any troubleshooting harder when trying to figure out the domain ID translations. You would also minimize the risk of hitting some NAT-related bugs, though you could avoid most of these by checking the workarounds documented in the Release Notes. With NAT you also need to configure persistent virtual domains and FCIDs to cater for AIX and HP-UX systems, which cannot handle the FCID of the target changing whenever the exported virtual domain ID changes.

To give NAT credit, each VSAN is represented by a single virtual domain. In regular non-NAT mode, each switch in a VSAN is represented by a virtual domain, meaning you eat up more virtual domain IDs. So in large topologies with many domain IDs there are scalability advantages to using NAT, and the IVR updates between switches are more efficient with fewer virtual domain IDs to advertise. Of course, NAT must be used if you are merging physical fabrics with the same domain ID and cannot afford the downtime to change one of the switch domain IDs.
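If you do end up with NAT, the starting point looks roughly like this (a sketch only; the persistent virtual domain and FCID entries live under 'ivr fcdomain database', and the exact syntax for your release is in the Persistent FC IDs for IVR section of the MDS Configuration Guide):

ivr nat
! persistent virtual domains and FCIDs are then configured under
! 'ivr fcdomain database' - check the Configuration Guide for the
! exact entries needed for your AIX/HP-UX hosts

The point being that none of this extra bookkeeping is needed in non-NAT mode.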

However, if it is just a single IVR switch, I would avoid NAT. To do this, all your domain IDs should be statically defined and there must be no overlapping domain IDs between IVR'd VSANs. If it is a brand-new install you can easily achieve this by specifying unique allowed domain ID ranges per VSAN.

For example, each customer can have their own VSAN with, say, 10 domain IDs, and the storage can be in vsan 2. You will only use one domain ID per VSAN on day 1; allowing 10 domain IDs per VSAN means you can add up to 9 other switches per VSAN should you need to in the future. There is a maximum of 239 domains per VSAN, so you could have up to 23 customers on your 9513 working with a range of 10 domain IDs per VSAN.

fcdomain domain 2 static vsan 2
fcdomain domain 10 static vsan 10
fcdomain domain 20 static vsan 20
fcdomain domain 30 static vsan 30
..and so on..
fcdomain domain 230 static vsan 230

fcdomain allowed 1-9 vsan 2
fcdomain allowed 10-19 vsan 10
fcdomain allowed 20-29 vsan 20
fcdomain allowed 30-39 vsan 30
..and so on..
fcdomain allowed 230-239 vsan 230
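With the domains pinned, the cross-VSAN zoning itself is done with IVR zones rather than regular zones. A rough sketch (zone names and pWWNs here are made-up placeholders), presenting the shared array port in vsan 2 to a customer host in vsan 10:

ivr zone name Customer1_Array
member pwwn 21:00:00:e0:8b:11:22:33 vsan 10
member pwwn 50:06:0e:80:00:44:55:66 vsan 2
ivr zoneset name IVR_ZS1
member Customer1_Array
ivr zoneset activate name IVR_ZS1

You would repeat the ivr zone per customer, each pairing that customer's host ports with the array port, so the customers never see each other's VSANs.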

With or without IVR you should still run dual fabrics (e.g. a 95xx in each fabric) and host-based multipathing for redundancy.

And don't forget IVR requires an Enterprise license. I have even seen a large outage because the customer forgot to install the license before the 120-day grace period expired.
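This is easy to check up front (output varies by release, but the command itself is standard):

show license usage

That lists whether ENTERPRISE_PKG is actually installed or running on the grace period, so you can catch it long before the 120 days run out.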