I have a pair of Catalyst 3560G switches trunked together over two of the standard ports, with additional trunk ports connecting to a failover pair of PIX 515Es. We're considering adding a pair of clustered database nodes and an iSCSI SAN, both of which would need a dedicated interconnect VLAN on which I'd like to enable jumbo frames. The VLANs don't need to traverse the firewall trunks since they're private interconnects, but each host does need to traverse the switch trunks.
Since it seems I can only enable jumbo frames on the entire switch (the current system MTU is 1500 and the jumbo MTU is also 1500), what kind of negative impact could enabling them have on my trunk ports and host connections? I've read mixed reviews from users whose iSCSI SAN devices saw terrible performance after enabling jumbo frames, so I'm apprehensive about turning them on in an existing network.
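For reference, this is roughly what enabling jumbo frames on a 3560 looks like. On this platform the jumbo MTU is a global setting (it applies to the Gigabit interfaces, not per port or per VLAN), and it only takes effect after a reload, which is worth planning around on a production switch:

```
! Catalyst 3560: jumbo MTU is global and requires a reload
Switch# configure terminal
Switch(config)# system mtu jumbo 9000
Switch(config)# end
Switch# reload

! After the reload, confirm the new values:
Switch# show system mtu
```

The reload requirement is the main operational gotcha: until both switches have been reloaded with the same jumbo MTU, frames larger than 1500 bytes will be dropped somewhere along the trunk path.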
Any insight provided would be greatly appreciated.
What I have seen in the past is that when jumbo frames aren't enabled throughout the whole path, there are problems. As long as the servers keep sending 1500-byte frames to users and only the iSCSI SAN is exchanging 9000-byte frames with the servers, you should be fine. The iSCSI SAN traffic should also be on its own NIC running jumbo frames, with users attaching to a different NIC for application access. Where I usually see problems is when the customer runs backups and application access over the same NIC, and that's where the jumbo frame issues come from.
Just a few thoughts from what I have seen in the past.
I would do it, because otherwise you'll end up with performance problems, such as iSCSI backups not completing within their backup windows.
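One quick way to confirm the jumbo path works end to end after enabling it is a do-not-fragment ping sized just under 9000 bytes (9000 minus 28 bytes of IP and ICMP headers leaves an 8972-byte payload; the SAN address below is a placeholder):

```
# From a Windows host (e.g. the backup server):
ping -f -l 8972 192.168.10.50

# From a Linux host:
ping -M do -s 8972 192.168.10.50
```

If that ping fails while a standard-size ping succeeds, some device in the path is still running a 1500-byte MTU and silently dropping the large frames.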
I would like your expert advice after reading your reply.
I have a similar situation and have been searching for insight. I hope you can shed some light on my issue.
I have introduced a Catalyst 3560 as my internal router for 4 different subnets.
I have an iSCSI SAN attached to the network that is mainly used for backups from my backup server.
My backup server runs Windows 2003 and has many iSCSI initiators connecting to the SAN, which is on the same subnet as the backup server.
As soon as I introduced the new L3 switch, the backup server became unusable and unresponsive. After a few reboots it returned to normal, but later that night I received email notifications from Nagios that servers across all subnets had high latency and were even dropping packets. This went on for two hours straight, then stopped. The next morning the same thing happened. I then removed the L3 switch and put back my old Linux router, which previously handled internal routing, and all was fine.
I also have a Dell PowerConnect switch connecting to my L3 switch. I have disabled autonegotiation and hard-set the port to 100 Mb/s to match my L3 switch, thinking this could be the issue...
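If it helps, these are the usual IOS commands for checking a speed/duplex mismatch and the current MTU on the 3560 side (the interface name is a placeholder for whichever port faces the PowerConnect):

```
Switch# show interfaces GigabitEthernet0/1 status
Switch# show interfaces GigabitEthernet0/1 | include duplex
Switch# show system mtu

! If you hard-set speed/duplex, it must match on BOTH ends of the link.
! A hard-set port facing an autonegotiating port will typically fall
! back to half duplex, which causes exactly this kind of intermittent
! latency and packet loss.
Switch# configure terminal
Switch(config)# interface GigabitEthernet0/1
Switch(config-if)# speed 100
Switch(config-if)# duplex full
```

Check the interface counters too: a duplex mismatch usually shows up as climbing late collisions or CRC errors in `show interfaces`.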
I also changed the MTU size for jumbo frames and the system MTU, and I'm crossing my fingers.
In all your experience, do you notice any coincidence here, or am I just spinning my wheels?