

Level 1

Can someone clear this up for me, please?

When designing an iSCSI SAN, do you keep the iSCSI traffic separate from the core network? I've got a blade enclosure, a switch and a SAN. I'm planning to connect the blade enclosure and the SAN via the switch and create one VLAN for the iSCSI traffic. I'm then planning to connect the core network to the blade chassis to manage the SAN, keeping the SAN switch's iSCSI VLAN separate from the core network.

However, my platforms team are telling me to create an iSCSI VLAN on the core network and link it into the SAN switch via a trunk, so that the iSCSI VLAN is visible from the commercial network. This doesn't seem right.

Can anyone shed some light through their own environments?




4 Replies

Level 1

What kind of SAN are you planning to install?

I would keep iSCSI and LAN traffic physically separated unless you're using Nexus switches. Why do you want the iSCSI network reachable from networks outside the server network?

If you're only interested in management, I'm certain your iSCSI SAN has out-of-band management that you could route however you like, since it doesn't use jumbo frames.

If you do want to access your iSCSI resources from other networks, be aware of the performance issues that will occur if you're not running jumbo frames on all the networking equipment along the chain.
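For reference, a minimal sketch of what enabling jumbo frames might look like on a Cisco Catalyst switch — the interface names are assumptions, and the supported MTU value varies by platform, so check your hardware's documentation:

```
! Catalyst 4500 family: jumbo MTU is set per interface
interface TenGigabitEthernet1/1
 description iSCSI uplink (assumed port)
 mtu 9198

! Catalyst 3750-X: jumbo MTU is a system-wide setting and requires a reload
system mtu jumbo 9198
```

The key point is that every hop in the iSCSI path — SAN ports, switch ports, trunks and server NICs — must agree on the jumbo MTU, or fragmentation and performance problems follow.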

Hi, thanks for the response.

It's an HP LeftHand P4000 SAN with a P3000 blade enclosure and a 10G HP switch. Before you ask: I'm a Cisco man and manage a Cisco infrastructure. This is our first venture into something non-Cisco, and I always get more sense from this board.

We set up the system like this:

2x10G cables run from our core 4510R+E switch into the HP P3000 blade enclosure, carrying VLANs 4 and 5, which are core server VLANs.

On the HP P3000, in addition to the core switch uplinks, we have 2x10G uplinks connecting to the HP switch, which carries the iSCSI traffic on VLAN 200.

On the HP switch we have the two 10G links coming from the blade enclosure, and the P4000 disk enclosure also connects in via 2x10G links, again on VLAN 200.

There is one Cat5 connection from the HP switch to our core switch, configured as a trunk carrying VLAN 200 between the HP SAN switch and our core network.
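If I've understood that description, the core switch side of that last link would look roughly like this — the interface name is an assumption. Pruning the trunk to VLAN 200 only at least limits what can pass between the two networks:

```
! Core switch side of the Cat5 link to the HP SAN switch (assumed port)
interface GigabitEthernet1/0/1
 description Trunk to HP SAN switch - iSCSI VLAN only
 switchport trunk encapsulation dot1q
 switchport mode trunk
 switchport trunk allowed vlan 200
```

Note that even with the allowed-VLAN list restricted, this still extends the iSCSI broadcast domain onto the core switch, which is the part that feels wrong to you.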

It's that last point that doesn't seem right to me. Our platforms guys are saying this is correct, as they need to manage the iSCSI network via the commercial network and this is the only way to do it?

Is VLAN separation alone enough to isolate SAN iSCSI traffic?



Level 1

I have a similar problem. Did you find any solution?



Peter Thomas
Level 1

Hi Rod,

In a previous life I deployed a 1G iSCSI environment on a pair of 4507R+E chassis using a separate iSCSI VLAN.

All other traffic was managed on the same core switches without any problems.

Solution deployed:


NetApp NAS2040 - replaced with VNXe3300
Dual SPs on the VNXe with 4 x 1G I/O modules per SP
2 x 1G ports teamed for iSCSI and 2 x 1G ports teamed for normal server traffic, per SP (full redundancy)


2 x 4507R+E chassis
1 x WS-X45-SUP7-E per chassis
1 x WS-X4648-RJ45-E per chassis
1 x WS-X4748-RJ45V+E per chassis

Each SP was dual-pathed to both 4507s for both iSCSI and server VLAN traffic.

Port-channels were created on the 4507s for throughput.
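A minimal sketch of what such a port-channel might look like — the interface numbers, channel-group number and VLAN are assumptions, and the link-aggregation mode must match what the storage SP supports:

```
! On each 4507R+E: bundle two member ports towards a storage SP
interface range GigabitEthernet2/1 - 2
 description Aggregated links to VNXe SP (assumed ports)
 switchport mode access
 switchport access vlan 200
 channel-group 1 mode active

interface Port-channel1
 switchport mode access
 switchport access vlan 200
```

Using `mode active` negotiates the bundle with LACP; if the array side only does static aggregation, `mode on` would be the equivalent, but the two ends must agree.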

I'm currently working on a new solution to integrate the following:

EMC VNX5300 (dual SP, 2 x 10GbE cards per SP)
3750-X stack with 2 x 10Gb uplinks per switch
3 x vSphere 5.1 hosts w/ dual 10Gb HP NC550SFP adapters per server

This will interconnect to a Cisco 4500-X 10Gb switch using TwinAx cables (cheaper than SFP modules).

However, this is not straightforward, as the EMC-to-Cisco and HP-adapter-to-Cisco connections require passive or active cables...and I cannot find any white papers to verify which.