Nexus 7000 PBR, VPC, Routing with Netscaler Offload Design Issue

sullivan.p
Level 1

Hi All,

Hoping a Nexus/Datacenter/Routing Guru can help me out here.

Intro

I work for a large corporate/financial organisation, on a very large virtual desktop infrastructure project (15K+ desktops). The company I work for are investigating options for integrating Citrix Netscalers into their datacentre environment so they can utilise the Citrix HDX Insight technology for reporting on Citrix ICA session latency.

The environment consists of two sets of Nexus 7000s (7004s and 7010s) configured in the standard vPC model, with Cisco UCS as the compute platform powering a Citrix XenDesktop environment.

We had a session with the local Citrix tech today, who recommended (after we discussed all the alternatives, e.g. inline, SPAN port, etc.) that the best option was to use Policy Based Routing on the N7Ks closest to the desktop pods to split ICA traffic sourced from, or destined to, the various desktop pods out to certain Netscaler clusters, and leave all other traffic on the normal north/south routing paths. The reasons given were that no single Netscaler is big enough to handle the load we can generate, that they are CPU-based switching/routing devices, meaning all of the other non-ICA traffic would likely be slowed down considerably if we placed them inline, and that price was also an issue - we were told it is less expensive to use six smaller devices instead of two bigger ones.
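To make that a little more concrete, this is roughly how I picture the classification side of it on the N7004s - purely a sketch, the pod subnet and ACL names are made up, and it assumes we only care about ICA (TCP 1494) and session reliability/CGP (TCP 2598):

feature pbr

! Southbound: client traffic heading to Pod-1's VDAs (destination port 1494/2598)
ip access-list ICA-TO-POD1
  permit tcp any 10.10.0.0/16 eq 1494
  permit tcp any 10.10.0.0/16 eq 2598

! Northbound: return traffic from Pod-1's VDAs (source port 1494/2598)
ip access-list ICA-FROM-POD1
  permit tcp 10.10.0.0/16 eq 1494 any
  permit tcp 10.10.0.0/16 eq 2598 any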

NOTE - I've also attached a diagram in PDF format for your reference, which details the proposed physical layout, logical topology and traffic flows.


Background

The network is fairly straightforward: each UCS pod has a separate set of VLANs and subnets whose gateways are provided by the N7004s using HSRP. In the existing environment, all traffic is then just default routed to the upstream N7010s, where it is routed on towards the WAN (in reality it's far more complex than this, but for the sake of brevity, let's assume the WAN is attached to these N7010s).
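For reference, the existing per-pod gateway and default-route setup looks roughly like this (all addresses made up for illustration):

feature hsrp
feature interface-vlan

interface Vlan110
  description UCS Pod-1 desktop subnet
  no shutdown
  ip address 10.10.1.2/24
  hsrp 110
    priority 110
    ip 10.10.1.1

! Anything not locally connected just follows the default towards the N7010s
ip route 0.0.0.0/0 192.168.255.1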

What the architect has recommended is implementing the Citrix Netscalers in an Active/Standby cluster off to the side of (or above) the N7004s, and using PBR to offload all ICA traffic to the Netscalers based on which pod the traffic is destined for or sourced from, while leaving the non-ICA traffic on the normal routing paths, i.e. north directly to the upstream 7010s.

This will involve PBR on ingress into the N7004s from the upstream N7010s, and on ingress from the UCS pods, so that ICA traffic is directed to the correct Netscaler cluster for processing. The Netscalers will run in layer 3 mode (not pass-through) and then default route the traffic back to the N7004s - which, in my opinion, effectively creates a loop. In theory, though, take traffic coming southbound from the WAN as an example: after the N7004s send it to the Netscalers as a result of PBR, the return traffic handed back to the N7004s by the Netscalers should then be forwarded downstream into the UCS desktop pods by the normal routing table... no? Yes? No?
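This is the sort of PBR config I'm imagining to do that - again just a sketch, with 10.20.0.6 standing in for the Netscaler pair's SNIP on a transit VLAN off the N7004s, port-channel10 standing in for whatever the routed uplink towards the N7010s is, and the ACLs from above:

route-map PBR-ICA-SOUTH permit 10
  match ip address ICA-TO-POD1
  set ip next-hop 10.20.0.6

route-map PBR-ICA-NORTH permit 10
  match ip address ICA-FROM-POD1
  set ip next-hop 10.20.0.6

! Ingress from the upstream N7010s
interface port-channel10
  ip policy route-map PBR-ICA-SOUTH

! Ingress from the UCS pod SVI
interface Vlan110
  ip policy route-map PBR-ICA-NORTH

! The Netscaler-facing transit SVI deliberately has no "ip policy" applied, so
! when the Netscalers default-route traffic back, the N7004s forward it by the
! normal routing table rather than policy-routing it again.

My assumption is that last point is what stops this being a true forwarding loop, but I'd appreciate confirmation.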

The Netscalers normally run in reverse-proxy mode, replacing the src-ip, but I've been assured they can run in normal routed mode also.

My intuition tells me this isn't a great idea, but I'm having trouble actually pinning down some exact reasons for this. In theory it should work using PBR, but I'm suspicious it will cause us grief down the track. This is a very non-standard design and I would welcome people's feedback on any issues you think this might cause now and/or down the track.

My particular concerns are around what impact this will have on:

  • vPC layer 3 rules
  • Troubleshooting(!)
  • Load on the N7004s, as all ICA traffic now crosses them twice (once towards the Netscalers and once back)
  • Reverse path forwarding (uRPF) checks - see the sketch just after this list
  • Loop prevention mechanisms on NX-OS
  • Multicast and/or network control protocols
  • Any other areas I haven't thought of.
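On the uRPF point, what I had in mind checking (and possibly adjusting) is something like the following - interface numbering is hypothetical, with Vlan120 as the Netscaler transit SVI:

! See whether strict uRPF is configured on the SVIs/uplinks in the PBR path
show running-config interface Vlan110 | include verify

! Strict mode on the Netscaler transit SVI would drop traffic coming back from
! the Netscalers (source addresses that don't route back via that VLAN);
! loose mode is one possible way around it
interface Vlan120
  ip verify unicast source reachable-via any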

Look forward to your responses,

Cheers,

Patrick.

1 Reply

Marwan ALshawi
VIP Alumni

Were you able to fix it?

Why don't you configure source NAT instead?

Inbound traffic should go to the VIP of the LB.

Outbound: because of the source NAT, return traffic for any session that came in through the VIP is routed back to the LB automatically - this is more scalable than PBR.
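On the N7K side that would mean nothing more than the transit SVI the VIP/SNIP sits on - roughly like this (addresses are just examples):

interface Vlan120
  description Netscaler transit / VIP subnet
  no shutdown
  ip address 10.20.0.2/24
  hsrp 120
    ip 10.20.0.1

! Clients connect to the VIP (e.g. 10.20.0.100), which is reached by normal routing.
! The Netscaler then sources its connection to the VDA from its SNIP, so the VDA's
! reply comes back to the Netscaler by itself - no route-maps needed on the N7004s.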