ACE Shared VLAN context design

Paul Pinto
Level 1

Good day,

I am presented with a design requirement and would like some advice.

There will be, say, two servers in a serverfarm (message brokers), and all traffic will be directed to this serverfarm. The challenge is that there are potentially dozens of applications/services that will be fronted by these message brokers. How do we manage and monitor resources on the ACE (the DataPower servers do this themselves, but surely we need to do this on the ACE as well)? Troubleshooting could also end up being a bit of a nightmare.

- Could we create a context per application/service with the same real servers in each (being the DataPower message brokers), so a VIP per service?

- We would need to have the servers in each context on the same network, so a shared VLAN. Is this an issue?

- Could we have a unique subnet per application/service on the client side and the shared VLAN on the server side of each context?

- Or a shared VLAN on both the client and server side? Which is preferred or best practice?

- Are there any issues with this type of implementation besides the shared-vlan-hostid requirement? (See the sketch after this list.)

- I am sure there would be issues with the different serverfarms not being aware of each other if something like least-connections is used, but what about round-robin? Also, how would functionality like stickiness be affected? I don't think it would be, as this is context-specific in my understanding.
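For what it is worth, my understanding is that the shared-VLAN piece is handled globally in the Admin context with something like the following (the host IDs below are just example values, and each ACE sharing the VLAN would need a different one):

shared-vlan-hostid 1
peer shared-vlan-hostid 2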

Any advice or input would be greatly appreciated.

Thanks in advance.

Paul.

4 Replies

Gilles Dufour
Cisco Employee

Paul,

I don't see the need for multiple contexts.

Are your servers listening on different ports?

Or do they have multiple IP addresses on just one physical box?

You could simply split the applications using their associated destination ports.

Have multiple serverfarms and multiple policies, one for each application/port.

You can then collect stats per policy or serverfarm.
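As a rough illustration only (the names, ports, and addresses below are made-up examples), that could look something like this in a single context:

rserver host BROKER1
  ip address 10.1.1.11
  inservice
rserver host BROKER2
  ip address 10.1.1.12
  inservice

serverfarm host SF-APP-A
  rserver BROKER1 7080
    inservice
  rserver BROKER2 7080
    inservice
serverfarm host SF-APP-B
  rserver BROKER1 7443
    inservice
  rserver BROKER2 7443
    inservice

class-map match-all VIP-APP-A
  2 match virtual-address 192.168.10.10 tcp eq 7080
class-map match-all VIP-APP-B
  2 match virtual-address 192.168.10.10 tcp eq 7443

policy-map type loadbalance first-match LB-APP-A
  class class-default
    serverfarm SF-APP-A
policy-map type loadbalance first-match LB-APP-B
  class class-default
    serverfarm SF-APP-B

policy-map multi-match CLIENT-VIPS
  class VIP-APP-A
    loadbalance vip inservice
    loadbalance policy LB-APP-A
  class VIP-APP-B
    loadbalance vip inservice
    loadbalance policy LB-APP-B

CLIENT-VIPS would then be applied with "service-policy input" on the client-side VLAN interface, and each serverfarm/policy gives you its own set of counters.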

No need to increase complexity here with multiple contexts.

I may have misunderstood the way those broker servers work.

If this is the case, forgive me, and try to put me on the right track.

Thanks.

Gilles.

Gilles,

As always, thank you for your response and suggestions.

The DataPower message brokers will mostly be listening on HTTP (say port 7080) and HTTPS (say port 7443). There may be instances where a particular application/service has its own port; where an application/service uses its own port, we can implement as you suggested. The DataPower devices are essentially XML security gateways. The brokers provide integration with transaction processing systems such as CICS and IMS, advanced messaging, and various other functions. These form part of an ESB.

My concern is how we can manage and monitor the resource allocation and usage for a given application/service that the DataPower message brokers are front-ending. If, say, 20 applications/services are all connecting on HTTP port 7080, and 10 of these applications potentially require more cps, tps, bandwidth, etc. than the remaining 10, how would we manage this? Also, if a problem arises for a particular application/service and resource usage stats indicate a problem (say, denies), how would we be able to identify the application/service in question (which would then be impacting other applications/services)?

Just posing the potential issues, as I would not like to run into these later and have to tell the client that a re-design may be required.
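One option I am considering, along the lines of my earlier question, is a VIP per application/service, each tied to its own serverfarm object over the same two brokers, purely so that counters can be pulled per application. A rough sketch, with made-up names and addresses:

class-map match-all VIP-SERVICE-X
  2 match virtual-address 192.168.10.21 tcp eq 7080

serverfarm host SF-SERVICE-X
  rserver DATAPOWER1 7080
    inservice
  rserver DATAPOWER2 7080
    inservice

Stats could then be read per application with "show serverfarm SF-SERVICE-X detail" or "show service-policy <policy-name> detail", but I am not sure whether that approach scales to dozens of services, or whether there is a better way.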

Thanks again for the input.

Paul.

Hi Gilles,

I hope I was able to clarify the issue a bit better for you. Also, does my concern make sense, and does it change your view?

Any input would be appreciated.

Thanks again.

Paul

Hi Gilles,

I wonder if you could shed some light on a development here.

We have a service that is currently being load balanced. Very basic:

1. VIP

2. port 9081

3. two servers

4. one serverfarm

5. stickiness required (originally and currently on source IP)

The issue now is that the client connects via the DataPower servers mentioned previously, which proxy the connections, making source-IP stickiness pointless (we are not getting any load balancing), and the client's HTTP connection gets modified by DataPower into SOAP over HTTP (xmlns).

My question is: could we use layer 4 payload stickiness? I have attached the data portion of the first frame after the TCP setup (all traffic is seen as TCP; I have also added port 9081 to Wireshark and only see two HTTP frames once the whole data transaction has completed, and no cookies).

What could we use from this, if anything, for layer 4 payload stickiness? I was thinking of the FE_GEN_USER_ID field, if it is unique for each user, so configuration something like:


sticky layer4-payload PAS-STICKY
  layer4-payload FE_GEN_USER_ID
  timeout 20
  replicate sticky
  serverfarm SFARM-PAS

policy-map type loadbalance generic first-match LB-POLICY
  class class-default
    sticky-serverfarm PAS-STICKY
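That said, presumably the sticky value has to be selected with an offset/length and begin/end patterns rather than by a field name, so the sticky group might need to look more like the following (the pattern, offset, and length here are pure guesses and would have to be verified against the actual payload in the capture; a generic parameter map with a larger max-parse-length may also be needed if the field sits deep in the payload):

sticky layer4-payload PAS-STICKY
  serverfarm SFARM-PAS
  timeout 20
  replicate sticky
  layer4-payload offset 0 length 200 begin-pattern "FE_GEN_USER_ID"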

I am a bit stuck on this one. Many of the other existing services that are currently being load balanced will eventually migrate to being front-ended by the DataPower devices, so I am sure we will run into this again.

Any assistance would be greatly appreciated, as always.

Thanks in advance.

Paul.
