vserver idle timer for direct access virtual

randallcourtney
Level 1

We have a virtual server configured for direct access on which we tried to set the idle timer to infinite (0), but about 10 minutes after doing so the CSM-S stopped responding. Is there a known problem with setting the timer to infinite on a Cat 6500 with CSM-S v2.1.1?

vserver DIRECT_ACCESS
  virtual 0.0.0.0 0.0.0.0 any
  serverfarm DIRECT_ACCESS
  idle 0
  replicate csrp connection
  no persistent rebalance
  inservice

serverfarm DIRECT_ACCESS
  no nat server
  no nat client
  predictor forward
  description Direct Access to All Back-end Servers from Outside

12 Replies

Gilles Dufour
Cisco Employee

There is no known issue with the infinite timeout.

You should capture a sniffer trace of the CSM port channel when the CSM stops responding, as well as a 'show tech' on the Cat6k.
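A SPAN session is one way to get that trace. The port-channel number of the CSM's internal EtherChannel varies by chassis and slot, so Po259 below is only a placeholder (check 'show etherchannel summary'), and Gi2/1 is an assumed port with the sniffer attached:

monitor session 1 source interface Port-channel259 both
monitor session 1 destination interface GigabitEthernet2/1

Then save the output of 'show tech-support' to a file for TAC.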

Regards,

Gilles.

Is this true even if the vserver has a very high number of connections? Could the CSM have a resource issue with handling that many infinite connections?

The CSM can handle approximately 1 million connections.

If you are close to this number of active connections, that could be a problem.

But that's normal behavior - not a known issue.

The risk of approaching this limit is higher with an infinite idle timeout.

Gilles.

I'm going to adjust the idle timer to a finite value for the forwarding vip that handles incoming traffic. Now, where on the CSM-S is the idle timer for outbound connections that are initiated by the real servers?

The solution for outbound connections is to catch them with a vserver as well if you want to modify their idle timeout.
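For example, a second wildcard vserver restricted to the server-side VLAN could catch server-initiated traffic. The name, VLAN number, and 4-hour idle value here are only placeholders for illustration:

vserver OUTBOUND
  virtual 0.0.0.0 0.0.0.0 any
  vlan 30
  serverfarm DIRECT_ACCESS
  idle 14400
  inservice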

Regards,

Gilles.

I found the issue with setting the idle timer to infinite on the vserver. When you set the timer to infinite, you must also set the environment variable INFINITE_IDLE_TIME_MAXCONNS to a value large enough to handle all the connections. We decided to try this: we set the variable to 1M and the idle timer to 0.
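For anyone hitting the same thing, CSM environment variables are set under the module configuration; slot 5 is assumed here, and 1000000 is the 1M value we used:

module ContentSwitchingModule 5
  variable INFINITE_IDLE_TIME_MAXCONNS 1000000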

The problem I see now that I have an infinite timeout is that none of the UDP connections are being cleaned up; the connection count is growing and will eventually hit the limit.

The vip with the idle time set is a forwarding vip that we use for all incoming and outgoing connections.

vserver DIRECT_ACCESS
  virtual 0.0.0.0 0.0.0.0 any
  serverfarm DIRECT_ACCESS
  replicate csrp connection
  no persistent rebalance
  inservice

serverfarm DIRECT_ACCESS
  no nat server
  no nat client
  predictor forward
  description Direct Access to All Back-end Servers from Outside

Is there any way to have the UDP connections closed for the vserver without setting the idle timer?

You could separate your vserver into 2:

one for UDP and one for TCP.

You could keep an infinite idle timeout for TCP. For UDP, I think you could use a low idle timeout and configure sticky source IP or 'predictor hash address source'. That way, if you lose the flow, a new one is created and linked to the same real server.
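Something like this for the UDP side - the name and the 120-second idle value are just examples, not recommended values:

vserver DIR_UDP
  virtual 0.0.0.0 0.0.0.0 udp 0
  serverfarm DIRECT_ACCESS
  idle 120
  no persistent rebalance
  inservice

('predictor hash address source' would go under the serverfarm, in a setup where the farm has real servers rather than 'predictor forward'.)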

G.

We decided to try splitting the vservers, but ran into issues when bringing the new vservers online.

We are seeing TCP/UDP/ICMP traffic on the vserver, so we created new TCP and UDP vservers and made them operational, leaving the current "any" protocol vserver enabled.

The vserver for UDP started showing connections immediately, but the TCP vserver showed 0 connections and traffic started timing out. We then disabled the ANY protocol vserver, changed it to ICMP, and re-enabled it, but that did not correct the problem. The new TCP vserver had to be disabled and the "any protocol" vserver reconfigured for connections to resume.

We are currently using the ANY protocol vserver and the UDP vserver simultaneously without issues, but are confused why using 3 vservers - (ICMP/ANY)/TCP/UDP - did not work. Could the existing "any protocol" vserver have been conflicting with the new vservers?

My assumption was that it would work even if you defined vservers for TCP and UDP and left the ANY vserver in place to handle ICMP or any other protocol. Is there no precedence logic in place?

Randall Courtney

Randall,

I did the test exactly as you described and I did not see a problem. The TCP traffic was correctly matched on the corresponding vserver.

Here is the config I used

vserver ANY
  virtual 0.0.0.0 0.0.0.0 any
  serverfarm ROUTE
  persistent rebalance
  inservice
!
vserver ANY-TCP
  virtual 0.0.0.0 0.0.0.0 tcp 0
  serverfarm ROUTE
  persistent rebalance
  inservice
!
vserver ANY-UDP
  virtual 0.0.0.0 0.0.0.0 udp 0
  serverfarm ROUTE
  persistent rebalance
  inservice
!

And the following show commands indicate that the ICMP traffic is matched by the ANY vserver and the telnet traffic by ANY-TCP:

c6k-msfc1#sho mod csm 5 conn vser any det

      prot vlan source                destination           state
  ----------------------------------------------------------------------
  In  ICMP 20   192.168.20.38         192.168.30.27         ESTAB
  Out ICMP 30   192.168.30.27         192.168.20.38         ESTAB
      vs = ANY, ftp = No, csrp = False

c6k-msfc1#sho mod csm 5 vser name any-tcp

  vserver        type  prot virtual           vlan state        conns
  ---------------------------------------------------------------------------
  ANY-TCP        SLB   TCP  0.0.0.0/0:0       ALL  OPERATIONAL  1

c6k-msfc1#sho mod csm 5 conn vser any-tcp det

      prot vlan source                destination           state
  ----------------------------------------------------------------------
  In  TCP  20   192.168.20.38:33793   192.168.30.27:23      ESTAB
  Out TCP  30   192.168.30.27:23      192.168.20.38:33793   ESTAB
      vs = ANY-TCP, ftp = No, csrp = False

If that does not work for you, we would need to see the exact config and some show commands to see the status of each vserver.

Regards,

Gilles.

Here is the config we used, which looks the same as your config. I fully expected it to work like your test.

Showing the status and conns of the new TCP vserver after the change showed 0 active conns, but the number of total conns was increasing rapidly. All new TCP traffic started timing out, and normal traffic was not restored until we disabled the new TCP-only vserver.

vserver DIRECT_ACCESS
  virtual 0.0.0.0 0.0.0.0 any
  serverfarm DIRECT_ACCESS
  replicate csrp connection
  no persistent rebalance
  idle 3600
  inservice

vserver DIR_ACC_TCP
  virtual 0.0.0.0 0.0.0.0 tcp any
  serverfarm DIRECT_ACCESS
  replicate csrp connection
  no persistent rebalance
  idle 0
  inservice

vserver DIR_ACC_UDP
  virtual 0.0.0.0 0.0.0.0 udp any
  serverfarm DIRECT_ACCESS
  replicate csrp connection
  no persistent rebalance
  inservice

Not sure what happened.

Are you able to reproduce this problem at any time?

G.

It's a 24x7 production environment, so we are not able to test. Currently the ANY protocol vserver is handling all TCP and ICMP connections, and we have a very long timeout - 120 hrs - configured for it.

The UDP vserver is handling all UDP traffic and is configured to clear connections after 1 hr. UDP was the protocol causing the most issues when the ANY vserver was set to infinite.

The connection count is not building as fast since the UDP flows were moved to the other vserver, and we are likely to keep this config in place as long as we don't see a high connection count building up before the timer starts cleaning up.
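To watch the count build-up, the same show syntax used earlier in the thread works per vserver (slot 5 assumed, as above):

sho mod csm 5 vser name DIRECT_ACCESS
sho mod csm 5 conn vser DIRECT_ACCESS det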

If the conn count does start building, we may try configuring a new TCP vserver and then removing the ANY vserver config completely, or rebooting the CSM after the changes. Anything else that you recommend?

Randall Courtney
