01-24-2023 09:57 AM - edited 08-20-2024 11:25 AM
This article provides an example CLI configuration for a Citrix NetScaler load balancer working with Cisco ISE. The configuration load balances both RADIUS (denoted with "rad") and TACACS (denoted with "tac"), each running on its own respective servers/PSNs. The example in this article was built and tested on NetScaler 11.0 and 12.0.
For a recommended configuration on F5 BIG-IP, please see the article How To: Cisco & F5 Deployment Guide: ISE Load Balancing Using BIG-IP.
A persistence rule keeps communications from a specific client pinned to the same RADIUS PSN. This configuration uses the Calling-Station-ID (client MAC address) as the key for persistence. The Framed-IP-Address attribute is not used because it is not reliably present; during the initial authentication the endpoint typically has not yet been assigned an IP address.
add policy expression [radPolicyName] "CLIENT.UDP.RADIUS.ATTR_TYPE(31)"
The server entries include a server name and the IP address of the node. The server name does not have to match the actual host name of the PSN, but using matching names makes the configuration easier to track.
add server [servername_radPSN1] [radPSN1_IP]
add server [servername_radPSN2] [radPSN2_IP]
add server [servername_tacPSN1] [tacPSN1_IP]
add server [servername_tacPSN2] [tacPSN2_IP]
Service groups allow you to group together the servers created above. Three service groups are created: RADIUS authentication, RADIUS accounting, and TACACS. RADIUS authentication and accounting are separate groups because binding a group to a port only allows a single port.
add serviceGroup [radiusAuth_serviceGroupName] RADIUS -maxClient 0 -maxReq 0 -cip DISABLED -usip YES -useproxyport YES -cltTimeout 60 -svrTimeout 60 -CKA NO -TCPB NO -CMP NO
add serviceGroup [radiusAcct_serviceGroupName] RADIUS -maxClient 0 -maxReq 0 -cip DISABLED -usip YES -useproxyport YES -cltTimeout 60 -svrTimeout 60 -CKA NO -TCPB NO -CMP NO
add serviceGroup [tacacs_serviceGroupName] TCP -maxClient 0 -maxReq 0 -cip DISABLED -usip YES -useproxyport YES -cltTimeout 60 -svrTimeout 60 -CKA NO -TCPB NO -CMP NO
The virtual servers and VIPs are what the network access devices (NADs) use to reach the ISE nodes for RADIUS and/or TACACS requests, instead of the individual PSN node IPs. This simplifies configuration on the NAD because PSNs can be added to or removed from behind the load balancer without changing the NAD configuration.
RADIUS persistence is attached to the virtual server based on the persistence rule created previously. A backup persistence rule is also assigned, based on the source IP (NAD IP), in the event a Calling-Station-ID is not present in the RADIUS packet. TACACS (TCP) persistence will only be configured to use the source IP.
The persistence timeout has a default value of 2 minutes, which is too short, so in this example it is set to 2 hours (120 minutes) for RADIUS and 5 minutes for TACACS. It is recommended to set this value to 1 hour beyond the higher of the wired and wireless reauthentication timers. For example, if the wireless reauthentication timer is 4 hours and wired is 8 hours, set this timeout to 9 hours (540 minutes); a worked example with these values follows the commands below.
add lb vserver [radiusAuth_vServer_Name] RADIUS [radiusVIPip] 1812 -persistenceType RULE -rule [radPolicyName] -timeout 120 -persistenceBackup SOURCEIP -backupPersistenceTimeout 120
add lb vserver [radiusAcct_vServer_Name] RADIUS [radiusVIPip] 1813 -persistenceType RULE -rule [radPolicyName] -timeout 120 -persistenceBackup SOURCEIP -backupPersistenceTimeout 120
add lb vserver [tacacs_vServer_Name] TCP [tacacsVIPip] 49 -persistenceType SOURCEIP -timeout 5
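As a worked example of the sizing guidance above (assuming, purely for illustration, that the 8-hour wired reauthentication timer is the highest value), the same RADIUS virtual servers would be created with a 9-hour (540-minute) persistence timeout:
add lb vserver [radiusAuth_vServer_Name] RADIUS [radiusVIPip] 1812 -persistenceType RULE -rule [radPolicyName] -timeout 540 -persistenceBackup SOURCEIP -backupPersistenceTimeout 540
add lb vserver [radiusAcct_vServer_Name] RADIUS [radiusVIPip] 1813 -persistenceType RULE -rule [radPolicyName] -timeout 540 -persistenceBackup SOURCEIP -backupPersistenceTimeout 540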
Bind the virtual servers to their respective service groups.
bind lb vserver [radiusAuth_vServer_Name] [radiusAuth_serviceGroupName]
bind lb vserver [radiusAcct_vServer_Name] [radiusAcct_serviceGroupName]
bind lb vserver [tacacs_vServer_Name] [tacacs_serviceGroupName]
Load balancing groups allow virtual servers to be grouped together so they share persistence. Persistence is configured on the load balancing group again even though it is already configured on the virtual server; the persistence configuration must match between the virtual server and the load balancing group.
add lb group [radGroupName] -persistenceType RULE -rule [radPolicyName] -persistenceBackup SOURCEIP
add lb group [tacGroupName] -persistenceType SOURCEIP
bind lb group [radGroupName] [radiusAuth_vServer_Name]
bind lb group [radGroupName] [radiusAcct_vServer_Name]
bind lb group [tacGroupName] [tacacs_vServer_Name]
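To confirm how each group was created and which virtual servers are bound to it, the groups can be inspected from the CLI; a quick check, using the same placeholder names as above:
show lb group [radGroupName]
show lb group [tacGroupName]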
A health monitor probe allows the NetScaler load balancer to monitor the PSN node availability. This is important so the NetScaler does not forward RADIUS or TACACS requests to a server that is offline/unavailable.
A response code of 2 (success) or 3 (failure) is accepted as a valid response indicating that the server is online.
The RADIUS monitor only checks port 1812 (authentication); if authentication is down, we are not concerned with port 1813 (accounting). We do not want accounting packets sent to a server that is not available to authenticate, so if authentication is down the server is marked as unavailable.
The default response timeout is 2 seconds, which is too short, so it is set here to 10 seconds. With 3 retries (3 x 10 = 30 seconds), the interval is set 1 second higher at 31 seconds.
add lb monitor [radiusMonitorName] RADIUS -respCode 2-3 -userName [user] -password [userPassword] -radKey [radiusSharedSecret] -LRTM DISABLED -destPort 1812 -interval 31 -respTimeout 10
add lb monitor [tacacsMonitorName] TCP -LRTM DISABLED -destPort 49 -interval 31 -respTimeout 10
Note: It is recommended to suppress the username used in the health monitors within ISE. This prevents an excessive number of failed/passed authentications from filling the logs.
Bind the servers and their respective RADIUS and TACACS ports to the service groups.
bind serviceGroup [radiusAuth_serviceGroupName] [servername_radPSN1] 1812
bind serviceGroup [radiusAuth_serviceGroupName] [servername_radPSN2] 1812
bind serviceGroup [radiusAcct_serviceGroupName] [servername_radPSN1] 1813
bind serviceGroup [radiusAcct_serviceGroupName] [servername_radPSN2] 1813
bind serviceGroup [tacacs_serviceGroupName] [servername_tacPSN1] 49
bind serviceGroup [tacacs_serviceGroupName] [servername_tacPSN2] 49
Bind the health monitors created above to their service groups. A single monitor binding per service group monitors all servers in that group.
bind serviceGroup [radiusAuth_serviceGroupName] -monitorName [radiusMonitorName]
bind serviceGroup [tacacs_serviceGroupName] -monitorName [tacacsMonitorName]
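To confirm the monitor bindings and see whether the NetScaler is marking each PSN as UP or DOWN, the service groups can be inspected from the CLI (using the same placeholder names):
show serviceGroup [radiusAuth_serviceGroupName]
show serviceGroup [radiusAcct_serviceGroupName]
show serviceGroup [tacacs_serviceGroupName]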
A policy data set will allow grouping multiple PSN IP addresses into a single data set that can be used in the RNAT ACL.
add policy dataset [radDataSetName] ipv4
bind policy dataset [radDataSetName] [radPSN1_IP] -index 1
bind policy dataset [radDataSetName] [radPSN2_IP] -index 2
add policy dataset [tacDataSetName] ipv4
bind policy dataset [tacDataSetName] [tacPSN1_IP] -index 1
bind policy dataset [tacDataSetName] [tacPSN2_IP] -index 2
Reverse Network Address Translation (RNAT) allows traffic originating from the PSNs to appear to come from the virtual IP. This simplifies configuration on the NAD because RADIUS CoA (UDP/1700) can be configured using the VIP instead of each individual PSN IP.
Only RNAT for PSN-to-NAD traffic is configured. Never NAT traffic from the NAD to the PSN, because the ISE node needs to see the real NAD source address; NAT in that direction would make the authentication/accounting requests appear to come from the load balancer.
add ns acl [coaACLname] ALLOW -srcIP = [radDataSetName] -destPort = 1700 -protocol UDP
add ns acl [tacACLname] ALLOW -srcIP = [tacDataSetName] -destPort = 49 -protocol TCP
apply ns acls
set rnat [coaACLname] -natIP [radiusVIPip]
set rnat [tacACLname] -natIP [tacacsVIPip]
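To verify that the ACLs were applied and that the RNAT entries reference the intended NAT IPs, the current ACL and RNAT configuration can be displayed from the CLI:
show ns acl
show rnat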
Hello.
What can we do on Cisco ISE 3.1 to complete the configuration? That is, what kind of device template should we create on ISE in order to use ISE as a TACACS+ server for a Citrix device?
Thanks.
Hi Brad,
In this configuration example, when we look at the overall design, what is the official recommendation for traffic routing?
Should the load balancer be the default gateway of the PSNs to ensure symmetric traffic flows, or is there some kind of policy routing in between to send only RADIUS, TACACS, and CoA traffic back via the load balancer and the rest of the PSN traffic (AD, DNS, etc.) via the normal default gateway?
Thank you in advance.
Great article, thank you.
One item I've observed, and have a case open with Citrix about, is that backup persistence with rule-based persistence is not an option. The config may take, but it won't be applied.
Hello Brad,
I have a question regarding the special scenario where Citrix Netscaler is not the default gateway.
In our scenario, the default gateway is a dedicated firewall and the NetScaler just load balances RADIUS requests.
All proxied requests arrive at Cisco ISE with a dedicated source IP address (a NetScaler VIP, since we have configured a dedicated net profile for the RADIUS virtual server).
The whole RADIUS flow works fine, but the problem is the CoA session. This session originates from Cisco ISE and, from the logs, is generated with:
SRC-IP: Cisco ISE IP
DST-IP: NetScaler VIP IP
Therefore, when the NetScaler receives this CoA packet, it does not know where to forward it.
Since the CoA packet contains the NAS-IP address, which is the correct destination IP, I assume that the NetScaler should extract this information and forward the CoA packet to the extracted NAS-IP.
Do you have any advice or examples of how to handle this scenario on NetScaler?
Thank you in advance.
Hi
Did you hear back from Citrix support regarding backup persistence for rule-based persistence?
I'm using persistence groups to share persistence between the RADIUS authentication and accounting VIPs.
When I add the following from the CLI:
add lb group ISE-PERSISTENCY -persistenceType RULE -persistenceBackup SOURCEIP -backupPersistenceTimeout 120 -timeout 120 -rule "CLIENT.UDP.RADIUS.ATTR_TYPE(31)"
It appears in the GUI with backup persistence set to None. When the persistence group is bound to the virtual servers, it looks correct.
Persistence works for the most part, but I can't see the SOURCEIP sessions listed on the "Virtual Server persistence sessions" page.
Regular RADIUS auth/acct works fine, but occasionally Cisco TrustSec traffic (which would use source IP persistence) acts strangely, so something isn't quite right.
Cheers
Andy