3158 Views · 0 Helpful · 11 Replies

TMS REDUNDANCY

KRISHNA K V
Level 1

Hi,

One of our customers has a main site and a DR site for video conferencing. They are planning to implement redundant VC infrastructure components such as VCS, Conductor, MCU and TMS.

I need your expert opinion on deploying TMS in a main site and DR site scenario.

The main site and DR site are in totally different networks. We can't extend the main site IP addressing to the DR site, so the redundant components will have IP addresses from a different subnet.

TMS Server A: 10.10.10.10

TMS Server B in DR Site : 10.20.20.20

DNS A record for TMS server is tms.example.com --> 10.10.10.10

I checked the various possible options for TMS Redundancy as mentioned in the link.

http://www.cisco.com/en/US/docs/telepresence/infrastructure/tms/config_guide/Cisco_TMS_Server_Redundancy_Config_Guide_13-0.pdf

The fully redundant option uses load balancer concepts.

I have the following questions.

1. Could you please let me know which load balancer can be used for the TMS redundancy solution, considering the IP addresses of the TMS servers are in different subnets?

2. In the Multiple Servers, Manual Cutover option, if the primary server fails, shall we use 10.20.20.20 as the Cisco TMS Server Local IPv4 Address and update the DNS record to point to 10.20.20.20?

Krishna.



11 Replies

daleritc
Cisco Employee

Hi Krishna,

You're looking at a dated document. You should be looking at the latest and greatest Cisco TMS Admin Guide:

http://www.cisco.com/en/US/docs/telepresence/infrastructure/tms/admin_guide/Cisco_TMS_Admin_Guide_14-3.pdf

Starting on page 295 - Redundant Deployment section.

And the information provided there is specifically what we support.

rgds,

Dale

Dear Dale,

Thanks for your information. I checked the redundant deployment section.

Only two models are supported:

1. Load Balancer

2. Deploying as Hot Standby

In the load balancer section, both TMS servers are placed in the same site. In my scenario, I have two different sites with a TMS in each site. My question is:

Will any load balancer work between the sites?

In the Deploying as Hot Standby section, it is mentioned to change the IP address of the secondary TMS server to the IP address of the primary TMS server.

Instead of making this IP address change, if we update the DNS entry to the secondary TMS server's IP address, it should work. My question is: is this DNS update model supported or not?

Krishna.

Will any load balancer work between the sites?

Your other option would be to use a network technology such as OTV to extend the subnet over both data centers.  This way you can run a stretched VM cluster between both data centers for physical redundancy while maintaining a single logical TMS server.

@Nick: Maybe possible, but not supported nor tested. What's been tested and verified is what we've documented.

@Krishna: In the case of two DCs, you could do a combination of the two (i.e. NLB setup and hot standby setup), meaning the LB could sit out in front of both DCs with each DC 'housing' its own TMS but the LB only pointing to one DC (TMS) at a time, meaning the TMS server at DC 2 (and SQL server, if external SQL is being used) would be completely inactive, i.e. all services off. When and if you get a failure on the TMS in DC 1, you'd then need to cut over to the TMS in DC 2. However, there are obviously going to be some manual steps you'll need to do if and when that happens. For example, an LB tweak to point to DC 2. DNS can do this as well, but keep in mind how long it may take for your DNS servers to update. Another thing to keep in mind in this 'hybrid' model is that you will want to regularly back up your TMS db (and appropriate files) to the secondary DC.
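The manual cutover Dale describes can be sketched in a few lines: probe the active TMS, and only when DC 1 is confirmed down does the operator repoint the LB (or DNS) at DC 2. This is a minimal illustration, not a supported tool; the DC names and the probe callable are hypothetical stand-ins.

```python
# Hypothetical sketch of the manual-cutover decision: the probe would
# be e.g. an HTTP check against the TMS web interface.
DC1 = {"name": "DC1", "tms_ip": "10.10.10.10"}
DC2 = {"name": "DC2", "tms_ip": "10.20.20.20"}

def decide_cutover(probe, active=DC1, standby=DC2, retries=3):
    """Return the DC that should receive traffic.

    Several retries guard against a single dropped probe
    triggering an unnecessary failover.
    """
    for _ in range(retries):
        if probe(active["tms_ip"]):
            return active      # DC 1 healthy: no change
    return standby             # DC 1 down: operator cuts over to DC 2

# Stubbed probes (no real network access):
print(decide_cutover(lambda ip: True)["name"])   # DC1
print(decide_cutover(lambda ip: False)["name"])  # DC2
```

In practice the "probe" step is a human or monitoring system noticing the outage; the point is simply that the standby only becomes active after a deliberate decision, never automatically via round-robin DNS.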

rgds,

Dale

Re: the "not supported or tested" thing, TMS supports VM and VM supports stretched cluster.

OTV is a way of providing stretched cluster when you have pesky layer 3 in the way - it's not *really* going outside the design.

FYI I've been running it for about 65 months, no issues.

Just being honest, Nick. Meaning I know it's not something TMS engineering has tested, i.e. OTV and stretched cluster. However, good to know that it works OK.

And it would/could be easier than what I'm suggesting to Krishna, if Krishna has the VM environment to do so.

Cheers,

Dale

Dear Dale,

You are suggesting to keep two TMS servers sharing the same external SQL database and keep the secondary TMS server inactive. Is my understanding right?

Is there a way to avoid the load balancer and keep it simple with manual intervention, by having two TMS servers in a cluster and creating two templates for each device? When the main site is down, we would change the IP address of the TMS in the database to point to the secondary TMS server and push the new config to the endpoints.

Krishna. 

Hi Krishna!

More or less this is what the hot/warm standby method describes that you find in the admin guide:

http://www.cisco.com/en/US/docs/telepresence/infrastructure/tms/admin_guide/Cisco_TMS_Admin_Guide_14-3.pdf

Btw, using a DNS entry with a lower TTL can also help, instead of switching IPs as mentioned in the guide.
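The TTL point can be made concrete with a bit of arithmetic: clients that cached the old A record keep using it until the TTL expires, on top of the time it takes to notice the outage and update the record. The timing figures below are illustrative assumptions, not measured values.

```python
# Rough worst-case delay before all clients reach the standby TMS
# after a DNS-based cutover. All inputs are in seconds.
def worst_case_failover_seconds(ttl, detection, dns_update):
    # detection:  time to notice DC 1 is down and decide to cut over
    # dns_update: time to push the new A record to the DNS servers
    # ttl:        how long clients may keep the stale cached record
    return detection + dns_update + ttl

# With a typical default TTL of 3600 s:
print(worst_case_failover_seconds(ttl=3600, detection=300, dns_update=60))  # 3960
# Lowering the TTL to 60 s shrinks the stale-cache tail dramatically:
print(worst_case_failover_seconds(ttl=60, detection=300, dns_update=60))    # 420
```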

Be aware that different TMS versions have to be handled differently; especially with the legacy provisioning model with OpenDS it can get complicated.

Upgrading to the latest is recommended.

Please remember to rate helpful responses and identify helpful or correct answers.


No, that is not what I'm saying. What I'm saying is to have two completely separate (and equal) TMS setups, one in each DC...meaning one DC is 'active' and the other is 'passive'. So think of the DCs as you would a Hot Standby setup.

In this setup, DC 1 would have its own TMS and SQL server and be the active site. DC 2 would also have its own TMS and SQL server...completely equal to DC 1...meaning same type of server specs, OS, TMS and SQL versions, etc. However, the TMS and SQL servers in DC 2 would be completely passive...meaning all TMS related services and IIS would be off.

You can then do asynchronous mirroring of the db between the two DCs. However, the db at the secondary DC isn't active in asynchronous mirroring, so you can't connect to it...meaning this is why you can't have the TMS related services on the TMS in the secondary DC enabled. In addition, there is also the chance that data will be lost in the case of a catastrophic failure of the principal database server. Alternatively, you can use log shipping of the db instead of asynchronous mirroring between the data centers, which gives you the flexibility to store the db in an alternate location as well. There shouldn't be any drawbacks to this, other than the db will only be as good/updated as how often you log ship to the secondary DC.
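The log-shipping trade-off above (the standby is only as fresh as the last shipped log backup) can be illustrated with a toy model. Real SQL Server log shipping is configured via SSMS/T-SQL, not Python, and the transaction names below are made up; this only demonstrates why the ship interval bounds the data loss.

```python
# Toy model: transactions written after the last log shipment are
# lost when the primary fails.
def surviving_transactions(primary_log, ship_times, failure_time):
    """primary_log: list of (timestamp, txn) applied on the primary.
    ship_times: timestamps at which log backups reached the standby.
    Returns the transactions present on the standby at failure."""
    shipped_up_to = max((t for t in ship_times if t <= failure_time),
                        default=None)
    if shipped_up_to is None:
        return []  # nothing ever shipped: standby is empty
    return [txn for ts, txn in primary_log if ts <= shipped_up_to]

log = [(1, "book-room-A"), (7, "add-endpoint"), (11, "book-room-B")]
# Logs shipped at t=5 and t=10; primary dies at t=12:
print(surviving_transactions(log, ship_times=[5, 10], failure_time=12))
# ['book-room-A', 'add-endpoint']  -- 'book-room-B' (t=11) is lost
```

The more frequently the job ships, the smaller the window of lost bookings; for a 20-endpoint deployment with infrequent changes, even a daily ship may be acceptable.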

Also keep in mind the files that need to be copied over to the passive TMS server, as spelled out in the documentation concerning the Hot Standby model.

So in conclusion, and again, think of the 2 DCs as you would the Hot Standby model but in two different locations. And in this type of setup with only one TMS at each DC, the LB actually isn't necessary unless of course you plan to utilize 2 x TMSs in each DC.

And if and when a failure occurs at DC 1, the manual part is to ensure the db (whether you're mirroring or log shipping) is in an active state and ready to go at DC 2, as well as to start the TMS services on the secondary DC TMS. If LBs are in play, then obviously manual tweaks would need to be made there as well. And I would also recommend DNS changes over IP changes.
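The ordering implied here matters: the DC 2 database must be active before the TMS services start, and clients are repointed last. A minimal runbook sketch, where the step names and the `execute` callback are hypothetical placeholders, not real commands:

```python
# Hypothetical cutover runbook: each step must succeed before the
# next runs, so TMS never starts against a db that isn't ready.
CUTOVER_STEPS = [
    "recover-database",    # bring the mirrored/log-shipped db online in DC 2
    "copy-local-files",    # confirm TMS local files were synced over
    "start-tms-services",  # start TMS services and IIS on the DC 2 server
    "repoint-dns-or-lb",   # finally send clients to the DC 2 TMS
]

def run_cutover(execute):
    """Run each step in order; stop at the first failure."""
    done = []
    for step in CUTOVER_STEPS:
        if not execute(step):
            return done, step  # (completed steps, failed step)
        done.append(step)
    return done, None

# Dry run in which every stubbed step succeeds:
completed, failed = run_cutover(lambda step: True)
print(completed)  # all four steps, in order
print(failed)     # None
```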

cheers,

Dale

Dear Dale,

I have taken your inputs and drafted the following. Please let me know if my understanding is right. The customer deployment has only 20 endpoints.

Option A

DC1 - Will have TMS Server with Local Sql Database. (tms1.domain.local)  - IP- 10.10.10.20

DC2-  Will have TMS Server with Local Sql Database. (tms2.domain.local)  - IP- 20.20.20.10

Asynchronous mirroring or log shipping between the TMS SQL databases.

Synchronizing local files between the TMS servers.

DNS A record created for tms.domain.local pointing to 10.10.10.20 as primary and 20.20.20.10 as secondary

During a failure of the TMS in DC1, check the status of the DB in DC2.

Start the services on the TMS.
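The "synchronizing local files" step in Option A could be scripted as a one-way directory mirror. This is a generic sketch with no TMS-specific paths (which directories need syncing is per the Hot Standby section of the admin guide); it copies only files that are missing or changed on the standby.

```python
# Generic one-way directory mirror: copy files from the active TMS
# server's directory to the passive one when missing or different.
import filecmp
import shutil
from pathlib import Path

def mirror_dir(src, dst):
    """Copy files that are absent or differ in dst; return copied paths."""
    src, dst = Path(src), Path(dst)
    copied = []
    for f in src.rglob("*"):
        if f.is_file():
            target = dst / f.relative_to(src)
            if not target.exists() or not filecmp.cmp(f, target, shallow=False):
                target.parent.mkdir(parents=True, exist_ok=True)
                shutil.copy2(f, target)  # preserves timestamps
                copied.append(target)
    return copied
```

Scheduled regularly (e.g. via Windows Task Scheduler) toward the passive DC2 server, this keeps the non-database files in step with the database mirroring/log shipping.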

Option B

Is there a way I can add the endpoints to both TMS servers initially and stop the TMS services on the DC2 server? I don't foresee any changes in the database once the video endpoints and infrastructure are configured. When DC1 is down, all we need to do is start the services in DC2 and push the configuration templates to the endpoints.

Krishna.

Option A

I wouldn't recommend a DNS A record pointing to both...since this is basically a round robin...meaning if DC 1 went down, you won't get a lookup on the second one, it will just fail. Therefore, what I recommend you do is just enter a DNS A record for the DC 1 TMS and if you get a failure on DC 1, then simply go into DNS and update the IP address for that record to the DC 2 TMS.

Option B

No, you don't want to do this, i.e. add the EPs to both TMSs initially. You only want to add the systems to the DC 1 TMS...meaning one TMS db to 'rule them all'. And this is why you're 'async mirroring' the TMS db from DC 1 SQL to DC 2 SQL. And to clarify 'async mirroring' in SQL: when you set it up, one db is 'active' while the other is 'passive'. This is why on the DC 2 TMS all TMS services need to be disabled, since it can't connect to the TMS SQL db (tmsng) while it is passive. And with 'async mirroring' on, and depending how often you 'mirror' them, the databases will effectively be the same.

At the end of the day, the deployment isn't that large (i.e. 20 systems) so all of this may seem like 'overkill', but it is the approach for a 2 x DC setup...unless of course you virtualize everything and go Nick's route, i.e. OTV and stretched cluster.

rgds,

Dale