Expressway - Medium OVA to Large OVA

experts,

 

If we have Cisco Expressway C and E on the medium OVA, can we change them to the large OVA without a reinstall and rebuild?

 

Is there any issue if we do that?

 

tks,

K


Rajan
Collaborator

Hi K,

What's the version you are running? If it's X8.10/X8.11, then you can simply power off the VM, increase the vRAM and vCPU, and power it back on. A rebuild and reinstall won't be required as long as there is no change to the disks, and I do not see any difference in vDisks between the medium and large OVAs for Expressway servers.

For X8.7 and older, you need to check this as well: "Large VM configuration requires 10Gb NIC and the host CPU must support the AES-NI instruction set (and it must not be masked by ESXi)."

https://www.cisco.com/c/dam/en/us/td/docs/voice_ip_comm/uc_system/virtualization/virtualization-cisco-expressway.html#noteVM
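Those prerequisites can be checked mechanically before attempting an in-place resize. The sketch below is my own helper, not a Cisco tool; the thresholds (8 vCPUs, 8 GB vRAM for Large) come from this thread, and the AES-NI check just looks for the `aes` CPU flag as reported by "xstatus hardware":

```python
# Sketch: verify a VM against the Large-OVA prerequisites discussed above.
# The numeric thresholds and the 10Gb NIC / AES-NI requirements are taken
# from this thread and the virtualization docs; treat them as assumptions.

def meets_large_profile(vcpus, ram_gb, has_10g_nic, cpu_flags):
    """Return a list of unmet Large-OVA prerequisites (empty list = OK)."""
    problems = []
    if vcpus < 8:
        problems.append(f"need 8 vCPUs, have {vcpus}")
    if ram_gb < 8:
        problems.append(f"need 8 GB vRAM, have {ram_gb}")
    if not has_10g_nic:
        problems.append("need a 10Gb NIC (pre-X8.10 requirement)")
    if "aes" not in cpu_flags.split():
        problems.append("host CPU must expose AES-NI (flag 'aes')")
    return problems

# Flags string as reported by "xstatus hardware" on the host in question:
flags = "fpu vme de pse tsc aes xsave avx hypervisor"
print(meets_large_profile(8, 8, True, flags))  # [] -> all prerequisites met
```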

HTH

Rajan
Pls rate all helpful posts by clicking the star below

Chris Deren
Hall of Fame Master

The question is why you are planning on doing it, and whether it is worth it rather than having multiple Medium VMs. And in case you are not aware, Expressway 8.9 and below requires a 10 Gb NIC; this requirement was removed in version 8.10, so make sure that is not an issue for you.

hi Chris,

 

I am on version 8.9.1. Will adding multiple Medium nodes cover up to 2,500 users in Jabber MRA?

 

K

There is no difference between Medium and Large for MRA sizing; the only difference is for B2B calls. You can have up to 2,500 MRA sessions per node, and you can have up to 6 Expressways in a cluster (6 Es and 6 Cs). Check out this document for sizing:

 

https://www.cisco.com/c/en/us/td/docs/solutions/CVD/Collaboration/enterprise/collbcvd/sizing.html

"Cisco Expressway Sizing" section
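Going by the figures Chris quotes (2,500 MRA sessions per node, at most 6 nodes per cluster), the node count for a given user population is simple arithmetic. A sketch, using only the numbers from this thread; the optional N+1 spare is my assumption, not Cisco guidance:

```python
import math

# Sizing sketch based on the figures quoted in this thread:
# up to 2,500 MRA sessions per Expressway node, max 6 nodes per cluster.
MRA_SESSIONS_PER_NODE = 2500
MAX_NODES_PER_CLUSTER = 6

def nodes_for_mra(users, redundant=True):
    """E (or C) nodes needed for `users` concurrent MRA sessions."""
    nodes = math.ceil(users / MRA_SESSIONS_PER_NODE)
    if redundant:
        nodes += 1  # optional N+1 spare -- my assumption, not Cisco guidance
    if nodes > MAX_NODES_PER_CLUSTER:
        raise ValueError("exceeds the 6-node cluster limit; split into clusters")
    return nodes

print(nodes_for_mra(2500))                   # 2 (1 active + 1 spare)
print(nodes_for_mra(5000, redundant=False))  # 2
```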

 Thanks Chris,

 

I see Medium only supports 300 calls; we may need more than that.

 

Which one is easier: adding nodes to the existing cluster, or increasing the OVA size so we need fewer nodes?

 

K

Since you are running version 8.9, per my previous post: do you have a 10 Gb connection to this VM?

If not, then you either need to upgrade or add a 10 Gb connection to go up to the Large OVA.

Also, compare the footprint of one Large OVA to the two (or however many) Mediums you would need, as Large requires 8 vCPUs and Medium only 2. Clustering Expressways is simple, so that is my usual approach.
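Chris's footprint point can be made concrete with a little arithmetic. The vCPU counts (8 per Large node, 2 per Medium) come from this thread; the per-node call capacities (300 for Medium per the OP's reading, 500 traversal calls for Large per the profile2 xstatus output later in the thread) vary by version and are assumptions here:

```python
import math

# Footprint comparison sketch. vCPU counts are from this thread
# (Large = 8 vCPU, Medium = 2 vCPU); per-node call capacities are
# assumptions -- check the sizing CVD for your release.
MEDIUM = {"vcpus": 2, "calls": 300}
LARGE  = {"vcpus": 8, "calls": 500}

def vcpus_needed(target_calls, profile):
    nodes = math.ceil(target_calls / profile["calls"])
    return nodes * profile["vcpus"]

for target in (300, 600, 900):
    print(f"{target} calls: Medium {vcpus_needed(target, MEDIUM)} vCPU "
          f"vs Large {vcpus_needed(target, LARGE)} vCPU")
```

Under these assumptions the Medium cluster stays well ahead on vCPU footprint, which matches the accepted advice.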


Thanks Chris,

 

So it's still cheaper and easier to go with Medium OVAs, then.

hi Rajan,

 

I am using Expressway 8.9.1; from what I can see, the Large OVA needs a 10 Gb NIC:

 

https://www.cisco.com/c/dam/en/us/td/docs/voice_ip_comm/expressway/install_guide/Cisco-Expressway-Virtual-Machine-Install-Guide-X8-9-1.pdf

 

Do you know if I still need to reinstall, or is there an easier solution?

 

K

 

 

"If its X 8.10/8.11, then you can simply power off the VM, increase vRAM and vCPU and power on. Rebuild and reinstall wont be required as long as there is no change in the disks and I do not see any change in vDisks between medium and large OVAs for expressway servers."

Are you certain about this? I find that System > Information does indeed report "Large", but "xstatus hardware" continues to show the old caps of RegistrationsLimit 3750 and TraversalcallsLimit 150:
TANDBERG Video Communication Server X12.5.7
SW Release date: 2020-02-03 11:35, build

OK

xstatus hardware
*s Hardware: /
CoreAffinity:
CpuSpeed: "2596.992"
Flags: "fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss syscall nx rdtscp lm constant_tsc arch_perfmon nopl tsc_reliable nonstop_tsc cpuid pni pclmulqdq ssse3 cx16 sse4_1 sse4_2 popcnt aes xsave avx hypervisor lahf_lm pti arat"
MemTotal: "8159628 kB"
ModelName: "Intel(R) Xeon(R) CPU E5-2660 v3 @ 2.60GHz"
NetworkAdapters: "[\"1000baseT/Full\", \"10000baseT/Full\"]"
NontraversalcallsLimit: "750"
NumLogicalCores: 8
NumMediaForwardingFrameworks: "1"
NumPhysicalCpus: 8
PhysicalToLogicalCoreMapping: "{\"cpu\": {\"0\": {\"core\": {\"0\": {\"logical\": [\"0\"]}}}, \"10\": {\"core\": {\"0\": {\"logical\": [\"5\"]}}}, \"12\": {\"core\": {\"0\": {\"logical\": [\"6\"]}}}, \"14\": {\"core\": {\"0\": {\"logical\": [\"7\"]}}}, \"2\": {\"core\": {\"0\": {\"logical\": [\"1\"]}}}, \"4\": {\"core\": {\"0\": {\"logical\": [\"2\"]}}}, \"6\": {\"core\": {\"0\": {\"logical\": [\"3\"]}}}, \"8\": {\"core\": {\"0\": {\"logical\": [\"4\"]}}}}}"
ProfileName: "profile1"
RegistrationsLimit: "3750"
TraversalcallsLimit: "150"
TurnrelaysLimit: "1800"
*s/end
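For anyone comparing nodes, the relevant limit lines can be pulled out of an "xstatus hardware" dump with a quick parse. A throwaway sketch of mine, not an Expressway feature; it just matches the `Key: "value"` lines in the output above:

```python
import re

# Parse 'Key: "value"' lines out of an "xstatus hardware" dump.
# Throwaway helper for comparing nodes -- not part of the Expressway CLI.
def parse_xstatus(dump):
    fields = {}
    for m in re.finditer(r'^\s*(\w+):\s*"?([^"\n]*)"?\s*$', dump, re.MULTILINE):
        fields[m.group(1)] = m.group(2)
    return fields

dump = '''
ProfileName: "profile1"
RegistrationsLimit: "3750"
TraversalcallsLimit: "150"
NumMediaForwardingFrameworks: "1"
'''
info = parse_xstatus(dump)
print(info["ProfileName"], info["TraversalcallsLimit"])  # profile1 150
```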

The kicker seems to be "NumMediaForwardingFrameworks" above: it's 1 on this system, while it's 6 on Expressways that were deployed as Large from the get-go. This appears to be a configuration directive in clusterdb ("/configuration/systemScale"), but I can't figure out how to get write access to clusterdb in order to change it from "profile1" to "profile2".

I would welcome any insight on how this could be done, as this COVID-19 thing has everyone ramping up MRA usage aggressively.
Jaime Valencia
Hall of Fame Cisco Employee

Expressways do NOT support resizing, see here:

https://www.cisco.com/c/dam/en/us/td/docs/voice_ip_comm/uc_system/virtualization/virtualization-software-requirements.html#VMwareFeature_UC

 

Also, the ports used for various features vary depending on the OVA you choose, and as you noticed, some other parameters differ as well.

HTH

java

if this helps, please rate

OK, I figured this out. The key was to use the management API instead of trying to monkey around in clusterdb directly. I confess I half-cheated; by searching BST for 'systemScale', I stumbled across the specifics in the workaround for this bug:
* https://bst.cloudapps.cisco.com/bugsearch/bug/CSCvs59766

In short, I changed the same system you see above (still with 8 vCPU) from profile1 to profile2, rebooted, and now here's my xstatus hardware:
xstatus hardware
*s Hardware: /
CoreAffinity:
CpuSpeed: "2596.992"
Flags: "fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss syscall nx rdtscp lm constant_tsc arch_perfmon nopl tsc_reliable nonstop_tsc cpuid pni pclmulqdq ssse3 cx16 sse4_1 sse4_2 popcnt aes xsave avx hypervisor lahf_lm pti arat"
MemTotal: "8159628 kB"
ModelName: "Intel(R) Xeon(R) CPU E5-2660 v3 @ 2.60GHz"
NetworkAdapters: "[\"1000baseT/Full\", \"10000baseT/Full\"]"
NontraversalcallsLimit: "750"
NumLogicalCores: 8
NumMediaForwardingFrameworks: "6"
NumPhysicalCpus: 8
PhysicalToLogicalCoreMapping: "{\"cpu\": {\"0\": {\"core\": {\"0\": {\"logical\": [\"0\"]}}}, \"10\": {\"core\": {\"0\": {\"logical\": [\"5\"]}}}, \"12\": {\"core\": {\"0\": {\"logical\": [\"6\"]}}}, \"14\": {\"core\": {\"0\": {\"logical\": [\"7\"]}}}, \"2\": {\"core\": {\"0\": {\"logical\": [\"1\"]}}}, \"4\": {\"core\": {\"0\": {\"logical\": [\"2\"]}}}, \"6\": {\"core\": {\"0\": {\"logical\": [\"3\"]}}}, \"8\": {\"core\": {\"0\": {\"logical\": [\"4\"]}}}}}"
ProfileName: "profile2"
RegistrationsLimit: "5000"
TraversalcallsLimit: "500"
TurnrelaysLimit: "6000"
*s/end

Note that this did automatically change the traversal ports (as expected on large VM templates), so be ready for that firewall-wise before you implement a change like this.

This research effort was undertaken solely to enhance my ability to more rapidly grow the remote-teleworker capability of my customers' already-in-place Expressway deployments. Pre-COVID-19, everyone had Expressway, but only a handful of my customers really moved any significant amount of traffic through them. Post-COVID-19, 300 calls on 5,000 registrations (2-node clusters) suddenly wasn't enough. This methodology presents a quick way to scale simultaneous call support by over 3x without too much fuss.

I want to take this time to thank the doctors, nurses, and other medical professionals who are out there on the front lines fighting this pandemic all over the world. I don't have what it takes to do what you do, so instead I hide in my office feeding Expressway binaries through IDA pro in an attempt to figure out the above because I'm too time-strapped (or am I too lazy?) to rebuild and relicense them all. Thanks guys.

Every time I need to do this, I forget all the cURL command-line options to authenticate, POST, and send along the necessary POST body. So with that in mind, I'm documenting it here for myself and anyone else who needs it:

 

CHANGE TO MEDIUM

curl -u username:password -d "profile_name=profile1" -X POST https://wccinvcscon1.wc.cbtscom.com/api/management/configuration/systemscale

 

CHANGE TO LARGE

curl -u username:password -d "profile_name=profile2" -X POST https://wccinvcscon1.wc.cbtscom.com/api/management/configuration/systemscale
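For anyone scripting this across a cluster, here is a Python equivalent of those curl calls. The endpoint path and profile_name values are exactly the ones from the curl commands above; the hostname and credentials are placeholders, and TLS handling (e.g. self-signed certs) is left to you:

```python
import base64
import urllib.parse
import urllib.request

# Build the same POST that the curl commands above send to the
# management API. Endpoint path and profile_name values come from this
# thread; hostname and credentials below are placeholders.
def build_systemscale_request(host, user, password, profile):
    url = f"https://{host}/api/management/configuration/systemscale"
    data = urllib.parse.urlencode({"profile_name": profile}).encode()
    req = urllib.request.Request(url, data=data, method="POST")
    # HTTP basic auth, equivalent to curl's "-u user:password"
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    req.add_header("Authorization", f"Basic {token}")
    return req  # send with urllib.request.urlopen(req) when ready

# profile1 = Medium, profile2 = Large (per the bug workaround above)
req = build_systemscale_request("expressway.example.com", "admin", "secret",
                                "profile2")
print(req.get_method(), req.full_url)
```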
