I am trying to create a vManage cluster across different subnets using version 20.6.3 on ESXi (I know this is not recommended, but that is how it was requested, and I even got it working in my lab on EVE-NG using the same version).
The issues I am facing are:
- When I configure the eth0 and eth1 interfaces on vManage1 and vManage2, with one default route on vManage1 and two default routes on vManage2, I can ping vManage2's eth0 and eth1 from vManage1 using ping x.x.x.x source eth0 and ping x.x.x.x source eth1. However, as soon as I add a second default route on vManage1 for the eth1 interface, I lose all communication. I also lose access to the vManage1 web GUI (when I run request nms application-server status, it keeps showing "running" for an unusually long period, longer than 10 minutes). See the sketch after the example configurations for the workaround I plan to test next.
Example configurations:
vManage1:
  eth0 ip address: 10.128.45.x/24
  eth1 ip address: 10.128.4.x/24
  default route 0.0.0.0/0 10.128.4.1
  default route 0.0.0.0/0 10.128.45.1
vManage2:
  eth0 ip address: 10.129.45.x/24
  eth1 ip address: 10.129.4.x/24
  default route 0.0.0.0/0 10.129.4.1
  default route 0.0.0.0/0 10.129.45.1
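For reference, the workaround I plan to test next keeps a single default route on each vManage and replaces the second default route with a specific static route toward the peer's cluster subnet via the eth1 gateway, so the return path is no longer ambiguous. A rough sketch for vManage1 (the .10 host addresses are placeholders I made up for the x values above, and this assumes both interfaces sit in VPN 0):

  vpn 0
   interface eth0
    ip address 10.128.45.10/24
    no shutdown
   !
   interface eth1
    ip address 10.128.4.10/24
    no shutdown
   !
   ! single default route, via the eth0 gateway only
   ip route 0.0.0.0/0 10.128.45.1
   ! specific route to vManage2's eth1 (cluster) subnet via the eth1 gateway
   ip route 10.129.4.0/24 10.128.4.1
  !

vManage2 would mirror this with its 10.129.x addresses and a route for 10.128.4.0/24 via 10.129.4.1. If two genuine default routes are actually required for this design, I would appreciate pointers on how to make that work.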
I am hoping someone can help resolve this issue.
PS: I created test Windows machines, assigned them the same IP addresses used for the vManage nodes, and they could reach each other without any issue.