I've been trying to get the new virtual Catalyst controller working and to learn the basics of it along the way.
However, I'd like to ask about best practices for setting it up, and whether the running config of a physical appliance offers any clues.
My questions are about the interfaces:
- Do you actually need to define three interfaces in VMware?
- Is there a reason why the default and the Cisco examples leave Gi1 as the OOB management interface in the global routing table instead of a separate VRF? And how is this handled on physical appliances?
So in my first try I bootstrapped the controller as follows:
Gi1 = OOB interface with the IP directly on the interface and a 0.0.0.0/0 route out of it (VM network: VLAN 10)
Gi2 = trunk interface with VLAN 20 as the wireless management VLAN, the IP on the tagged SVI, and no routes (VM network: VLAN 4095, all)
Gi3 = HA SSO interface, disabled because unused (VM network: VLAN 50)
And all the security settings on the VMware port groups set to Accept.
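For reference, a minimal sketch of what that first attempt looked like on the controller side (all IPs here are made up; only the VLAN IDs match the layout above):

```
! Gi1: OOB management, IP directly on the interface (hypothetical addressing)
interface GigabitEthernet1
 ip address 10.10.10.5 255.255.255.0
!
ip route 0.0.0.0 0.0.0.0 10.10.10.1
!
! Gi2: trunk carrying the wireless management VLAN, IP on the tagged SVI
interface GigabitEthernet2
 switchport mode trunk
 switchport trunk allowed vlan 20
!
interface Vlan20
 ip address 10.20.0.5 255.255.255.0
!
wireless management interface Vlan20
!
! Gi3: HA SSO, left shut because unused
interface GigabitEthernet3
 shutdown
```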
In this setup I could play around a bit, but I never managed to ping from the VLAN 20 SVI to devices on the same subnet, whether inside the ESXi host or elsewhere on the LAN. The MAC addresses were learned in both directions, as I could see in the ARP tables of the devices on both ends.
I then messed things up a bit by deleting the trustpoint in order to recreate it. I couldn't reach the controller over HTTPS anymore and had to run "no ip http secure-server" to regain GUI access on port 80. I never got the ping to work though :/
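In case anyone else breaks the trustpoint the same way, this is roughly what got the GUI back for me over plain HTTP (standard IOS-XE commands):

```
! HTTPS is unusable without a valid trustpoint, so fall back to plain HTTP
no ip http secure-server
ip http server
```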
So today I tried again with similar settings in ESXi, except this time I skipped the day-0 wizard and went full CLI from the start.
Gi1 = routed OOB interface, but in a management VRF with its own default route.
Gi2 = trunk carrying the wireless management VLAN, with the SVI in the global routing table and its own default route.
Gi3 = HA SSO, but disabled once again.
This time I still couldn't ping at first, but then I noticed VMware had swapped interfaces 2 and 3; after fixing that I could ping from both VRFs. I even got an AP to join. Without licensing on this machine I won't get it working completely, but I got very far. I also noticed you can manage the appliance fully through the wireless management interface.
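A rough sketch of that second attempt, again with made-up addressing:

```
! Put OOB management in its own VRF
vrf definition management
 address-family ipv4
 exit-address-family
!
interface GigabitEthernet1
 vrf forwarding management
 ip address 10.10.10.5 255.255.255.0
!
ip route vrf management 0.0.0.0 0.0.0.0 10.10.10.1
!
! Wireless management SVI stays in the global routing table
interface GigabitEthernet2
 switchport mode trunk
 switchport trunk allowed vlan 20
!
interface Vlan20
 ip address 10.20.0.5 255.255.255.0
!
wireless management interface Vlan20
!
ip route 0.0.0.0 0.0.0.0 10.20.0.1
```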
So once again, best practices on a virtual controller.
Do you use separate interfaces, or do you put both OOB and wireless management on the same Gi interface in different VLANs?
Another question: since you don't need IPs on your client VLANs, when you do add an IP to an SVI, can you put an IP helper on that interface to forward DHCP requests out the wireless management interface, or out another one closer to the DHCP server?
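What I have in mind is standard IOS DHCP relay on a client SVI; something like this (the VLAN and both addresses are placeholders):

```
! Client SVI relaying DHCP to a server reachable via the wireless mgmt side
interface Vlan30
 ip address 10.30.0.1 255.255.255.0
 ! hypothetical DHCP server address
 ip helper-address 10.20.0.100
```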
I am running into some of the same issues as you, though I only have a single GigabitEthernet1 "physical" interface. I followed this guide to set it up (https://www.cisco.com/c/en/us/td/docs/wireless/controller/9800/9800-cloud/installation/b-c9800-cl-install-guide/booting_the_controller_and_accessing_the_console.html) using the CLI. As soon as I add an SVI and enable it, I lose access to the management interface; as soon as I shut that VLAN interface down, the management interface is accessible again. We have two 9800-40 controllers on order, and I am curious whether I will run into this on the physical hardware as well.
I am confused about the interfaces, too. We have two 9800-CLs in a cluster running on KVM. One interface serves management access, another is a trunk carrying the wireless management VLAN, and the third is used for HA.
- Does the order of the interfaces matter? In my deployment G2 is used for HA because the hypervisor swapped G2 and G3.
- What is the purpose of the wireless management interface? I played around and set the management interface as the trustpoint, and access points were able to join. As we do not switch traffic centrally, we do not need any VLANs on the controllers other than the management one. What is the reason for adding a separate wireless management interface?
Regarding the following post https://community.cisco.com/t5/wireless-and-mobility/9800-cl-network-configuration/td-p/3846079, it seems to work without issues. So I'll give it a try with only one interface serving both as the trustpoint for access points and for device management.
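The single-interface idea from that thread would presumably boil down to something like this (addressing is a placeholder):

```
! One routed interface doing both device management and AP join
interface GigabitEthernet1
 ip address 10.10.10.5 255.255.255.0
!
ip route 0.0.0.0 0.0.0.0 10.10.10.1
!
! Point the wireless management interface at that same interface
wireless management interface GigabitEthernet1
```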