
Understand the background processes/configurations that run on the fabric

shahi22
Level 1

Why Does attach <leaf> Open a Bash Shell Instead of the NX-OS CLI in the Cisco ACI Simulator? Also, How Does the Fabric Configure Itself in the Background?

Hello ^-^,

I am using the Cisco ACI Simulator (version 6.x). After deploying the APIC and registering the pending leaf and spine devices, the APIC discovers and connects them, and they become part of the fabric.

When I try to access a leaf switch CLI using the command:

attach leaf-101

I get logged into a bash shell on the leaf, not the familiar NX-OS CLI where I could run commands like show run or look at VXLAN configuration.

 

My questions are:

  1. Why does the attach command open a Linux bash shell instead of the NX-OS CLI on the leaf switches in ACI mode?

  2. Is this behavior specific to the Simulator environment or does it apply to physical ACI leaf/spine switches as well?

  3. Where and how can I view the VXLAN and other network configuration information that I would typically find in NX-OS?

  4. What are the best methods or commands to inspect switch status and configurations in this bash shell?

  5. Additionally, I would like to understand what background processes and configuration tasks run automatically on the leaf and spine switches when they join the fabric — how does the fabric “self-configure” from the moment the APIC discovers and registers these devices?


I'm doing this lab with the APIC Simulator version 6.0(2h).

 

 


5 Replies

RedNectar
VIP Alumni

Hi @shahi22 ,

OK - a bunch of questions. Let's see if I can give you a bunch of answers.

My questions are:

  1. Why does the attach command open a Linux bash shell instead of the NX-OS CLI on the leaf switches in ACI mode?

First and foremost, remember that although this is a simulator, it is not a 100% representation of a live ACI environment. Some would argue that it should be called an emulator rather than a simulator, but in truth, it is a bit of both. Some parts are emulated to behave like ACI, some parts run exactly the same code as an APIC.

So when you first run the ACI Simulator, you start a VM with multiple containers that do the various jobs of simulating an APIC, a spine switch, and two leaf switches, plus possibly other processes as well.

Now when it comes to the base operating system for each device, you'll find it is a version of Linux in every case, including the Nexus 9000 switches. But with real hardware, the bash shell is either replaced with something that looks like a traditional Cisco Nexus CLI (APIC) or modified to look like the traditional Cisco Nexus CLI (switches). [Note: I'm pretty sure I've told you the correct story, but happy to be corrected if anyone else knows better.]

But at the end of the day, you are running Linux, so don't get too upset if you see a bash shell rather than a CLI prompt. Remember that this simulated leaf switch is not running the FULL switch code - it can't, because all the switch hardware is non-existent - so if the NX-OS CLI got lost in the modification of the code to run on the simulator, it's not a big deal. It just means that you can't run any CLI commands on a simulated leaf, but the APIC simulation is not too bad.

BTW - don't use the attach command - use ssh instead. If you DO use attach, you'll see a message saying that the command is being deprecated:

apic1# attach Leaf101
This command is being deprecated on APIC controller, please use NXOS-style equivalent command
# Executing command: ssh Leaf101 -b 10.0.0.1
Warning: Permanently added 'leaf101' (RSA) to the list of known hosts.

admin@leaf101's password:
admin@leaf1:~>
  2. Is this behavior specific to the Simulator environment or does it apply to physical ACI leaf/spine switches as well?

No - it's specific to the Simulator. When using real hardware, the ssh command looks like this. Just for purity of comparison I've used attach in the example below on real hardware; you can see that it is different from the password prompt onwards.

apic1# attach Leaf1201
This command is being deprecated on APIC controller, please use NXOS-style equivalent command
# Executing command: ssh Leaf1201 -b 10.1.0.1
Housley Fabric#1 ACI Lab
(admin@leaf1201) Password:
Last login: Sat Jul 26 12:41:46 2025 from 10.1.0.1
Cisco Nexus Operating System (NX-OS) Software
TAC support: http://www.cisco.com/tac
Copyright (c) 2002-2025, Cisco Systems, Inc. All rights reserved.
The copyrights to certain works contained in this software are
owned by other third parties and used and distributed under
license. Certain components of this software are licensed under
the GNU General Public License (GPL) version 2.0 or the GNU
Lesser General Public License (LGPL) Version 2.1. A copy of each
such license is available at
http://www.opensource.org/licenses/gpl-2.0.php and
http://www.opensource.org/licenses/lgpl-2.1.php
Leaf1201#
  3. Where and how can I view the VXLAN and other network configuration information that I would typically find in NX-OS?

Wow. This is digging really deep. And my first answer is

You don't need to view the VXLAN and other network configuration information that you would typically find in NX-OS - at least as far as the underlay is concerned.

and, honestly, that's probably where I should leave it.  If you understand how ACI actually works, you'll realise that you'll probably never need to know these details, so begin by spending an hour or two digesting this BRKACI-3101 Cisco Live presentation before continuing.

But for looking at tenant IP configurations and the like, you can continue using many of the show commands you are used to.  If I get time later I'll edit this post and add some examples.

[Later] Here's a couple of examples. Note that you can direct commands to leaf switches from the APIC without having to ssh to each leaf by using the fabric command.

Let's start with some obvious ones:

apic1# show running-config
# Command: show running-config
# Time: Sat Jul 26 04:51:48 2025
aaa banner 'Application Policy Infrastructure Controller'

<snip a few hundred lines>

apic1# show running-config tenant Tenant01    # Show the config for a particular tenant - REAL HW only
# Command: show running-config tenant Tenant01
# Time: Sat Jul 26 05:00:24 2025
  tenant Tenant01
    access-list AppServices_Fltr
      match raw TCP5000 dFromPort 5000 dToPort 5000 etherT ip prot 6 stateful yes
      exit
    access-list HTTPS_Fltr
      match tcp dest 443
      exit
<snip a few lines>

apic1# show running-config leaf 2201    # Show the config for a particular leaf - REAL HW only
# Command: show running-config leaf 2201
# Time: Sat Jul 26 05:07:58 2025
  leaf 2201
    template hsrp group-policy default tenant common
      exit
    vrf context tenant Tenant01 vrf Production_VRF l3out ProductionOSPF_L3Out
      router-id 10.201.0.201
      route-map ProductionOSPF_L3Out_in
        scope global
        exit
      route-map ProductionOSPF_L3Out_out
        scope global
        match bridge-domain Web_BD
          exit
        exit
      route-map ProductionOSPF_L3Out_shared
        scope global
        ip prefix-list 10.201.10.0:24_L3EPG permit 10.201.10.0/24
        match prefix-list 10.201.10.0:24_L3EPG
          contract consumer MgmtServices_Ct
          contract consumer WebServices_Ct
          exit
        exit
      exit

<snip many lines>

Now for a couple of more useful troubleshooting commands - DO NOT expect these to work on the simulator, because the simulator does not have the actual switching hardware:

apic1# fabric 1201,1202 show ip route vrf Tenant01:Production_VRF
# This command tells the APIC to log into leaf 1201 and 1202, then issue the command
# show ip route vrf Tenant01:Production_VRF on each leaf, which will list the routing
# table on each leaf for the VRF called Production_VRF for the tenant called Tenant01
----------------------------------------------------------------
 Node 1201 (Leaf1201)
----------------------------------------------------------------
IP Route Table for VRF "Tenant01:Production_VRF"
'*' denotes best ucast next-hop
'**' denotes best mcast next-hop
'[x/y]' denotes [preference/metric]
'%<string>' in via output denotes VRF <string>

1.1.1.1/32, ubest/mbest: 1/0
    *via 10.101.1.1, vlan74, [110/5], 23:27:16, ospf-default, intra
10.100.0.5/32, ubest/mbest: 1/0, attached, direct, pervasive
    *via 10.1.8.67%overlay-1, [1/0], 23:27:32, static, tag 4294967292, rwVnid: vxlan-2883585
10.101.0.201/32, ubest/mbest: 2/0, attached, direct
    *via 10.101.0.201, lo10, [0/0], 23:27:32, direct
    *via 10.101.0.201, lo10, [0/0], 23:27:32, local, local
10.101.1.0/24, ubest/mbest: 1/0, attached, direct
    *via 10.101.1.201, vlan74, [0/0], 23:27:32, direct
10.101.1.201/32, ubest/mbest: 1/0, attached
    *via 10.101.1.201, vlan74, [0/0], 23:27:33, local, local
10.101.10.0/24, ubest/mbest: 1/0
    *via 10.101.1.1, vlan74, [110/44], 23:27:17, ospf-default, intra
10.101.11.0/24, ubest/mbest: 1/0, attached, direct, pervasive
    *via 10.1.8.67%overlay-1, [1/0], 21:26:01, static
10.101.11.1/32, ubest/mbest: 1/0, attached, pervasive
    *via 10.101.11.1, vlan43, [0/0], 2d00h, local, local
10.101.12.0/24, ubest/mbest: 1/0, attached, direct, pervasive
    *via 10.1.8.67%overlay-1, [1/0], 22:45:42, static
10.101.12.1/32, ubest/mbest: 1/0, attached, pervasive
    *via 10.101.12.1, vlan69, [0/0], 1d01h, local, local
----------------------------------------------------------------
 Node 1202 (Leaf1202)
----------------------------------------------------------------
IP Route Table for VRF "Tenant01:Production_VRF"
'*' denotes best ucast next-hop
'**' denotes best mcast next-hop
'[x/y]' denotes [preference/metric]
'%<string>' in via output denotes VRF <string>

1.1.1.1/32, ubest/mbest: 1/0
    *via 10.1.112.64%overlay-1, [200/5], 23:27:19, bgp-65001, internal, tag 65001
10.100.0.5/32, ubest/mbest: 1/0, attached, direct, pervasive
    *via 10.1.8.67%overlay-1, [1/0], 1d21h, static, rwVnid: vxlan-2883585
10.101.0.201/32, ubest/mbest: 1/0
    *via 10.1.112.64%overlay-1, [1/0], 23:27:34, bgp-65001, internal, tag 65001
10.101.1.0/24, ubest/mbest: 1/0
    *via 10.1.112.64%overlay-1, [200/0], 23:27:34, bgp-65001, internal, tag 65001
10.101.10.0/24, ubest/mbest: 1/0
    *via 10.1.112.64%overlay-1, [200/44], 23:27:19, bgp-65001, internal, tag 65001
10.101.11.0/24, ubest/mbest: 1/0, attached, direct, pervasive
    *via 10.1.8.67%overlay-1, [1/0], 21:26:03, static
10.101.11.1/32, ubest/mbest: 1/0, attached, pervasive
    *via 10.101.11.1, vlan33, [0/0], 1d02h, local, local
10.101.12.0/24, ubest/mbest: 1/0, attached, direct, pervasive
    *via 10.1.8.67%overlay-1, [1/0], 22:45:43, static
10.101.12.1/32, ubest/mbest: 1/0, attached, pervasive
    *via 10.101.12.1, vlan11, [0/0], 2d00h, local, local


apic1# fabric 1201 show ip interface brief vrf Tenant01:Production_VRF
# An old favourite: show ip interface brief, but only for leaf 1201
# and for the Production_VRF in Tenant01
----------------------------------------------------------------
 Node 1201 (Leaf1201)
----------------------------------------------------------------
IP Interface Status for VRF "Tenant01:Production_VRF"(49)
Interface            Address              Interface Status
vlan43               10.101.11.1/24       protocol-up/link-up/admin-up
vlan69               10.101.12.1/24       protocol-up/link-up/admin-up
vlan74               10.101.1.201/24      protocol-up/link-up/admin-up
lo10                 10.101.0.201/32      protocol-up/link-up/admin-up

apic1# fabric 1201 show vlan extended | grep Tenant01
# The show vlan extended command doesn't have any parameters, so I've used grep to filter out just the relevant details
# Note how you can use this command to see the mappings of the internal VLANs to the user-configured Bridge Domains
 43   Tenant01:App_BD                     vxlan-16646017    Eth1/11, Eth1/31, Po3
 44   Tenant01:2Tier_AP:AppServers_EPG    vlan-1011         Eth1/11
 67   Tenant01:2Tier_AP:AppServers_EPG    vlan-1013         Eth1/11, Eth1/31, Po3
 69   Tenant01:Web_BD                     vxlan-15859679    Eth1/31, Po3
 70   Tenant01:2Tier_AP:WebServers_EPG    vlan-1014         Eth1/31, Po3
 74   Tenant01:Production_VRF:l3out-      vxlan-15237059,   Eth1/10
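
If you ever need to tie those vxlan-XXXXXXXX segment IDs back to the objects that own them, moquery works on both the Sim and real hardware. A minimal sketch, assuming the BD's VNID lives in the seg property of the fvBD class (that's where I'd look, but verify the property name on your version with a plain moquery -c fvBD):

apic1# moquery -c fvBD -f 'fv.BD.name=="App_BD"' | egrep 'dn |seg'
# dn tells you which tenant owns the BD; seg should match the vxlan-XXXXXXXX
# number that show vlan extended printed above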

  4. What are the best methods or commands to inspect switch status and configurations in this bash shell?

Here are a couple of commands that you might use. Take note of the prompt to determine if the command is issued at the APIC or at a leaf switch - and don't expect all these to work on the simulator.

apic1# avread
Cluster:
-------------------------------------------------------------------------
operSize                1
clusterSize             1
fabricDomainName        ACI Fabric1
version                 apic-6.0(9d)
discoveryMode           PERMISSIVE
drrMode                 OFF
kafkaMode               ON
autoUpgradeMode         OFF

APICs:
-------------------------------------------------------------------------
                    APIC 1
version           6.0(9d)
address           10.1.0.1
oobAddress        172.16.11.2/24
oobAddressV6      fc00::1/7
routableAddress   0.0.0.0
tepAddress        10.1.0.0/16
podId             1
chassisId         e9de02f6-.-313c72c6
cntrlSbst_serial  (APPROVED,WZP23290G96)
active            YES
flags             cra-
health            255

apic1# fnvread
        id               address  disabled    active  occupied permanent              model  nodeRole  nodeType  fabricId     podId
-----------------------------------------------------------------------------------------------------------------------------------------------
   1101(1)     10.1.112.65/32(1)     NO(1)    YES(0)  YES(178)    YES(1)      N9K-C9332C(1)      3(1)      0(1)      1(1)      1(1)
   1201(1)     10.1.112.64/32(1)     NO(1)    YES(0)   YES(77)    YES(1) N9K-C93180YC-FX(1)      2(1)      0(1)      1(1)      1(1)
   1202(1)     10.1.112.66/32(1)     NO(1)    YES(0)   YES(77)    YES(1) N9K-C93180YC-FX(1)      2(1)      0(1)      1(1)      1(1)



apic1# show switch
 ID    Pod   Address          In-Band IPv4     In-Band IPv6               OOB IPv4         OOB IPv6                   Version             Flags  Serial Number     Name
 ----  ----  ---------------  ---------------  -------------------------  ---------------  -------------------------  ------------------  -----  ----------------  ------------------
 1101  1     10.1.112.65      10.10.2.8        ::                         172.16.11.8      ::                         n9000-16.0(9d)      asiv   FDO23300S1F       Spine1101
 1201  1     10.1.112.64      10.10.2.5        ::                         172.16.11.5      ::                         n9000-16.0(9d)      aliv   FDO23340M24       Leaf1201
 1202  1     10.1.112.66      10.10.2.6        ::                         172.16.11.6      ::                         n9000-16.0(9d)      aliv   FDO23330XFS       Leaf1202

Flags - a:Active | l/s:Leaf/Spine | v:Valid Certificate | i:In-Service



Leaf1201# show isis dteps vrf overlay-1   #NOTE: Won't run on a simulator

IS-IS Dynamic Tunnel End Point (DTEP) database:
DTEP-Address       Role    Encapsulation   Type
10.1.112.65        SPINE   N/A             PHYSICAL
10.1.8.65          SPINE   N/A             PHYSICAL,PROXY-ACAST-MAC
10.1.8.67          SPINE   N/A             PHYSICAL,PROXY-ACAST-V4
10.1.8.66          SPINE   N/A             PHYSICAL,PROXY-ACAST-V6
10.1.8.64          LEAF    N/A             PHYSICAL
10.1.112.66        LEAF    N/A             PHYSICAL
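
As for the bash shell itself, on the Sim you're mostly limited to ordinary Linux tools plus the object-model tools. A rough sketch of the sort of thing that should work there - topSystem is the class I'd query for a node's identity and state, but treat that class name and the availability of icurl on the Sim image as assumptions to verify:

admin@leaf1:~> uname -a                       # it really is just Linux underneath
admin@leaf1:~> moquery -c topSystem | egrep 'name|role|state|address'
admin@leaf1:~> icurl 'http://localhost:7777/api/class/topSystem.json'   # the same data as raw JSON from the node's local API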
  5. Additionally, I would like to understand what background processes and configuration tasks run automatically on the leaf and spine switches when they join the fabric - how does the fabric "self-configure" from the moment the APIC discovers and registers these devices?

As for the fabric configuring itself in the background - I've no idea how the simulator actually does it, but the way this happens in the real world is that each switch gets an IP address via DHCP from APIC1. This process is well documented in a post I answered back in 2022.
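
If you want to watch that discovery process from the APIC side, here's a hedged sketch of where I'd look. I'm assuming the dhcpClient class (which tracks the TEP addresses the APIC hands out) and the lldpAdjEp class (the LLDP adjacencies that trigger discovery) - check both class names on your version:

apic1# moquery -c dhcpClient | egrep 'nodeId|nodeRole|ip '    # TEP leases the APIC has issued
apic1# moquery -c lldpAdjEp | egrep 'dn |sysName'             # who is plugged into which fabric port
apic1# acidiag fnvread                                        # the node table, same data as fnvread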


 

RedNectar aka Chris Welsh.
Forum Tips: 1. Paste images inline - don't attach. 2. Always mark helpful and correct answers, it helps others find what they need.

Hi again @RedNectar,

Thank you so much for your detailed and insightful response — I really appreciate the time and effort you put into answering all of my questions!

I just have a quick follow-up:

When I use SSH in the simulator, it takes me to the Linux shell of the leaf/spine switch, where it seems I can’t really do anything meaningful (no NX-OS CLI, no useful show commands).
But when using real ACI hardware, SSH from the APIC takes me directly to the NX-OS-style CLI like Leaf#, which is much more familiar and usable.

 

so, my questions:

  1. Is there any real benefit to accessing the Linux shell on leaf/spine switches in the simulator?
    Or is it just there for internal/debug purposes and not useful for ACI tasks?

  2. What can I truly benefit from when using the APIC simulator?  

Again, thank you @RedNectar  for your help and for sharing your knowledge with such clarity!

Best regards,
shahira

Hi @shahi22 ,


Thank you so much for your detailed and insightful response — I really appreciate the time and effort you put into answering all of my questions!

Your appreciation is my reward, although I rarely answer a question without learning something new myself. Hopefully this post helps others too.

I just have a quick follow-up:

When I use SSH in the simulator, it takes me to the Linux shell of the leaf/spine switch, where it seems I can’t really do anything meaningful (no NX-OS CLI, no useful show commands).

Correct. 

But when using real ACI hardware, SSH from the APIC takes me directly to the NX-OS-style CLI like Leaf#, which is much more familiar and usable.

Correct again.

so, my questions:
  1. Is there any real benefit to accessing the Linux shell on leaf/spine switches in the simulator?
    Or is it just there for internal/debug purposes and not useful for ACI tasks?

It is a little annoying that there are no show commands, but you can still use moquery and icurl to look at things - although I doubt there's anything you could see there that couldn't be seen with a similar command on the APIC (there's an icurl sketch after the moquery examples below).

In fact I've struggled to find a decent example of how the CLI on the Sim Leaf could be useful. But here goes (but honestly, it's not very useful):

If you look at a simple example like moquery -c bgpDom | grep dn, you'll get a list of BGP domains relevant to that particular leaf if executed on the leaf, whereas if executed on the APIC, you'll get a list of BGP domains on the whole system. E.g.:

# Executed on the SIM leaf with ID 1202
admin@leaf2:~> moquery -c bgpDom | egrep "^dn\ "
dn : sys/bgp/inst/dom-overlay-1
dn : sys/bgp/inst/dom-management
dn : sys/bgp/inst/dom-mgmt:inb
dn : sys/bgp/inst/dom-Tenant18:Production_VRF
admin@leaf2:~>
# I've just noticed that the prompt on the SIM says leaf2 whereas on a real system it would say leaf1202

# Executed on the SIM APIC
apic1# moquery -c bgpDom | egrep "^dn\ "
dn : topology/pod-1/node-1101/sys/bgp/inst/dom-overlay-1
dn : topology/pod-1/node-1101/sys/bgp/inst/dom-management
dn : topology/pod-1/node-1101/sys/bgp/inst/dom-mgmt:inb
dn : topology/pod-1/node-1201/sys/bgp/inst/dom-overlay-1
dn : topology/pod-1/node-1201/sys/bgp/inst/dom-mgmt:inb
dn : topology/pod-1/node-1201/sys/bgp/inst/dom-management
dn : topology/pod-1/node-1201/sys/bgp/inst/dom-common:SharedServices_VRF
dn : topology/pod-1/node-1201/sys/bgp/inst/dom-Tenant18:Production_VRF
dn : topology/pod-1/node-1202/sys/bgp/inst/dom-overlay-1
dn : topology/pod-1/node-1202/sys/bgp/inst/dom-management
dn : topology/pod-1/node-1202/sys/bgp/inst/dom-mgmt:inb
dn : topology/pod-1/node-1202/sys/bgp/inst/dom-Tenant18:Production_VRF
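
And since icurl got a mention above: it's essentially curl pointed at the node's local API endpoint, so the same class queries work as raw REST calls, on the APIC or on a leaf. A minimal sketch, assuming the http://localhost:7777 endpoint (the one I've seen documented; no login token is needed when it's run locally on the node):

apic1# icurl 'http://localhost:7777/api/class/bgpDom.json'          # same data as moquery -c bgpDom, as JSON
apic1# icurl 'http://localhost:7777/api/mo/uni/tn-Tenant18.json?query-target=subtree'   # walk a whole tenant subtree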
  2. What can I truly benefit from when using the APIC simulator?


For me, the greatest benefit is being able to test out scripting and explore configuration options in the GUI (especially if I don't have access to a real fabric). It also gives me a chance to explore things in newer versions before deploying them on a production system. For instance, I'm using Sim v6.1(3f), whereas on our student lab networks we are using 6.0(9d) on one and 5.3(2b) on the other. While answering this question, I was able to test that our setup scripts still work on v6.1(3f).

You may not be able to test end-to-end connectivity on the Sim, but you can be reasonably sure that if you create a configuration on the Sim and it has errors, it will have errors on a physical system too.
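
To make that concrete, here's a minimal sketch of the kind of scripting the Sim lets you test - the REST API behaves the same as on real hardware. The address and credentials are placeholders for your own Sim; aaaLogin sets the APIC-cookie that authenticates the later call:

# Log in and save the APIC-cookie (replace address and credentials with your Sim's)
curl -sk -c cookie.txt https://apic-sim.example.com/api/aaaLogin.json \
     -d '{"aaaUser":{"attributes":{"name":"admin","pwd":"yourpassword"}}}'

# Re-use the cookie to query the object model - the same classes moquery uses
curl -sk -b cookie.txt 'https://apic-sim.example.com/api/class/fvTenant.json'

If that second call returns your tenants, a script built against the Sim should carry straight over to a physical fabric.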

 

 

 

RedNectar aka Chris Welsh.
Forum Tips: 1. Paste images inline - don't attach. 2. Always mark helpful and correct answers, it helps others find what they need.

Hi @RedNectar ^^,

Thanks a lot for sharing your experience - it's really helpful to hear from someone who's already worked with ACI across different versions.

I’ve actually just started learning ACI recently and I’m trying to get more hands-on by building some labs using the simulator.

Right now I’m focusing on understanding how to configure policies, tenants, and EPGs through the APIC.
I do realize that with the simulator, I don’t get to see actual packet forwarding or access the full control/data plane on the leaf/spine switches — so the max I can really do is apply policies and see how APIC manages the configuration logic.

Do you think that’s enough from a practical learning standpoint for someone just getting started and hoping to eventually work in this field?

Also, if you have any recommended resources or study paths, I’d really appreciate it.

 

Hi @shahi22 ,

The ACI Simulator is great for practicing configs, but just sooooo frustrating that you can't actually test your configs.

There are many situations where a real-world setup is needed - testing whether your contracts work is one example. Neither the Sim nor real hardware will tell you if you have, for example, applied a contract the wrong way around.

So for practice - at the risk of being censured for beating my own drum - why not run through my tutorials (which are in desperate need of updating for ACI v5.2+ and which, from memory, were written using a simulator)?

I also recently found that one of the contributors to this site created some great content - including some videos - but I can't find it now. I'll update this response if I remember it later.

One site that I do remember finding some useful content on was https://haystacknetworks.com/category/aci 

Cisco runs webinars - you may find some of the old recordings useful (not sure if that page is accessible to the general public).

[Later] - Salman has some good free videos, and he also has some paid content somewhere: https://www.youtube.com/@salhiary

This community site has a resources section for ACI: https://community.cisco.com/t5/data-center-and-cloud-knowledge-base/cisco-aci-resources/ta-p/4315668

[Even Later] - There are a couple of helpful tutorial-type posts on Tomislav Kranjec's blog - they are a bit hard to find, but this page is a good starting point.

 

RedNectar aka Chris Welsh.
Forum Tips: 1. Paste images inline - don't attach. 2. Always mark helpful and correct answers, it helps others find what they need.
