Published on 01-15-2025 02:31 PM; edited on 01-16-2025 01:30 PM by Tyler Langston.
This article describes the steps to install a ThousandEyes Enterprise Agent (EA) on a Nexus device running NX-OS through the CLI. It covers installation using either the mgmt0 interface or the Ethernet ports, along with the routing requirements for each method.
Before continuing with this article, please ensure that the following prerequisites are met:
As we begin, here are high-level overviews of the two main pieces involved: NX-OS interfaces and the ThousandEyes Enterprise Agent.
There are two main ways to interface with an NX-OS device for the purposes of this article: via the management interface or via the Ethernet ports. We’ll cover both processes in more detail, but here is a quick overview:
The management interface on Cisco NX-OS devices, or mgmt0 as it displays on the device, provides out-of-band management which enables you to manage the device by its IPv4 or IPv6 address.
The management virtual routing and forwarding (VRF) instance is for management purposes only. Only the mgmt0 interface can be in the management VRF, and the mgmt0 interface cannot be assigned to another VRF. No routing protocols can run in the management VRF (static routes only).
Layer 2 and Layer 3 interfaces are used on the Nexus to forward or route control-plane and data-plane traffic.
ThousandEyes Enterprise Agents are lightweight, Linux-based software agents that serve as global vantage points, allowing users to run a variety of layered monitoring tests to gain insight into network and application performance, as well as user experience.
Enterprise Agents are intended to be installed in server-type environments and are designed to be online continuously in order to run scheduled tests. EAs require systems administrators to provide resources such as time synchronization and firewall/packet-filter rules for test and administrative traffic.
Each installation method this article covers requires the agent to have a continuous connection to the ThousandEyes cloud and to its test target servers.
For Agent installation with Nexus switches, the virtual network interface card (VNIC) connection only has two options: Gateway Bridge or Management.
Gateway Bridge: traffic goes out through the Ethernet ports to connect to the Internet.
Management: the agent uses the mgmt0 interface and the management VRF to connect to the Internet.
To begin, you’ll need to install the ThousandEyes Enterprise Agent image to your device.
The TAR file can be downloaded directly to the switch, or downloaded via a web browser from the ThousandEyes webapp and then transferred to the switch.
To download the TAR file, go to Cloud & Enterprise Agents > Agent Settings and click Add New Enterprise Agent. Select the Cisco App Hosting tab and then Nexus Switches.
Alternatively, you can run the following command on the NX-OS device (this article uses examples from version 4.4.4 and above):
N9K-01#copy https://downloads.thousandeyes.com/enterprise-agent/thousandeyes-enterprise-agent-<VERSION>.cisco.tar bootflash:
Enable the Cisco application hosting feature:
N9K-01(config)# feature app-hosting
Run the install command. Once it is installed, the status will be displayed as 'DEPLOYED'.
N9K-01(config)# app-hosting install appid TEA_Nexus package bootflash:thousandeyes-enterprise-agent-<VERSION>.cisco.tar
Installing package 'bootflash:/thousandeyes-enterprise-agent-<VERSION>.cisco.tar' for 'TEA_Nexus'. Use 'show app-hosting list' for progress.
TEA_Nexus installed successfully
Current state is DEPLOYED
N9K-01(config)# sh app-hosting list
App id State
---------------------------------------------------------
TEA_Nexus DEPLOYED
The next step will change depending on the interface you have decided to utilize (Mgmt0 or Gateway Bridge).
The following are the commands to run via the management interface in order to configure your ThousandEyes Agent.
N9K-01# sh ru app-hosting
!Command: show running-config app-hosting
!Running configuration last done at: Fri May 31 19:02:36 2024
!Time: Fri May 31 20:01:20 2024
version 10.3(4a) Bios:version 05.47
feature app-hosting
app-hosting appid TEA_Nexus !<<<<<<<<< Application installed
app-vnic management guest-interface 0
app-default-gateway X.X.X.X guest-interface 0 !<<<<<<<<< GW for the Management VRF or IP of Mgmt0
app-resource docker
prepend-pkg-opts
run-opts 1 "-e TEAGENT_ACCOUNT_TOKEN=XXXXXX" !<<<<<<<<< To retrieve the account token, in the TE portal navigate to Cloud & Enterprise Agents > Agent Settings > Enterprise Agents > Agents and click Add New Enterprise Agent
run-opts 2 "--hostname N9KAgent"
Additional runtime options can be used to set the configuration options in the docker container. Read more on that here.
NX-OS allows application containers to share network connectivity over the Cisco NX-OS management interface, so no additional routing configuration is required.
The following are the commands to run via the Gateway Bridge in order to configure your ThousandEyes Agent.
N9K-02# sh run app-hosting
!Command: show running-config app-hosting
!Running configuration last done at: Sat Jun 1 00:32:04 2024
!Time: Sat Jun 1 00:33:42 2024
version 10.3(3) Bios:version 05.47
feature app-hosting
app-hosting bridge 1
ip address 10.0.0.29/30 !<<<<< This subnet should not exist in another interface within the Nexus
app-hosting appid N9K_B1 !<<<<<<<<< Application installed
app-vnic gateway bridge 1 guest-interface 0
guest-ipaddress 10.0.0.30/30
app-default-gateway 10.0.0.29 guest-interface 0 !<<<<< IP of bridge 1. The guest interface number must match the one configured for bridge 1
app-resource docker
prepend-pkg-opts
run-opts 1 "-e TEAGENT_ACCOUNT_TOKEN=XXXXXXX" !<<<<<<<<< To retrieve the account token, in the TE portal navigate to Cloud & Enterprise Agents > Agent Settings > Enterprise Agents > Agents and click Add New Enterprise Agent
run-opts 2 "--hostname N9K-B1" !<<<<<<<<< The hostname cannot contain special characters, valid characters are [0-9a-zA-Z-.]
name-server0 8.8.8.8 !<<<<<<<<< DNS server reachable from the Agent
Additional runtime options can be used to set the configuration options in the docker container. Read more here.
Note: Once Bridge 1 has an IP address configured, its network is installed in the routing table, as shown below:
N9K-02(config)# sh run | i "app-hosting bridge" next 1
app-hosting bridge 1
ip address 10.0.0.29/30
N9K-02(config)# sh ip route
IP Route Table for VRF "default"
'*' denotes best ucast next-hop
'**' denotes best mcast next-hop
'[x/y]' denotes [preference/metric]
'%<string>' in via output denotes VRF <string>
10.0.0.28/30, ubest/mbest: 1/0
*via Null0, [1/0], 00:01:40, appmgr
<SNIP>
Because the Docker container subnet is installed in the routing table, it must not overlap with the subnet of any other L3 interface.
Beginning with Cisco NX-OS Release 10.3(2)F, the Bridge 1 subnet will be rejected if the IP is in use by either an interface or a virtual IP.
N9K-02(config-if)# sh run int vlan 200
!Command: show running-config interface Vlan200
!Running configuration last done at: Wed Jun 5 21:01:24 2024
!Time: Wed Jun 5 21:02:43 2024
version 10.3(3) Bios:version 05.47
interface Vlan200
no shutdown
ip address 200.0.0.20/24
N9K-02(config)# app-hosting bridge 1
N9K-02(config-app-hosting-bridge)# ip address 200.0.0.1/24
Invalid Bridge 1 v4 subnet.Unable to process bridge config
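The check shown above can be illustrated with Python's `ipaddress` module. This is only a sketch of the overlap logic using addresses from this article's examples, not the switch's actual validation code:

```python
# Sketch: checking that a proposed bridge subnet does not overlap an
# existing L3 interface subnet, as NX-OS 10.3(2)F and later enforce.
# The addresses come from the examples in this article.
import ipaddress

def overlaps_existing(bridge_cidr, interface_cidrs):
    """Return True if the bridge subnet overlaps any existing interface subnet."""
    bridge = ipaddress.ip_network(bridge_cidr, strict=False)
    return any(
        bridge.overlaps(ipaddress.ip_network(c, strict=False))
        for c in interface_cidrs
    )

# Vlan200 holds 200.0.0.20/24, so a bridge IP in 200.0.0.0/24 is rejected:
print(overlaps_existing("200.0.0.1/24", ["200.0.0.20/24"]))  # True
print(overlaps_existing("10.0.0.29/30", ["200.0.0.20/24"]))  # False
```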
Conversely, attempting to assign an interface an IP address that overlaps the Bridge 1 subnet will also fail.
N9K-02# sh run app-hosting
!Command: show running-config app-hosting
!Running configuration last done at: Wed Jun 5 21:11:25 2024
!Time: Wed Jun 5 21:12:17 2024
version 10.3(3) Bios:version 05.47
feature app-hosting
app-hosting bridge 1
ip address 200.0.0.21/30
app-hosting appid N9K_B1
app-vnic gateway bridge 1 guest-interface 0
guest-ipaddress 10.0.0.30/30
app-default-gateway 10.0.0.29 guest-interface 0
app-resource docker
prepend-pkg-opts
run-opts 1 "-e TEAGENT_ACCOUNT_TOKEN=XXXXXXX"
run-opts 2 "--hostname N9K-B1"
N9K-02# conf t ; int vlan 200
Enter configuration commands, one per line. End with CNTL/Z.
N9K-02(config-if)# ip address 200.0.0.21/24
% 200.0.0.21/24 overlaps with address configured as bridge 1 IP address
The following message is presented when the assigned IP is not a valid host address for the subnet:
N9K-02(config-app-hosting-bridge)# ip address 201.0.0.20/30
Invalid Bridge 1 v4 host id. Unable to process bridge config
A small subnet is best suited for the agent and bridge, as it consumes fewer of the available IP addresses.
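Both points above can be illustrated with Python's `ipaddress` module; this is a sketch using the article's example addresses, not the switch's validation code:

```python
# Sketch: why 201.0.0.20/30 is rejected as a bridge IP, and how many
# usable hosts a /30 provides. Addresses come from the examples above.
import ipaddress

def is_valid_bridge_ip(cidr):
    """A bridge IP must be a usable host, not the network or broadcast address."""
    iface = ipaddress.ip_interface(cidr)
    return iface.ip in iface.network.hosts()

print(is_valid_bridge_ip("201.0.0.20/30"))  # False: network address of the /30
print(is_valid_bridge_ip("10.0.0.29/30"))   # True

# A /30 leaves exactly two usable hosts: one for the bridge, one for the agent.
print(len(list(ipaddress.ip_network("10.0.0.28/30").hosts())))  # 2
```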
N9K-01# app-hosting activate appid TEA_Nexus
TEA_Nexus activated successfully
Current state is ACTIVATED
N9K-01# sh app-hosting list
App id State
---------------------------------------------------------
TEA_Nexus ACTIVATED
N9K-01# app-hosting start appid TEA_Nexus
TEA_Nexus started successfully
Current state is RUNNING
N9K-01# sh app-hosting list
App id State
---------------------------------------------------------
TEA_Nexus RUNNING
1. The hostname cannot contain special characters; valid characters are [0-9a-zA-Z-.]
2. DNS configuration is required on all ThousandEyes Agent deployments. However, if using the management interface, it is not possible to configure DNS within the agent.
N9K-01(config-app-hosting-appid)# name-server0 8.8.8.8
N9K-01(config-app-hosting-appid)# exit
N9K-01(config)# app-hosting activate appid TEA_Nexus
ERROR: Activate failed: Nameserver cannot be configured when using management interface
As a workaround, configure the DNS server directly on the Nexus device.
N9K-01(config)# ip dns source-interface mgmt 0
N9K-01(config)# ip name-server X.X.X.X
N9K-01(config)# ip domain-lookup
3. The guest interface private IP address is automatically assigned by the app-hosting framework. The agent’s IP address cannot be modified.
N9K-01(config-app-hosting-app-vnic)# guest-ipaddress 10.201.170.178/29
Invalid guest ipv4 address for app TEA_Nexus. Unable to process vnic config.
1. There is no option to add the app-hosting bridge into a routing process. Since the app-hosting bridge route source is appmgr, there is also no way to redistribute appmgr into a routing process. These limitations are documented in this enhancement bug.
2. The agent’s next hop must have a route back to the agent’s network (in our example, the subnet 10.0.0.28/30).
As a workaround to advertise the subnet in the network, there are two options:
On the host device, point a static route for a larger subnet that encompasses the container's subnet to Null0, then redistribute that static route.
On an adjacent device, point a static route for the app-hosting subnet to a routable interface on the app-hosting switch, then redistribute/advertise it from the adjacent device. A more detailed example can be found in the References section of this article.
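As a sketch of the first option, the configuration on the host Nexus could look like the following. The covering prefix 10.0.0.0/24, the prefix-list and route-map names, and the OSPF process ID are illustrative assumptions, not from a tested deployment:

```
! Cover the container subnet (10.0.0.28/30) with a larger static route
! to Null0, then redistribute static into the IGP via a route-map.
ip route 10.0.0.0/24 Null0
ip prefix-list TE-PFX seq 5 permit 10.0.0.0/24
route-map TE-SUBNET permit 10
  match ip address prefix-list TE-PFX
router ospf 1
  redistribute static route-map TE-SUBNET
```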
If you’re newer to NX-OS devices, using the mgmt0 interface to understand and troubleshoot data can be a different experience. We’ve included a brief summary of how to do so below.
On NX-OS devices, the guest interface private IP address is automatically assigned by the app-hosting framework. The ThousandEyes EA assigns the IP 192.168.10.130 to the agent and 192.168.10.129 to its default gateway; these values cannot be changed. The next hop is configurable: it should be the default gateway of the management VRF or the IP assigned to the mgmt0 interface.
In the below example, the app-default-gateway is 10.122.187.117
N9K-116(config)# sh run app-hosting
!Command: show running-config app-hosting
!Running configuration last done at: Wed Nov 13 04:38:31 2024
!Time: Wed Nov 13 04:56:41 2024
version 10.3(6) Bios:version 05.51
feature app-hosting
app-hosting appid EA_NXOS116
app-vnic management guest-interface 0
app-default-gateway 10.122.187.117 guest-interface 0
app-resource docker
prepend-pkg-opts
run-opts 1 "-e TEAGENT_ACCOUNT_TOKEN=XXX"
run-opts 2 "--hostname N9K-116"
run-opts 3 "-e TEAGENT_PROXY_TYPE=STATIC"
run-opts 4 "-e TEAGENT_PROXY_LOCATION=proxy.X.X.com:80"
This is the default route of the management VRF:
N9K-116# sh ip route vrf management
IP Route Table for VRF "management"
'*' denotes best ucast next-hop
'**' denotes best mcast next-hop
'[x/y]' denotes [preference/metric]
'%<string>' in via output denotes VRF <string>
0.0.0.0/0, ubest/mbest: 1/0
*via 10.122.187.117, [1/0], 00:28:27, static
10.122.187.112/28, ubest/mbest: 1/0, attached
*via 10.122.187.116, mgmt0, [0/0], 08:46:54, direct
10.122.187.116/32, ubest/mbest: 1/0, attached
*via 10.122.187.116, mgmt0, [0/0], 08:46:54, local
The traffic from the agent will follow this path, as visualized in the Path Visualization and the Traceroute Style Output:
Since the agent’s IP is translated, the rest of the devices see its traffic as if it came from the host’s mgmt0 IP. In other words, the default gateway of the management VRF receives the agent traffic with source IP 10.122.187.116, not 192.168.10.130.
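The translation can be modeled as a toy sketch. The values come from the example above; the real source NAT happens inside the app-hosting framework, not in Python:

```python
# Toy model of the source NAT performed for the management VNIC: the
# fixed container address is rewritten to the mgmt0 IP before traffic
# leaves the switch. This is illustrative only.

AGENT_IP = "192.168.10.130"   # fixed agent IP (not configurable)
MGMT0_IP = "10.122.187.116"   # host's mgmt0 address from the example

def translate_source(packet):
    """Rewrite the agent's source IP to the mgmt0 IP, as the switch does."""
    if packet["src"] == AGENT_IP:
        return {**packet, "src": MGMT0_IP}
    return packet

pkt = {"src": AGENT_IP, "dst": "10.122.187.117"}
print(translate_source(pkt)["src"])  # 10.122.187.116
```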
In a network test, a stream of up to 50 probe packets is sent by the agent and an equivalent number of response packets is expected to be received. The properties of this stream (response packet count and timing information) are then used for calculating connection loss, latency, and jitter.
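A sketch of how such metrics can be derived from a probe stream follows. ThousandEyes' exact formulas are internal; the function and the jitter definition (mean absolute difference between successive RTTs) are assumptions for illustration:

```python
# Illustrative sketch: deriving loss, latency, and jitter from a probe
# stream of `sent` packets whose replies arrived with the RTTs in
# `rtts_ms`. Not ThousandEyes' actual computation.

def stream_metrics(sent, rtts_ms):
    """Summarize a probe stream: loss %, mean latency, and jitter (ms)."""
    received = len(rtts_ms)
    loss_pct = 100.0 * (sent - received) / sent
    latency = sum(rtts_ms) / received if received else None
    # Jitter as the mean absolute difference between consecutive RTTs.
    diffs = [abs(b - a) for a, b in zip(rtts_ms, rtts_ms[1:])]
    jitter = sum(diffs) / len(diffs) if diffs else 0.0
    return {"loss_pct": loss_pct, "latency_ms": latency, "jitter_ms": jitter}

# A 50-probe stream where 2 replies were lost (48 RTTs recorded):
m = stream_metrics(50, [10.0, 12.0, 11.0, 13.0] * 12)
print(m["loss_pct"])  # 4.0
```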
When the test results report packet loss, it means that some of those 50 packets are getting lost. To narrow down if the packets are getting dropped by the agent on the Nexus, there are various tools available:
1. Packet captures.
a. Packet captures from the agent can be retrieved by the ThousandEyes Support team or using TCP Dump (details here)
b. Packet captures from the Nexus can be configured using Ethanalyzer
2. Access control list.
a. Configure an ACL with the statistics per-entry feature enabled, which specifies that the device maintains global statistics for packets that match the rules in the ACL.
3. Perform Nexus Health and Configuration Check (details here)
An example of troubleshooting packet loss using the mgmt0 interface can be found here (link)
The packet captures on the agent are filtered using the agent’s IP, while the captures taken on the Nexus are filtered using the mgmt0 interface IP.
In the screenshots below, there are two types of ICMP packets. The first is the 50-packet stream, with 50 ICMP echo requests and 50 ICMP echo replies.
The second is ICMP Time-to-live exceeded. When performing path discovery, the testing agent sends sets of probe packets with a sequentially increasing Time to Live (TTL) value in the IP header. Each “node” (routing device) between the testing agent and the test target decrements the TTL by one. When a node receives packets with TTL set to 1 and decrements it to zero, the packet is discarded and the node responds to the packet sender with an ICMP type 11 (Time to Live Exceeded in Transit) message.
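The TTL-decrement mechanism described above can be modeled as a small, pure-Python simulation. The hop names are hypothetical; real path discovery uses actual probe packets, and the final hop normally answers with an echo reply rather than a TTL-exceeded message:

```python
# Simulation of TTL-based path discovery: for each TTL value, every node
# on the path decrements the TTL; the node that decrements it to zero
# discards the probe and answers (ICMP type 11, Time to Live Exceeded).

def discover_path(hops):
    """Return the node that answers for each TTL value, in order."""
    discovered = []
    for ttl in range(1, len(hops) + 1):
        remaining = ttl
        for node in hops:
            remaining -= 1           # each node decrements the TTL
            if remaining == 0:       # TTL hit zero: this node replies
                discovered.append(node)
                break
    return discovered

path = discover_path(["N9K-117", "core-rtr", "edge-rtr", "target"])
print(path)  # ['N9K-117', 'core-rtr', 'edge-rtr', 'target']
```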
Once the test is run, the ACLs will capture how many packets transit the interface.
N9K-116# sh ip access-lists TE-ACL
IP access list TE-ACL
statistics per-entry
10 permit ip 10.24.212.9/32 10.122.187.116/32 [match=54]
20 permit ip any any [match=10642]
N9K-117# sh ip access-lists ACL-TE
IP access list ACL-TE
statistics per-entry
10 permit ip 10.122.187.116/32 10.24.212.9/32 [match=103]
20 permit ip any any [match=10510]
The host device, N9K-116, shows 54 matches: 50 for the ICMP echo replies, plus the ICMP TTL-exceeded messages from the path visualization measurement.
This configuration was performed on a Nexus N9K-C93360YC-FX2 running version 10.4(3).
Below is a configuration example to route the ThousandEyes Agent in the network by redistribution on the adjacent device using EIGRP.
In this example, LEAF-18 hosts the application, SW1 is the adjacent device performing the redistribution, and LEAF-17 serves as an example EIGRP neighbor.
App-hosting config:
LEAF-18(config)# sh run app-hosting
!Command: show running-config app-hosting
!Running configuration last done at: Sat Jan 4 07:30:28 2025
!Time: Sat Jan 4 07:41:24 2025
version 10.4(3) Bios:version 05.51
feature app-hosting
app-hosting bridge 1
ip address 10.5.0.9/30
app-hosting appid TEA_Nexus
app-vnic gateway bridge 1 guest-interface 0
guest-ipaddress 10.5.0.10/30
app-default-gateway 10.5.0.9 guest-interface 0
name-server0 8.8.8.8
app-resource docker
prepend-pkg-opts
run-opts 1 "-e TEAGENT_ACCOUNT_TOKEN=XXX"
run-opts 2 "--hostname N9K-B1"
The route to the application in LEAF-18 is type appmgr.
LEAF-18(config)# sh ip route
IP Route Table for VRF "default"
'*' denotes best ucast next-hop
'**' denotes best mcast next-hop
'[x/y]' denotes [preference/metric]
'%<string>' in via output denotes VRF <string>
5.5.5.0/24, ubest/mbest: 1/0, attached
*via 5.5.5.10, Vlan5, [0/0], 00:29:57, direct
5.5.5.10/32, ubest/mbest: 1/0, attached
*via 5.5.5.10, Vlan5, [0/0], 00:29:57, local
10.5.0.8/30, ubest/mbest: 1/0
*via Null0, [1/0], 00:01:18, appmgr
The TE Agent can ping any IP within LEAF-18.
root@N9K-B1:/# te-ping 5.5.5.10
Could not connect to 5.5.5.10: Connection reset by peer.
--- 5.5.5.10 ping statistics ---
50 packets transmitted, 50 received, 0.0% packet loss
However, to reach anything beyond LEAF-18, the route must be present.
Since it is not possible to advertise a route of type appmgr, and redistribution is not an option for this route type, the workaround is to rely on the adjacent device.
LEAF-18(config-router-af)# redistribute ?
bgp Border Gateway Protocol (BGP)
direct Directly connected
eigrp Enhanced Interior Gateway Routing Protocol (EIGRP)
isis IS-IS Routing for IPv4
lisp LISP EID-prefixes
maximum-prefix Max number of prefixes redistributed
ospf Open Shortest Path First (OSPF)
rip Routing Information Protocol (RIP)
static Static routes
In this case, the next hop is SW1. SW1 has a static route to the EA, configured using an IP of the host device as the next hop. Note: it must be an IP address; specifying the VLAN interface will not work.
SW1(config)# ip route 10.5.0.8/30 5.5.5.10
LEAF-18# sh run int vlan 5
!Command: show running-config interface Vlan5
!Running configuration last done at: Sat Jan 4 06:43:07 2025
!Time: Sat Jan 4 06:43:18 2025
version 10.4(3) Bios:version 05.51
interface Vlan5
no shutdown
ip address 5.5.5.10/24
Now, SW1 can reach the agent.
SW1(config-if)# ping 10.5.0.10
PING 10.5.0.10 (10.5.0.10): 56 data bytes
64 bytes from 10.5.0.10: icmp_seq=0 ttl=62 time=0.686 ms
64 bytes from 10.5.0.10: icmp_seq=1 ttl=62 time=0.428 ms
64 bytes from 10.5.0.10: icmp_seq=2 ttl=62 time=0.355 ms
64 bytes from 10.5.0.10: icmp_seq=3 ttl=62 time=0.326 ms
64 bytes from 10.5.0.10: icmp_seq=4 ttl=62 time=0.324 ms
LEAF-17 and SW1 are EIGRP neighbors. SW1 is redistributing the static route to the agent into EIGRP:
SW1# sh run eigrp
!Command: show running-config eigrp
!Running configuration last done at: Fri Jan 3 17:20:01 2025
!Time: Fri Jan 3 17:31:43 2025
version 10.3(4a) Bios:version 05.51
feature eigrp
router eigrp 5
address-family ipv4 unicast
autonomous-system 1
redistribute static route-map te-subnet
interface Vlan5
ip router eigrp 5
Now, LEAF-17 has a route to the agent through EIGRP:
LEAF-17(config)# sh ip route
IP Route Table for VRF "default"
'*' denotes best ucast next-hop
'**' denotes best mcast next-hop
'[x/y]' denotes [preference/metric]
'%<string>' in via output denotes VRF <string>
5.5.5.0/24, ubest/mbest: 1/0, attached
*via 5.5.5.15, Vlan5, [0/0], 00:38:06, direct
5.5.5.15/32, ubest/mbest: 1/0, attached
*via 5.5.5.15, Vlan5, [0/0], 00:38:06, local
10.5.0.8/30, ubest/mbest: 1/0
*via 5.5.5.5, Vlan5, [170/51456], 00:10:03, eigrp-5, external
LEAF-17 can also reach the agent:
PING 10.5.0.10 (10.5.0.10): 56 data bytes
64 bytes from 10.5.0.10: icmp_seq=0 ttl=62 time=0.705 ms
64 bytes from 10.5.0.10: icmp_seq=1 ttl=62 time=0.362 ms
64 bytes from 10.5.0.10: icmp_seq=2 ttl=62 time=0.341 ms
64 bytes from 10.5.0.10: icmp_seq=3 ttl=62 time=0.348 ms
64 bytes from 10.5.0.10: icmp_seq=4 ttl=62 time=0.327 ms
In the diagram below, the agent is hosted on N9K-116, while its gateway is N9K-117. The agent has an Agent to Server test with target 10.24.212.9.
To avoid confusion, this agent has only one test configured, so there is no need to filter the captures. Captures and ACLs can be made more specific, for example by matching on the target IP or protocol.
1. Packet capture example configuration on N9K-116:
ethanalyzer local interface mgmt limit-captured-frames 0 write bootflash:TE-Network-test.pcap
2. ACL configuration example on the mgmt0 interface. Since the agent’s IP is translated, the traffic is sourced from the mgmt0 IP.
N9K-116# sh ip access-lists TE-ACL
IP access list TE-ACL
statistics per-entry
10 permit ip 10.24.212.9/32 10.122.187.116/32 [match=0]
20 permit ip any any [match=395]
N9K-116# sh run int mgmt0
!Command: show running-config interface mgmt0
!Running configuration last done at: Wed Nov 13 04:16:31 2024
!Time: Wed Nov 13 04:18:48 2024
version 10.3(6) Bios:version 05.51
interface mgmt0
ip access-group TE-ACL in
vrf member management
ip address 10.122.187.116/28
Due to a limitation in the NX-OS platform, an outbound ACL on the mgmt0 interface does not support per-entry statistics. This is described in this known ticket.
As a workaround, the ACL is applied inbound on the gateway device, N9K-117.
N9K-116# sh ip route vrf management
IP Route Table for VRF "management"
'*' denotes best ucast next-hop
'**' denotes best mcast next-hop
'[x/y]' denotes [preference/metric]
'%<string>' in via output denotes VRF <string>
0.0.0.0/0, ubest/mbest: 1/0
*via 10.122.187.117, [1/0], 00:28:27, static
10.122.187.112/28, ubest/mbest: 1/0, attached
*via 10.122.187.116, mgmt0, [0/0], 08:46:54, direct
10.122.187.116/32, ubest/mbest: 1/0, attached
*via 10.122.187.116, mgmt0, [0/0], 08:46:54, local
N9K-117# sh ip access-lists ACL-TE
IP access list ACL-TE
statistics per-entry
10 permit ip 10.122.187.116/32 10.24.212.9/32 [match=0]
20 permit ip any any [match=665]
N9K-117# sh run int mgmt0
!Command: show running-config interface mgmt0
!Running configuration last done at: Wed Nov 13 04:02:59 2024
!Time: Wed Nov 13 04:19:14 2024
version 10.3(6) Bios:version 05.51
interface mgmt0
ip access-group ACL-TE in
vrf member management
ip address 10.122.187.117/28
3. Perform a Nexus health and configuration check.
Verify whether the inband path is stressed:
show hardware internal buffer info pkt-stats cpu
show hardware internal cpu-mac inband stats
show hardware internal cpu-mac mgmt counters
show hardware internal cpu-mac inband counters
Verify the CoPP status and check for incrementing CoPP drops in the hardware:
show copp status
show policy-map interface control-plane | i i drop|violate|class
show hardware rate-limit