444 Views · 15 Helpful · 2 Replies

vManage Mgmt eth1

I am trying to SCP some files from vshell to another server on my management network. vManage has an IP of 172.16.31.76, and I am trying to SCP to 172.16.31.75.

 

I can ping the vManage mgmt IP from the gateway, but from vshell I cannot. So I run "ip link show", and it doesn't even list eth1:

vmanage:~$ ping 172.16.31.76
PING 172.16.31.76 (172.16.31.76) 56(84) bytes of data.
^C
--- 172.16.31.76 ping statistics ---
3 packets transmitted, 0 received, 100% packet loss, time 1999ms

vmanage:~$ ping 172.16.31.75
PING 172.16.31.75 (172.16.31.75) 56(84) bytes of data.
^C
--- 172.16.31.75 ping statistics ---
3 packets transmitted, 0 received, 100% packet loss, time 2000ms

vmanage:~$ ping 172.16.31.1
PING 172.16.31.1 (172.16.31.1) 56(84) bytes of data.
^C
--- 172.16.31.1 ping statistics ---
11 packets transmitted, 0 received, 100% packet loss, time 10009ms

vmanage:~$ ping 172.16.31.1
PING 172.16.31.1 (172.16.31.1) 56(84) bytes of data.
^C
--- 172.16.31.1 ping statistics ---
3 packets transmitted, 0 received, 100% packet loss, time 2000ms

vmanage:~$ ip link show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: <BROADCAST,MULTICAST,ALLMULTI,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000
link/ether 50:00:00:0a:00:00 brd ff:ff:ff:ff:ff:ff
4: gre0: <NOARP> mtu 1476 qdisc noop state DOWN mode DEFAULT group default
link/gre 0.0.0.0 brd 0.0.0.0
5: gretap0: <BROADCAST,MULTICAST> mtu 1476 qdisc noop state DOWN mode DEFAULT group default qlen 1000
link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
6: sit0: <NOARP> mtu 1480 qdisc noop state DOWN mode DEFAULT group default
link/sit 0.0.0.0 brd 0.0.0.0
7: tun_1_0: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1024 qdisc pfifo_fast state UNKNOWN mode DEFAULT group default qlen 500
link/none
8: tun_0_0: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1024 qdisc pfifo_fast state UNKNOWN mode DEFAULT group default qlen 500
link/none
10: veth0512: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000
link/ether fe:0d:e3:d3:67:c1 brd ff:ff:ff:ff:ff:ff
11: system: <POINTOPOINT,NOARP,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN mode DEFAULT group default qlen 500
link/none
vmanage:~$

2 Accepted Solutions

Andrea Testino
Cisco Employee

Hi there!

This is actually all expected behavior (albeit annoying). vshell is limited to the interfaces in VPN 0 (transport) for these types of CLIs. Cisco employees (TAC/BU) have root access to vshell, which can bypass that limitation, but unfortunately that is not possible externally. That said, this seems to have been an oversight in the implementation, so I'll file an enhancement internally to allow customers to ping sourced from different VPNs/interfaces in future releases.

Sample from a known-working vManage with the same behavior:

 

vManage# show interface | tab
                                       IF      IF      IF                                                                TCP
                  AF                   ADMIN   OPER    TRACKER  ENCAP                                     SPEED          MSS
VPN  INTERFACE    TYPE  IP ADDRESS     STATUS  STATUS  STATUS   TYPE   PORT TYPE  MTU  HWADDR             MBPS   DUPLEX  ADJUST  UPTIME       RX PACKETS  TX PACKETS
----------------------------------------------------------------------------------------------------------------------------------------------------------------------
0    eth1         ipv4  10.0.2.224/24  Up      Up      -        null   transport  -    06:b1:c8:90:42:b2  1000   full    -       32:22:48:30  2311817918  1726984419
0    eth2         ipv4  10.0.3.222/24  Up      Up      -        null   service    -    06:c3:89:b3:92:44  1000   full    -       32:22:48:29  355589      355596
0    system       ipv4  1.1.1.5/32     Up      Up      -        null   loopback   -    -                  1000   full    -       32:22:50:27  0           0
0    docker0      ipv4  -              Down    Down    -        -      -          -    02:42:b5:db:23:a9  1000   full    -       -            -           -
0    cbr-vmanage  ipv4  -              Down    Up      -        -      -          -    02:42:d9:df:d5:67  1000   full    -       -            -           -
512  eth0         ipv4  10.0.1.226/24  Up      Up      -        null   mgmt       -    06:bf:58:9c:43:70  1000   full    -       32:22:48:30  3456356     3551815

 

I can ping from either eth0 or eth1 in their respective VPNs without issue:

 

vManage# ping vpn 512 10.0.1.1
Ping in VPN 512
PING 10.0.1.1 (10.0.1.1) 56(84) bytes of data.
64 bytes from 10.0.1.1: icmp_seq=1 ttl=64 time=0.031 ms
64 bytes from 10.0.1.1: icmp_seq=2 ttl=64 time=0.034 ms
^C
--- 10.0.1.1 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1030ms
rtt min/avg/max/mdev = 0.031/0.032/0.034/0.005 ms

vManage# ping vpn 0 10.0.2.1
Ping in VPN 0
PING 10.0.2.1 (10.0.2.1) 56(84) bytes of data.
64 bytes from 10.0.2.1: icmp_seq=1 ttl=64 time=0.032 ms
64 bytes from 10.0.2.1: icmp_seq=2 ttl=64 time=0.032 ms
64 bytes from 10.0.2.1: icmp_seq=3 ttl=64 time=0.035 ms
^C
--- 10.0.2.1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2060ms
rtt min/avg/max/mdev = 0.032/0.033/0.035/0.001 ms

 

However, notice that once in vshell, I cannot ping sourcing from eth0, which in my case is the interface in VPN 512 (the non-transport, management VPN):

 

vManage# vshell

vManage:~$ ping 10.0.2.1
PING 10.0.2.1 (10.0.2.1) 56(84) bytes of data.
64 bytes from 10.0.2.1: icmp_seq=1 ttl=64 time=0.041 ms
64 bytes from 10.0.2.1: icmp_seq=2 ttl=64 time=0.039 ms
64 bytes from 10.0.2.1: icmp_seq=3 ttl=64 time=0.032 ms
^C
--- 10.0.2.1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2033ms
rtt min/avg/max/mdev = 0.032/0.037/0.041/0.006 ms

vManage:~$ ping 10.0.1.1
PING 10.0.1.1 (10.0.1.1) 56(84) bytes of data.
^C
--- 10.0.1.1 ping statistics ---
5 packets transmitted, 0 received, 100% packet loss, time 4081ms

vManage:~$ ping 10.0.1.1 -c 5 -I eth0
ping: SO_BINDTODEVICE: Invalid argument 

 

Edit, following Dan's reply: it looks like this has already been worked around:

vManage# request execute vpn 512 bash

vManage:~$ ping 10.0.1.1
PING 10.0.1.1 (10.0.1.1) 56(84) bytes of data.
64 bytes from 10.0.1.1: icmp_seq=1 ttl=64 time=0.033 ms
64 bytes from 10.0.1.1: icmp_seq=2 ttl=64 time=0.039 ms
^C
--- 10.0.1.1 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1009ms
rtt min/avg/max/mdev = 0.033/0.036/0.039/0.003 ms
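To close the loop on the original question: once inside the VPN 512 shell, scp inherits that namespace, so the original transfer should work the same way. A sketch using the addresses from the question (the local file path and remote username below are placeholders):

```
vManage# request execute vpn 512 bash

vManage:~$ scp /home/admin/example-file.tar.gz admin@172.16.31.75:/tmp/
```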

Hope that helps!

 

- Andrea, CCIE #56739 R&S


Dan Frey
Cisco Employee

Is your interface in VPN 512? If so, try opening vshell in that specific VPN with the command "request execute vpn 512 bash".

 

