
First time Sandbox user - having issues accessing the switches

hikerguy
Level 1

This whole environment is all new to me, so it's probably something simple I'm not doing.

I selected the "Always-on" option for the sandbox named "Open NX-OS with Nexus 9Kv". I received both emails and installed AnyConnect.

I connected to the VPN using AnyConnect.

I read through all the instructions and have all the management IPs and passwords.

I logged into my sandbox via the link provided in my email.

I can start the Linux box (CentOS). I "thought" I'd be able to ping the switches from the Linux box and log into them from there, but none respond. 

So, how exactly do I access these switches? I thought I'd see four icons to represent the switches, but I don't see that either. I'm assuming now I have to ssh from the CentOS box, but since I can't ping any of the four switches from there, I'm at a loss.

In addition, I can access the Linux box from my PC using SecureCRT, but I still can't ping any of the switches. Something tells me I need to power these on, but I see no way to do that. 

 

Any help is appreciated.

Thanks,

Andy

 

Here's what I see on my sandbox:

 

[screenshot: Capture.JPG]

 

20 Replies

gordan
Level 1

Hi Andy

This video on YouTube might be of assistance to you.

CCNA VIRL Labs - VIRL Server from DevNet

https://www.youtube.com/watch?v=761glRrJhX4

For a first-time sandbox user, this set of instructions might be worth watching.

 

Thanks for the link, Gordan. That helped out a lot. Now I'm having issues getting the devices to load. I click on the Launch new simulation button and part of the page comes up, but it also keeps showing "Loading nodes....". I left it that way for 7 minutes with no change. I've tried FF, IE, and Chrome, and have cleared the cache in all three browsers, with no change. I also rebooted my PC with no change.

 

A couple of days ago I got it to load the nodes and was able to drag icons into the work area, but that was a bit clunky as well. Sometimes it would let me drag and drop an icon with no problem. Other times I'd click and drag and see a "+" symbol next to the icon and I couldn't drop it on the workspace.

 

From the little I've done so far (about 5 attempts), I was successful building a lab just one time. I'm running Windows 10 with 12 GB of memory.

 

Anyone have any ideas why it's not loading the icons? Is the paid version of VIRL this problematic as well?

 

[screenshot: Capture.JPG]

You should be able to reach the switches on their mgmt IP addresses from both your local machine and the centos/devbox in the sandbox. They do take a while to load up, btw. This sandbox is built using virlutils - https://github.com/CiscoDevNet/virlutils. When you are on the centos/devbox you can issue the virl commands to see the status, and from the centos/devbox you can also use the console access.
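As a small convenience on top of the status check above, something like this (a hypothetical helper, not part of virlutils) can pull the names of stuck nodes out of saved `virl nodes` output; it relies only on the table layout shown later in this thread, where the node name is the second whitespace-separated field of each row:

```shell
# Hypothetical helper: print the names of nodes whose Reachable column
# says UNREACHABLE, given a file containing `virl nodes` table output.
# The node name is field 2 because field 1 is the box-drawing border.
extract_unreachable() {
  awk '/UNREACHABLE/ { print $2 }' "$1"
}

# Example usage on the devbox:
#   virl nodes > nodes.txt
#   extract_unreachable nodes.txt
```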

 

HTH

Please mark this as helpful or solution accepted to help others
Connect with me https://bigevilbeard.github.io

bigevilbeard, thanks for replying. I'm making some headway but still have some issues. I have a topology with three Nexus switches, all connected as shown. I'm using SecureCRT and can ssh to what I'll call the jump server (172.16.30.106). From there, I can only ping nx-osv-2 (and ssh into it). I can't figure out why osv-1 and osv-3 show unreachable (and I verified this by trying to ping them from the jump server). Any idea why I can only reach one of the three switches?

 

[screenshot: Capture.JPG]

 

[screenshot: Capture.JPG]

Thanks for the update. Using virlutils, destroy the topology and relaunch it (you can do this with virl down), or you can also stop/start the switches on their own; they get stuck in booting sometimes. The best way to check this is virl console, btw.
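The down/up cycle described above can be sketched as a tiny wrapper (illustrative only; `virl` here is the virlutils CLI, and the guard just echoes the commands on machines where it isn't installed, so nothing destructive runs by accident):

```shell
# Sketch of the destroy/relaunch sequence. Run from the directory that
# holds topology.virl. On a box without the virlutils 'virl' CLI it
# prints what it would run instead of running it.
relaunch() {
  for step in "virl down" "virl up" "virl nodes"; do
    if command -v virl >/dev/null 2>&1; then
      $step                      # run for real on the devbox
    else
      echo "would run: $step"    # dry run elsewhere
    fi
  done
}
```

A node that stays stuck in boot afterwards can then be inspected with `virl console <node-name>`.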

 

HTH  

Please mark this as helpful or solution accepted to help others
Connect with me https://bigevilbeard.github.io

I reserved the NXOS VIRL lab just now; here are the outputs. I ssh'd into the devbox/jump host, enabled the venv, and moved to the following directory. From there, I issued the virl up command and the network came up.

 

(venv) [developer@devbox sbx_nxos]$pwd
/home/developer/code/sbx_nxos

 

(venv) [developer@devbox sbx_nxos]$virl nodes
Here is a list of all the running nodes
╒══════════════╤═════════════╤═════════╤═════════════╤════════════╤══════════════════════╤════════════════════╕
│ Node         │ Type        │ State   │ Reachable   │ Protocol   │ Management Address   │ External Address   │
╞══════════════╪═════════════╪═════════╪═════════════╪════════════╪══════════════════════╪════════════════════╡
│ nx-osv9000-1 │ NX-OSv 9000 │ ACTIVE  │ UNREACHABLE │ telnet     │ 172.16.30.101        │ N/A                │
├──────────────┼─────────────┼─────────┼─────────────┼────────────┼──────────────────────┼────────────────────┤
│ nx-osv9000-2 │ NX-OSv 9000 │ ACTIVE  │ REACHABLE   │ telnet     │ 172.16.30.102        │ N/A                │
├──────────────┼─────────────┼─────────┼─────────────┼────────────┼──────────────────────┼────────────────────┤
│ nx-osv9000-3 │ NX-OSv 9000 │ ACTIVE  │ REACHABLE   │ telnet     │ 172.16.30.103        │ N/A                │
├──────────────┼─────────────┼─────────┼─────────────┼────────────┼──────────────────────┼────────────────────┤
│ nx-osv9000-4 │ NX-OSv 9000 │ ACTIVE  │ REACHABLE   │ telnet     │ 172.16.30.104        │ N/A                │
╘══════════════╧═════════════╧═════════╧═════════════╧════════════╧══════════════════════╧════════════════════╛

I can ping the devices and ssh to them ok

 

(venv) [developer@devbox sbx_nxos]$ping 172.16.30.102
PING 172.16.30.102 (172.16.30.102) 56(84) bytes of data.
64 bytes from 172.16.30.102: icmp_seq=1 ttl=254 time=1.71 ms
64 bytes from 172.16.30.102: icmp_seq=2 ttl=254 time=2.10 ms
^C
--- 172.16.30.102 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 1.710/1.905/2.101/0.200 ms
(venv) [developer@devbox sbx_nxos]$virl ssh  nx-osv9000-2
Attemping ssh connection to nx-osv9000-2 at 172.16.30.102
Warning: Permanently added '172.16.30.102' (RSA) to the list of known hosts.
User Access Verification
Password:

Cisco NX-OS Software
Copyright (c) 2002-2018, Cisco Systems, Inc. All rights reserved.
[removed]
http://www.gnu.org/licenses/lgpl.html
***************************************************************************
*  Nexus 9000v is strictly limited to use for evaluation, demonstration   *
*  and NX-OS education. Any use or disclosure, in whole or in part of     *
*  the Nexus 9000v Software or Documentation to any third party for any   *
*  purposes is expressly prohibited except as otherwise authorized by     *
*  Cisco in writing.                                                      *
***************************************************************************
nx-osv9000-2#

I noticed in your last post you only have three switches now. This does not look like the open NX-OS lab on VIRL; which sandbox is this now, please?

 

Thanks!

Please mark this as helpful or solution accepted to help others
Connect with me https://bigevilbeard.github.io

bigevilbeard, thanks for getting back to me. My earlier sandbox reservation expired. I started a new session, with some new issues:

I can no longer log into the VIRL server from SecureCRT, but I can ping it from a Windows cmd prompt, and I can access the server by opening a webpage and logging in using guest/guest. When I use those same credentials in SecureCRT, it's not accepting them.

 

What do you mean when you said "destroy the topology and relaunch it..."?

How do I access the virl console?

I watched this video by Hank Preston (with David Bombal) that had some helpful info, but I'm still a bit lost. This is all new to me (which is probably obvious by now lol)

https://www.youtube.com/watch?v=S0jfZLobFdU

You said you "enabled the venv". What does that mean, what is its purpose, and how do I enable it? Do I need to be concerned with that if I'm doing everything from the web interface and SecureCRT?

Why did you have to move to the following directory?

/home/developer/code/sbx_nxos

 

When I go to my lab, the lab does still show Active:

 

I'm also seeing the following error message when trying to bring my topology up (I executed the command virl use AndyN9K first, before trying to bring up the sim):

 

[screenshots: four attachments, each named Capture.JPG]

I posted earlier but now I see my post is missing (along with the pics I included). My sim/reservation is expired now, but here's what I had posted:

 

bigevilbeard, thanks for getting back to me. My sandbox reservation expired, but I started a new one and have some more questions:

How do I access the virl console?

I watched this video by Hank Preston (with David Bombal) that had some helpful info, but I'm still a bit lost. This is all new to me (which is probably obvious by now lol)

VIRL tutorial

You said you "enabled the venv". What does that mean, what is its purpose, and how do I enable it?

Why did you have to move to the following directory?

/home/developer/code/sbx_nxos

When I executed the command virl ls --all, I did see my sim running and saw the nodes when I ran virl nodes. However, even after issuing virl use AndyNK9 (the name of my lab), then virl down, virl up, the status didn't change. During this time, my lab did show active in the GUI. Why didn't bouncing the lab bring my nodes up?

Keep in mind I'm doing everything from the website or using SecureCRT (after doing all this, I did end up installing Python 3.7 and virlutils locally, but I don't plan on running virl from my PC. I want to run it all in the Cloud).

 

Thanks,

 

Andy 

No problem, let me reply inline to each question.

 

How do I access the virl console?

You can do this from the devbox/jumphost in sandbox with: 

virl console [device name]

You said you "enabled the venv". What does that mean, what is it's purpose and how do I enable this?

 

Virtualenv is a tool that lets you create an isolated Python environment for your project. It creates an environment that has its own installation directories, that doesn’t share dependencies with other virtualenv environments (and optionally doesn’t access the globally installed dependencies either). You can even configure what version of Python you want to use for each individual environment. It's very much recommended to use virtualenv when dealing with Python applications.
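As a generic sketch of that workflow (nothing here is specific to the sandbox devbox, where the venv already exists; `myenv` is an arbitrary name):

```shell
# Create and use an isolated Python environment.
python3 -m venv myenv                      # own python/pip under ./myenv
. myenv/bin/activate                       # prepend myenv/bin to PATH
# pip install virlutils                    # would install the 'virl' CLI into myenv only
python -c 'import sys; print(sys.prefix)'  # prints a path inside myenv
deactivate                                 # back to the system environment
```

On the devbox this is why the prompt shows `(venv)`: the sandbox's own environment has already been activated.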

 

Why did you have to move to the following directory?

 

To bring up the topology of Nexus switches with the virl up command, you need to be in the directory that contains the topology.virl file. You do not have to use that exact directory, btw: once you find an interesting topology, you can either pull it into your local directory (as topology.virl) or launch it directly.

 

Why didn't bouncing the lab bring my nodes up?

 

Maybe because of the above reason: you need to be in the directory of the topology.virl file. When you ran virl ls --all you were looking in all directories, since that is what the --all flag does. Think of adding --all as a global search function.

 

From what I can see, though, you have built your own topology. This is fine, but you might want to try the built-in one first to get familiar with how this works; otherwise you are troubleshooting a new topology at the same time as learning how the tooling works.

 

Hope this helps, let me know otherwise

 

Please mark this as helpful or solution accepted to help others
Connect with me https://bigevilbeard.github.io

I think I'm missing something small yet significant. I'm running into the same issue (and actually took a step back tonight).

I created a basic topology with three routers. I never could figure out how to go back to the UWM page and display the nodes, so I resorted to a method that's worked consistently: building the topology, downloading it, then re-launching that topology from the UWM page. When I do that, I see the node status:

[screenshot: Capture.JPG]

But, as you can see, the nodes still show unreachable. And here's what I see from the jump server: I'm in the directory with the topology.virl file, my lab shows Active, and the nodes show Active, but once again they're unreachable:

[screenshot: Capture.JPG]

I am not running VIRL from my PC. I'm doing all of this strictly from the sandbox. The only thing I'm doing from my PC is connecting to my lab using AnyConnect and connecting to the VIRL server and DevBox/CentOS box using SecureCRT.

I am able to access each node via the console port. But why can't I access them using the IPs above, or the IPs provided on the main sandbox page (below)? I can't ping any of the four IPs above (or the four IPs below) from the VIRL server or jump server. Where do these 8 IPs come from, and how am I supposed to be able to reach them?

  • Switch management IPs:
  • nx-osv9000-1: 172.16.30.101
  • nx-osv9000-2: 172.16.30.102
  • nx-osv9000-3: 172.16.30.103
  • nx-osv9000-4: 172.16.30.104
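A quick way to check that exact "can I reach them" question from the devbox is a sweep like this (a sketch using plain ping; the addresses are the management IPs listed above, and nothing else here is sandbox-specific):

```shell
# Ping each address once and report; run from the jump host/devbox.
sweep() {
  for ip in "$@"; do
    if ping -c 1 -W 2 "$ip" >/dev/null 2>&1; then
      echo "$ip reachable"
    else
      echo "$ip UNREACHABLE"
    fi
  done
}

sweep 172.16.30.101 172.16.30.102 172.16.30.103 172.16.30.104
```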

Also, despite the info on the instructions tab (on the lab page) saying the login is cisco/cisco, I have to use admin/admin when logging in to the console port.

I hope you can give me some tips to get this going and make the process smoother. This will be a great tool once I work out the kinks.

Thanks,

Andy

 

 

Hey Andy,

 

Seeing unreachable is a red herring; this was a known bug in older versions (one I saw often on this lab). I am not sure what causes the test to fail.

 

The IP addresses are created dynamically (unless otherwise configured). You will see in your virl file:

 

{{ gateway }} - will be replaced with the default gateway of the flat network

More details here --> https://developer.cisco.com/codeexchange/github/repo/CiscoDevNet/virlutils/
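For illustration only (this stanza is hypothetical, not copied from the sandbox's actual topology.virl), a placeholder of that kind sits in the device config inside the .virl file and gets filled in when the simulation launches:

```
vrf context management
  ip route 0.0.0.0/0 {{ gateway }}
```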

 

Some ideas/questions

 

  • Via the virl console, check that the running configuration's device IP address and gateway are correct
  • For usernames: is this your topology or the four-node Nexus one, please? Check the configuration; as you know, the console can have a different username/password for console access (you are correct that ssh should be cisco/cisco)

One issue we have here: I cannot see your virl topology, which makes troubleshooting hard, as we could be looking at unknown issues. If we can stick to the four-node Nexus one, I can check this with you and we can ensure the instructions and build are correct. If you want to keep importing your own devices, please use this sandbox, as it runs newer versions of the Nexus code --> https://devnetsandbox.cisco.com/RM/Diagram/Index/6b023525-4e7f-4755-81ae-05ac500d464a?diagramType=Topology

 

Thanks!

Please mark this as helpful or solution accepted to help others
Connect with me https://bigevilbeard.github.io

bigevilbeard, I appreciate all the help you've provided so far. Let me go at this a different way. When I reserve a lab (in this case, the Data Center lab with Nexus switches) and can log into the DevBox/jump host, shouldn't I be able to ping any of the management IPs or loopback IPs from the jump host (as long as my lab shows Active, which it does)? This is the problem I'm having now. I obviously have my VPN running, since I'm in the DevBox (from my PC). But I don't see any active simulations when I run virl ls --all (except ~jumphost). I'm following the instructions in the lab, but I'm getting nowhere.

 

 

Tom, could you share the link to the sandbox you are reserving, please? Let me see if I can replicate; as we have a few Nexus-based sandboxes, I want to ensure I am checking the same one you are.

 

Thanks!

Please mark this as helpful or solution accepted to help others
Connect with me https://bigevilbeard.github.io

Here ya go....

 

https://devnetsandbox.cisco.com/RM/Diagram/Index/468dd5e4-83d8-4b7a-9bd6-6f58b1d8246a

 

I just have this gut feeling there's one simple step I'm missing.
