
The CiscoLive! US 2012 Network : What We Made Possible (Part 1)




CiscoLive! US 2012 just completed, and man am I tired.  Not only did I do my usual network setup, management, and speaking gigs, but I flew from San Diego back home for a day and then to Paris to present at a conference out here.  I can't wait for the time off to begin.

I wanted to share a bit about how things came together for the CiscoLive! US network, our triumphs, issues, and lessons learned.  Unlike previous blogs, I'll break this one up to keep your eyes from crossing.  Again, forgive me if I miss any details.

Things all started around March 13 when the team had our kick-off call.  Many of the same players were returning from CiscoLive! Las Vegas.  Unlike that show, this one promised to be less stressful.  We were going to have earlier access to the San Diego Convention Center than we did with the Mandalay Bay.  This meant that we'd have the gear in place before we showed up.  We were also going to be getting a whole slew of modern 3560-E series switches for the intermediate distribution areas and new 3560CG switches for the rooms.  This was good, as we would be able to standardize on software and configs.  And since I was the guy who would pre-stage the access layer, that meant less work.

The network was pre-staged in Building 17 of the Cisco San Jose, California campus.  This was great for those who happened to be in SJ.  It wasn't so great for Jason Davis and me, who are in Research Triangle Park, North Carolina.  Not to worry, the network core team set up a GRE tunnel from our Sunnyvale colo to Building 17...only, they forgot to really clear things with Cisco IT.  This led to some, shall we say, downtime.


Once the access issues were sorted out, Jason got to work configuring our UCS B-Series-based data center.  We had two UCS 5108 chassis connected with Nexus 5548s back to our 6509-E Sup2T VSS core.  Our partner NetApp provided the storage for our DCN, thus making this a FlexPod configuration.  With all that compute power, Jason staged only two of the blades.  We'd later bring others online, but even with all the apps we loaded on those two during staging, the CPU barely cracked 2%.

Once Jason got ESXi and vCenter going on the UCS blades, I loaded Cisco Prime LMS 4.2 in order to do the configuration and upgrades on all of the access and IDF switches.  LMS 4.2 made short work of this.  I used the venerable SWIM to deploy 15.0(1)SE2 to the 3560-E switches and 12.2(55)EX3 to the 3560CG switches.  There's nothing like rolling out code to 40 switches at a time to make you feel like you should be doing other things.

To get the switches running on my standard configuration, I created a few user-defined tasks in Netconfig.  I created one task for each of the main distribution centers in the show (Convention Center, Hilton, Marriott, and WoS) as well as a task for EEM and one for Smart Call Home (more on those later).  The reason we needed a task for each distribution center was that we used different management VLANs in each; Layer 2 stopped at the distribution.  Here's a picture of the overall topology to give you a better idea of how the network architecture looked.
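To give you a flavor of what one of those per-distribution tasks pushed, here's a minimal sketch.  The VLAN ID, names, and addressing below are hypothetical placeholders, not the actual show values:

```
! Hypothetical management VLAN setup for one distribution (e.g., Convention Center)
vlan 100
 name MGMT-CONVCTR
!
interface Vlan100
 description Management VLAN - Convention Center
 ip address 10.100.0.10 255.255.255.0
 no shutdown
!
! Access switches were L2-only, so a default gateway rather than a routed uplink
ip default-gateway 10.100.0.1
```

Each distribution center's task was the same shape; only the VLAN and addressing differed, which is exactly why one Netconfig task per distribution was the cleanest approach.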


Based on what I had learned from troubleshooting access issues last year, it was helpful to know what devices were connected to what ports.  While it's easy to create meaningful port descriptions at the main distribution and core layers of the network, it's a bit harder at the access layer where things tend to move around (especially at an event like CiscoLive!).  Therefore, I used a bit of embedded automation to automatically set port descriptions based on the CDP neighbor that connected to the port last.  I created a separate Netconfig task to deploy this during pre-stage.
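A minimal sketch of that kind of applet follows.  The actual task details are attached to the post; this version assumes an EEM release that supports the neighbor-discovery event detector (EEM 4.0 and later), and the interface pattern is an illustrative assumption:

```
! Sketch: update the port description whenever a new CDP neighbor appears
event manager applet SET-PORT-DESC
 event neighbor-discovery interface regexp GigabitEthernet.* cdp add
 action 1.0 cli command "enable"
 action 2.0 cli command "configure terminal"
 action 3.0 cli command "interface $_nd_local_intf_name"
 action 4.0 cli command "description $_nd_cdp_entry_name"
```

On images without the neighbor-discovery detector, a similar effect can be approximated by triggering on CDP-related syslog messages instead, at the cost of messier parsing.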

Finally, I wanted to show off some of the capabilities we provide in our Smart Services offerings.  I used LMS to give me a report of all of the switch serial numbers at the IDF and access layers then I had those switches registered with Smart Call Home.  This way, I could be proactively notified if anything strange was happening with the switches (e.g., hardware issues or severe events) during the show.  Since I had a somewhat specialized configuration for SCH (due to show security), I created a custom user-defined Netconfig task to deploy SCH onto the switches.
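The core of an SCH task boils down to just a few lines.  Here's a generic sketch; the contact address is a placeholder, and the show-specific security tweaks I mentioned aren't reproduced here:

```
! Sketch: basic Smart Call Home registration pointing at the Cisco TAC profile
service call-home
call-home
 contact-email-addr noc@example.com
 profile "CiscoTAC-1"
  active
  destination transport-method http
```

With the switch serial numbers from the LMS report registered against a Smart Call Home account, hardware and severe-event messages from any of these switches would surface proactively during the show.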


For your reference, I have attached all of the Netconfig task details to this post.  Enjoy!

After all of the pre-staging had been completed, the network was ready to ship from San Jose to San Diego.  We shut everything down (some things more quickly than others), then packed up the equipment on May 25.  Next stop, San Diego.

More to come in the next installment.

Cisco Employee

Awesome! Waiting for next installment!!

Frequent Contributor

Vinod, I hope you received notification that Parts 2 and 3 have been posted!
