Disciplined engineer with over 20 years of experience: highly trained, certified, and motivated, with a broad spectrum of proficiencies across all aspects of enterprise networking. Currently focused on data center consolidation and virtualization for networks, storage fabrics, and compute. Holding the industry's top certifications from Cisco, VCE, VMware, and NetApp, I have consistently designed and implemented converged voice and data networks and leading-edge LAN/SAN fabrics using Cisco Nexus 7K, 5K, 2K, and 1000v switches, Cisco MDS directors, and UCS B-Series and C-Series compute, along with the full Cisco portfolio of routers and switches.
Well versed in all relevant technologies and compatible product sets from many vendors, from pre-sales engineering through post-sales design and implementation. Looking to focus these skill sets in a productive pre-sales position where expertise in the design and implementation of the proper products, features, and technologies is appreciated. Currently holding a hybrid role within CDW as both a pre-sales Field Solutions Architect and a post-sales Principal Consulting Engineer.
Current Specialties & Certifications:
- Cisco CCIE #6746 R/S (Active)
- Cisco Unified Computing (UCS) Design & Implementation
- Cisco Unified Computing (UCS) Technology Design & Support
- Cisco Unified Fabric Design & Implementation
- Cisco Storage Network Design & Implementation
- Cisco Data Center Networking Design & Implementation
- Cisco Data Center Storage Networking Design & Support
- Data Center Virtualization
- VCE/Vblock Architecture/Implementation
- NetApp Technologies & Architectures for DataONTAP 7, 8, and Clustered DataONTAP
- VMware Certified Professional - Data Center Virtualization (VCP-5)
- End to End QoS Policy design and implementation for VoIP, FCoE, and iSCSI
- UCS Director Data Center work flow automation & provisioning of virtualized environments
- Cisco CCIE Voice Written
- Cisco CCIE Data Center Written, and many more
Trying to set up a RAID mirror on FlexFlash SD cards but having issues. With FlexFlash enabled in the local disk policy, I can boot and see the cards with no issues. To set up the RAID side, I understand that for some reason we are to create a scrub policy with FlexFlash scrub enabled. Is the process to apply this: disassociate the service profile in order to activate the scrub policy, then re-associate the service profile and install the hypervisor, assuming the RAID mirror was created by the scrub policy? Please clarify this portion for me. I'm on a customer site setting up B200 M3s with 2x 16GB SD cards from Cisco installed and 2.2(2c) firmware. I can install and boot to them but am not clear on the RAID process via a scrub policy. Dave
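For reference, the scrub-policy piece can also be created from the UCS Manager CLI. This is a minimal sketch; the policy name FLEXFLASH-SCRUB is illustrative, so verify the exact options against your UCSM version:

```
# UCS Manager CLI - hedged sketch, policy name is illustrative
scope org /
  create scrub-policy FLEXFLASH-SCRUB
    set flexflash-scrub yes
    set disk-scrub no
    commit-buffer
```

As I understand it, the scrub only runs while the service profile is disassociated, which would match the disassociate / re-associate sequence described above.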
MUST we apply a QoS system class, since MTU can be set on the vNIC as well? Once I use a QoS system class, that means the CoS markings are also changing. I fear traffic leaving with, say, CoS 4 and jumbo MTU exits the system properly but returns on a path where QoS is rewritten, so that UCS wrongly classifies the traffic as default, where the MTU is only 1500. That is a valid concern, is it not? I have a customer with a Dell SAN where they used dedicated switches for iSCSI: no QoS, no markings, and default queuing. Now, adding a UCS to replace their old servers, I need jumbo MTU but no CoS markings, to ensure the marking remains the same to and from. Make sense? Dave
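On the upstream side, the jumbo-frame requirement is separate from CoS marking: on Nexus 5000/7000, MTU is set through a network-qos policy, and leaving classification alone means untagged iSCSI traffic stays in class-default. A standard NX-OS sketch (policy name and MTU value illustrative):

```
policy-map type network-qos JUMBO
  class type network-qos class-default
    mtu 9216
system qos
  service-policy type network-qos JUMBO
```

This raises MTU for the default class without touching CoS, so traffic that returns unmarked still lands in a jumbo-capable class.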
Rebooted an N5548UP after a software upgrade that didn't go as planned, and now it won't boot up, stating "mezz not found". Attempting to get console access to boot the kickstart and load the image again, but looking for direction if that doesn't work, and for what "mezz not found" points to. For background: a software upgrade attempt from 5.1.3 to 5.2.1 broke Fibre Channel connectivity, so a reboot to revert back to the original 5.1.3 is what got us stuck with a broken boot. Dave
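If the switch lands at the loader prompt, the usual recovery is to boot the kickstart image from the loader and then load the matching system image from the boot prompt. A hedged sketch using the 5.1.3 release from this thread; the exact filenames on your bootflash will differ:

```
loader> dir
loader> boot n5000-uk9-kickstart.5.1.3.N2.1b.bin
switch(boot)# load bootflash:n5000-uk9.5.1.3.N2.1b.bin
```

Both images must be the same version, or the system load will be refused.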
Upon rebooting N5548UPs, the FC ports have gone away. "slot 1 / port 29-32 type fc" still exists in the config, along with all other config and license info. I rebooted the second switch and saw the same thing; I manually re-entered "slot 1 / port 29-32 type fc" after the reboot, rebooted again, and the ports returned. I was originally trying an upgrade in a test/dev environment, but I need to research how and why the FC ports go away before doing the actual production environment. Went from 5.1.3.N2.1b to 5.2.1.N1.2 and then reverted back to 5.1.3, with no FC ports until I manually re-entered commands already in the running-config. Please assist; is this a bug? I've never seen it here, but the customer won't upgrade until we can figure out how to avoid it in production land. Dave
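For anyone hitting the same thing, the workaround described above amounts to re-applying the unified port configuration, saving it, and reloading; on the 5548UP, port-type changes only take effect after a reboot, so the save before the reload matters:

```
switch# configure terminal
switch(config)# slot 1
switch(config-slot)# port 29-32 type fc
switch(config-slot)# end
switch# copy running-config startup-config
switch# reload
```

Before the production attempt, it may be worth diffing `show startup-config` against `show running-config` to confirm the slot/port stanza actually made it into startup before the upgrade reload.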
Upgraded from 2.1(2b) and all was fine until the active FI rebooted; it appears to have never come back, as B is still subordinate. I cannot force the cluster for it to take the lead, I cannot access UCSM AT ALL, and I am having data-plane outages with servers down. I never saw A drop after acknowledging and allowing the reboot. Having an outage and cannot get into the FI, only into the subordinate, and it won't allow UCSM access while subordinate, and forcing the lead won't work via the CLI. d-
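For reference, these are the commands normally used from the subordinate to check state and take over the primary role; a hedged sketch, and in my case the force was being rejected:

```
# From the subordinate FI's CLI
connect local-mgmt
show cluster state
cluster force primary
```

If the force is refused, a console into the hung FI to see where it stopped booting is usually the next step before involving TAC.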
It appears the tail drops are now gone, yet performance is still high! I hate that this is the fix, but nonetheless I just wanted it fixed. Thanks for the input. I went ahead and put it back into layer 2 mode as well, since no routing is needed, but I will continue testing to see if the tail drops return under certain configs or circumstances. QoS is disabled, but this is still the first transfer I've had with zero drops.
Investigating a slow transmission issue at a site with 3 switches. A 200 Series switch and a C2960G hang off the SG300-10, which is currently running in layer 3 mode with QoS enabled, but not routing. There is currently no VoIP, and all classification and marking has been disabled.

Doing a data transfer exceeding 50MB/s, I get approximately 300 tail drops per file transfer. For testing I am using a 2GB file, sent from a PC directly connected to the SG300 on VLAN 1 to a server hanging off the other switches; so layer 2 switched, without any routing. If I send to a fast server where I exceed 50MB/s, the tail drops start rising fast. If I send to a much slower device at approximately 1/3 the speed, no tail drops occur.

I have enabled QoS, disabled QoS, and while enabled changed the queue values in each direction, and in no way can I get the tail drops to go away; yet the SG200-18 never drops a single packet, nor does the 2960G compact model, only this SG300. I was going to leave QoS disabled, since we're currently not using markings, but it changes nothing. I have tried giving as much bandwidth to queue 1 as possible, with no change, since default QoS (CoS 0) goes there.

I am also curious that the default QoS gives the highest bandwidth by default to queue 4, the highest-priority queue, and the smallest to queue 1. This is backwards from what I am used to in the enterprise IOS QoS/VoIP world. Do these operate differently, or are the values just reversed by default for some reason?

Need help, as word is to replace this switch today for not keeping up, yet the statistics on its data sheet (20Gbps throughput, 14.88Mpps) show it should far exceed what's needed and what the other devices can do. These are all 1500-byte frames, no jumbo, pretty generic; think of a default switch with QoS basic mode enabled. Any tweaking, or disabling QoS entirely, still sees packet tail drops between these devices at 50MB/s.
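For completeness, what I tried on the SG300 amounted to toggling the QoS mode and watching the counters; a hedged sketch of the CLI side, since the Sx300 syntax differs from Catalyst IOS and should be verified against the Sx300 administration guide:

```
! Sx300 CLI - hedged sketch, verify syntax for your firmware
configure
 qos basic        ! basic trust mode; alternatively "no qos" disables QoS entirely
 exit
show qos
```

Neither mode changed the drop behavior in my testing, which is what makes this look like a per-port buffer limitation rather than a queueing policy problem.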
Won't argue; I'm taking time with a customer that I don't have, to try and help you. Typically, when the "Unable to install" message is up, it's because the MS installation CD is not currently mapped in the Virtual Media, due to the drivers CD being mapped. Map the drivers CD and install the Cisco storage driver, since we're using the VIC cards and booting from SAN. Once the fnic driver is installed, I can see the LUN, but it says it cannot install to it. At this time I typically add a few more drivers to save time (Cisco network, LSI storage, and chipset), but you don't have to.

With the error showing, go to Virtual Media, uncheck the mapped driver.iso file, and check the box to map the MS installation CD/DVD again. Returning to the server installation, the error still shows. Now click Refresh below the list of drives; once that refresh has completed, the error is gone, but a new one shows stating the LUN is too small and needs to be increased to 150GB. This size is because of the page file required by our memory amount; it will change based upon your blade's memory, per Microsoft's documented amounts. However, we still wanted 40GB for the bootable LUN, so we clicked Next despite the error, and it continues and starts the installation.

This is our experience, and at this customer site I have now done 22 servers this week, some 2008 and some 2012, and all of them reacted the same, which is consistent with my past installs.
Not true. I have two B200 M3 blades in front of me, one with 128GB of memory, the other with 256GB. Using a 40GB LUN I get the error, but clicking Next it installs fine to the 40GB LUN. Ensure that when you unmount the drivers ISO, you check the box to mount the WIN2012 install CD/DVD ISO again, THEN CLICK REFRESH BEFORE moving on. This is when you should see the LUN go from "unable to install" to needing to be 150GB; ignore that and click Next, and it should continue to install. The same scenario was done here yesterday with the 22 remaining blades and was consistent across the board for both 2008 and 2012 R2 builds.
I am onsite at a customer site this week, coaching their team in deploying both 2008 and 2012 servers bare metal on a handful of blades (B200 M3s), and we are running into this each time. Using an EMC VNX for boot from SAN, both OSs throw the error requiring a 150GB LUN to install, or 147xxx to be specific; each server gets the exact same 147xxx, too. I walked through two at the front of the classroom for the customer because of this, and while I also saw the error when presented a 40GB LUN, I ignored it by hitting Next and it still allowed the installation. They claim only one of all the others rejected the Next button and actually required them to size the LUN up to 150GB, but I didn't see that and highly doubt it, since it's the ONLY one I didn't see; all the others did install, although the error was present. Not sure why. I installed the chipset, Cisco network, Cisco storage, and LSI storage drivers during those installations just to be safe, yet I saw my LUN on all available paths after installing just the Cisco storage fnic driver, with one active path. So I'm not sure if MPIO plays a part in that or not.
I have 2 data centers, each with a pair of N7Ks I just installed for a customer. They are in the process of wanting to go layer 3 on their WAN as they add sites, but they currently have some layer 2 dependencies between the data center sites due to VMware and SAN replication, as we all know and love. While they evaluate options, I have the new 7Ks in place in the meantime and am having issues with a VLAN spanning the dark fiber.

First, the dark fiber connects 7 sites in a series of point-to-point links, and VLAN 50 is the subnet for the "WAN", we'll call it; the sites are close, local, literally LAN links, just 7 different buildings, each with a core. Two of the sites are data centers, each with VLANs 124-126 for servers, vMotion, and SAN replication. Today, the two ten-gig links leaving each Nexus 7K for their "ring", as they call it, are trunks rather than layer 3 ports, using VLAN 50 for the routed subnet, with VLANs 124-126 currently allowed on the trunk along with 50. Site one has HSRP .1 running between the .2 and .3 Nexus router pair; the other site has nothing but layer 2.

The trouble is that both sites have people using VLAN 124 servers, and some of them cannot connect, so I want to validate using the same IPs in each site, only for the VLANs that are layer 2 across the fiber between data centers. A doc I read says you can use the same SVI IPs, as well as the same FHRP gateway address, so long as an HSRP ACL is in place blocking them. My concern is putting that on VLAN 50, when it does HSRP locally across the ISL between the 7Ks. I used a separate ISL for VLAN 50 and VLAN 3000 (local routing), due to being layer 3 with no vPC to the MetroE ring, to avoid using the peer-link, per documentation.

Can I indeed safely duplicate the IPs of the HSRP and SVIs in the second data center, to allow them to use .1 from there? I ask only because this is in production and running now, with just some connection issues on that server VLAN because of this. I've tried everything else so far and am begging for help. Thanks, Dave
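The approach described in Cisco's DCI design guidance is to filter HSRP hellos where they would cross the interconnect, so each data center keeps a local active gateway. A hedged NX-OS sketch; the interface and ACL names are illustrative, and HSRPv1 hellos go to 224.0.0.2 UDP 1985 while HSRPv2 uses 224.0.0.102:

```
ip access-list DENY-HSRP
  10 deny udp any 224.0.0.2/32 eq 1985
  20 deny udp any 224.0.0.102/32 eq 1985
  30 permit ip any any

interface Ethernet1/1
  ! DCI-facing trunk toward the ring (interface is illustrative)
  ip port access-group DENY-HSRP in
```

Note the caveat: a port ACL on the trunk blocks HSRP for every VLAN on that link, including VLAN 50, so this only works if VLAN 50's HSRP peering stays on the local ISL and never needs to cross the ring.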
Having poor performance on uplinks to a single-attached device, much like I have seen on dual-attached devices when the load-balancing algorithm is incorrect. Dropping 1-2 pings per every 5 sent, and in an attempt to TFTP an image, every other packet is dropped. Seeing this on a lot of uplinks, so I put the image on the VSS core and set it up as a TFTP server to upgrade the directly connected switches: same thing, every other packet is dropped. It makes transfers slow, but worse, it's sometimes hard to log in. Ports are simply set as trunk mode, very basic; I then tried some queuing and QoS parameters to no avail. Seeing this across several platforms while trying to get older 12.2.25 devices to 12.2.55 for the 3560s and 3750s, and on 6500s running 15.0(1)SY3 on SUP2Ts in a VSS pair of 6513-E chassis. Or is this pushing the envelope for the speed of a 3560 on a gig uplink to the core? I'm more concerned about the VSS setup with single-attached devices, but the only call-outs I recall are for split-brain type scenarios. All of this is happening even when terminating on the VSS core. Traffic THROUGH the device appears fine; traffic terminating or originating on it (management) appears affected.
3560-5B03-5#sh int g0/1
GigabitEthernet0/1 is up, line protocol is up (connected)
  Hardware is Gigabit Ethernet, address is 0016.9da9.9381 (bia 0016.9da9.9381)
  MTU 1500 bytes, BW 1000000 Kbit, DLY 10 usec,
     reliability 255/255, txload 1/255, rxload 1/255
  Encapsulation ARPA, loopback not set
  Keepalive not set
  Full-duplex, 1000Mb/s, link type is auto, media type is 1000BaseSX SFP
  input flow-control is off, output flow-control is unsupported
  ARP type: ARPA, ARP Timeout 04:00:00
  Last input 00:00:00, output 00:00:00, output hang never
  Last clearing of "show interface" counters never
  Input queue: 0/75/4/0 (size/max/drops/flushes); Total output drops: 0
  Queueing strategy: fifo
  Output queue: 0/40 (size/max)
  5 minute input rate 1899000 bits/sec, 547 packets/sec
  5 minute output rate 14000 bits/sec, 16 packets/sec
     1791918816 packets input, 2275851657 bytes, 0 no buffer
     Received 413589408 broadcasts (0 multicast)
     0 runts, 0 giants, 0 throttles
     10 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored
     0 watchdog, 1933970676 multicast, 0 pause input
     0 input packets with dribble condition detected
     372990403 packets output, 4077728015 bytes, 0 underruns
     0 output errors, 0 collisions, 1 interface resets
     0 babbles, 0 late collision, 0 deferred
     0 lost carrier, 0 no carrier, 0 PAUSE output
     0 output buffer failures, 0 output buffers swapped out
I replaced the universal image with the 12.2(55)SE8 non-universal (IP Base, in most instances) image for this customer and a few others. In both customer networks the high CPU issue immediately went away after applying the new image, with no loss in features whatsoever. As stated above, it's a known, documented issue, and the links posted will likely take you there to review. I had read those, but I had already replaced my image and watched CPU fall; it hasn't gone above 11% since, both on standalone switches and in stacks of up to 7 switches so far. I haven't had an issue since I replaced the images the very night I posted that, about two months ago, so I suggest this image and version, which is the very base used for the 3850 IOS-XE images compiled for FCS as 15.x.