Thursday, January 23, 2014

vCenter and Nexus 1000V Extension Key Mismatch

More troubleshooting notes: after I removed the VEMs and then re-installed the VSMs, I encountered another error. The vCenter extension key for the Nexus DVS did not match the VSM extension key.


I then appeared to have an orphaned DVS.  I resolved this by following a series of steps to match up the VSM and vCenter extension keys.  Note that when you perform these steps and reconnect the VSM to the DVS, the VSM will sync its port profile configuration with the DVS.  If you have the running config backed up, now would be a good time to load that config into the VSM.  If it's a fresh VSM, you'll obviously lose all of the port profiles.
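If you need to take that backup first, the standard NX-OS copy commands work on the VSM (the bootflash filename here is just an example):

copy running-config bootflash:n1kv-backup.cfg

And later, to load it back into the running config after you've reconnected:

copy bootflash:n1kv-backup.cfg running-config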


  1. SSH into the Nexus 1000V VSM.  Issue a show svs connections to get the name of the SVS connection to the vCenter server.
  2. Enter config mode.  Type in svs connection [vcenter connection name]
  3. Type in no connect
  4. Exit back into conf t
  5. Locate the extension key for the orphaned DVS in vCenter.  This is on the summary page for the DVS; in my case, it was Cisco_Nexus_1000V_1351344349
  6. In config mode on the Nexus 1000V, enter vmware vc extension-key Cisco_Nexus_1000V_1351344349 (this value will obviously vary)
  7. Save the config and reboot the Nexus.
  8. Delete the current vCenter extension key.  Browse to https://vcenterIP/mob/?moid=ExtensionManager
  9. You should see a key ID on this page that is different from the key on the orphaned DVS.  Copy this key so you can unregister it.
  10. Click UnregisterExtension.  Copy and paste in the key from step 9, then click Invoke Method.  It should say void if this is successful.
  11. Restart vCenter.
  12. Now you need to get the extension key from the Nexus 1000V for the DVS you are recovering and import it into vCenter.  Browse to http://[VSM IP address] and right-click to save the Cisco Nexus 1000V Extension XML link.
  13. Open the vSphere client.  Click Plug-ins > Manage Plug-ins.  Right-click in the window, select New Plug-in, and upload the XML file.
  14. Go back to the Nexus 1000V.  In config mode, type svs connection [vcenter connection name], then type connect.  The connection to vCenter should come up now that the extension key matches on both sides.  If you browse back to the vCenter extensions page, you'll now see the correct extension key ID.
You can also confirm the connection from the Nexus 1000V side with show svs connections.  You should now have regained control over the previously orphaned DVS, and the VSM and DVS should have synced configurations.
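For reference, here's roughly what the whole VSM side of this procedure looks like in one session (the hostname, connection name, and key are placeholders based on the example above):

n1000v# show svs connections
n1000v# configure terminal
n1000v(config)# svs connection vcenter
n1000v(config-svs-conn)# no connect
n1000v(config-svs-conn)# exit
n1000v(config)# vmware vc extension-key Cisco_Nexus_1000V_1351344349
n1000v(config)# exit
n1000v# copy running-config startup-config
n1000v# reload

Then, after the vCenter-side key swap in steps 8 through 13:

n1000v# configure terminal
n1000v(config)# svs connection vcenter
n1000v(config-svs-conn)# connect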

At this point, if you want to entirely remove the DVS and start from scratch by pulling the VEM software off ESXi, deleting the VSMs, and starting over fresh, you'll want to get the DVS out first.

On the Nexus 1000V, go into conf t, then svs connection [vcenter connection name], and enter no vmware dvs.  This will delete the DVS from the VMware side.
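In other words, the session looks something like this (the prompt and connection name are placeholders):

n1000v# configure terminal
n1000v(config)# svs connection vcenter
n1000v(config-svs-conn)# no vmware dvs

Be aware that this removes the DVS object from vCenter, so only run it when you're truly starting over.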

vSphere and Nexus 1000V: Removing Orphaned VEMs

A few days ago, I had been working in my lab on setting up the Cisco Nexus 1000V distributed virtual switch.  After finishing with my lab, I decided it was time to remove the Nexus 1000V; it doesn't work too well when you can only use it on one host.  One of my servers has disk drives; the other is diskless and uses a SAN.  I only have two NICs, one of which is used by the SAN, so it appears impossible to migrate over to the Nexus with only one NIC without vCenter losing connectivity during the process.  I found this out the hard way and had to reset my virtual networking a few times using the DCUI.

So, I deleted my VSM VMs before removing the VEM modules from the hosts.  I learned that this is definitely not the correct procedure for removing the Nexus 1000V.  The VEMs can't be deleted from the vSphere inventory, and re-installing the VSMs does not appear to be possible through the Cisco installer app, as it detects the VEM presence on the host!  This was an incredibly frustrating process until I learned how to forcefully remove the VEM from the host.  While I was following the VMware KBs on the subject, I kept receiving an error that a Cisco tar file was active, that the VIB could not be removed, and that a reboot was required.  Additionally, as I found out later, I ended up with an orphaned DVS object in vCenter.  Deleting the VSMs first is not the way to go.

So, this procedure is for removing orphaned VEMs from ESXi hosts.  It requires use of the ESXi console.  First, make sure that all physical adapters are migrated off the orphaned VEM and that there are no virtual machines still attached to it.  Put the host into maintenance mode.  Open the DCUI on the ESXi host, and enable the ESXi shell under troubleshooting options.  Once this is enabled, press ALT-F1 to open the console.
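As a side note, if the host isn't already in maintenance mode by the time you're at the console, you can also set it from the shell itself; a quick sketch using esxcli:

esxcli system maintenanceMode set --enable true

You can confirm the state afterward with esxcli system maintenanceMode get.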

First, check the VEM status by entering the command vem status.
If you see that it is running, enter vem stop.
Now let's verify that there are no more VEM processes running and using system resources.  If there are, the next series of commands will fail, and you'll get to reboot your ESXi host and start all over again!

By making use of the lsof command, you can see if there are any VEM processes holding open files or network resources.  If you don't see any output, there are no more VEM processes active and using system file resources.  If you do see any, it's time to force-kill those processes.

lsof | grep vem

kill -9 [vem pid]

Now, enter the command esxcli software vib list | grep Cisco
This command lists the Cisco VIBs installed on the host for the VEM.  When you have the VIB name from the output, enter the next command to remove it.

esxcli software vib remove -n [vib name]
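If I recall correctly, esxcli also accepts a --dry-run flag on this command, which lets you preview the removal transaction without actually changing anything:

esxcli software vib remove -n [vib name] --dry-run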

This could take some time to complete, and it will look like the screen has hung, but it should finish successfully.  Reboot the host and bring it out of maintenance mode.  The VEM should no longer be installed on this specific host.  After this process, I was able to run the Cisco installer app for the VSM and re-install the VSM and VEMs.  However, you will be greeted with a new problem: the DVS object in vCenter will not connect with the new VSM.  See my next post on how to clear up this issue!

As a note, I am using vSphere 5.5 with Nexus 1000V version 4.2.1.SV2.2.1.a, and vCenter is running as the VCSA.


Wednesday, January 1, 2014

Lab Overhaul

It's been some time since I last posted here; however, in the time that has passed, I have finished overhauling my lab.  I have now set up a vSphere 5.5 cluster utilizing two diskless Dell C1100 servers, each with 72GB of RAM and dual quad-core Xeon processors.  The servers boot ESXi via flash drives.  The servers are tied to a SAN via iSCSI with a total of 3.6TB of usable storage (split into smaller LUNs for use as datastores) in a RAID 10 configuration, giving me plenty of storage room for a lab with decent performance.  I also finally bought a proper rack to install all of my equipment in!

I wanted to experiment more with network virtualization, so I obtained the Cisco Nexus 1000V Essentials virtual switches (these are free; you just need a valid CCO login) and replaced the standard vSwitches.  The servers' data NICs are set up as trunks to my 3550 L3 switches and from there route through the rest of my lab network.  The servers have dedicated gigabit uplinks to the SAN.
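For reference, the switch side of those trunks is nothing fancy; on a 3550 it looks something like this (the interface and description are placeholders):

interface GigabitEthernet0/1
 description ESXi host data uplink
 switchport trunk encapsulation dot1q
 switchport mode trunk

The 3550 makes you set the trunk encapsulation explicitly before it will accept switchport mode trunk.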

All of my routing practice will now be done in a GNS3 VM that I have built; the physical routers I have are just too outdated now to really be of much use.  I'm probably going to sell all of my routers and buy myself a nice gigabit Cisco L3 switch.  Anyone want to donate a Nexus? :)

Future Posts

At this point, I've mostly been studying for classes as part of my bachelor's degree program (and I already have all of the Cisco courses completed), so I haven't put much emphasis on knocking out another Cisco cert.  But I still want to keep posting my work and labs here, so I will continue to do so; it just may not necessarily be Cisco related.

Lately I've been very interested in security as it applies to virtual infrastructure, so the content of my future posts will likely be oriented in that area: trying out different deployments for securing and monitoring virtual environments, such as Cisco's Virtual Security Gateway, VMware's vShield, etc.