Airvine WaveTunnel 60 GHz Indoor Backhaul

Airvine comes to life at Networking Field Day 23! They’ve been operating for quite some time, but it was at NFD23 that we saw their big idea move toward the public stage.

Their big idea is pretty simple, but it won’t be an easy execution. How do you get a network backbone to places where wiring isn’t feasible, or even possible? In some scenarios you could force the issue of running wires, but if you can get nearly the same result wirelessly, then why not do that?

The Airvine WaveTunnel product will be positioned to do exactly that: deliver an indoor uplink with 60 GHz radios in a dual-ring configuration. It’s not mesh Wi-Fi, and in fact it’s barely Wi-Fi at all. WaveTunnel may have more in common with Ethernet and SONET than anything else. This indoor wireless system has an open range of 100 m and should easily beam through commercial drywall. I can think of some manufacturing floors where I needed to reach remote IDFs and this would’ve made a lot of sense. Airvine should also be able to build a use case for sports arenas.

Airvine still has challenges to solve before first customer ship. They have an RF and Ethernet stack to complete, and vendor interoperability could become a sticking point. The product is going to have to interact with spanning tree at some point, and that’s just one place things will get interesting. We’ll find out those details down the road.

Beaming 60 GHz through materials is more of a “Science!” problem than a networking problem. I think Airvine needs to invest in some materials-science quality assurance and publish many real-world test cases. The customer education cycle will need real-world examples to quickly get past “Can it beam through X?” Otherwise their entire sales cycle will be spent talking about concrete, metal, and water.
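To put the physics in perspective, here’s a back-of-envelope free-space path loss calculation (my own illustration, not Airvine’s numbers). Over the same distance, 60 GHz costs you roughly 21 dB more path loss than a familiar 5 GHz Wi-Fi band, before any wall attenuation is even counted:

```python
import math

def fspl_db(distance_m: float, freq_hz: float) -> float:
    """Free-space path loss in dB: 20 * log10(4 * pi * d * f / c)."""
    c = 3e8  # speed of light, m/s
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / c)

# Loss over the 100 m open range at 60 GHz vs. 5 GHz
print(round(fspl_db(100, 60e9), 1))  # ~108.0 dB
print(round(fspl_db(100, 5e9), 1))   # ~86.4 dB
```

The 60 GHz band makes up for this with very wide channels and highly directional antennas, which is exactly why every additional wall material matters so much.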

It’s certainly an interesting road ahead for Airvine and I’m looking forward to following their journey.


Getting Lost at Networking Field Day

This year is certainly one for the record books, and especially so on the personal front. 2020 has been full of twists, turns, overuse of the word “unprecedented”, crisis management, and modern-day urban survival.

What’s in store around the corner? Networking Field Day 23! I’m super excited this go-around for #NFD23 and the opportunity to be fully engaged. It’s been easy to get lost this year with everything going on around us. I’m hoping to get my brain back into high gear, and NFD23 is the super-fuel rocket boost out-of-this-world content explosion we all need. Well, I need it anyway; your mileage may vary.

So what in the world does this have to do with getting lost? A short while ago I updated the title of this blog to Foggy Bytes. It’s somewhat odd, but a play on a few things we deal with in the context-switching world of networking, information technology, and our brain-fog aftermath. Human context switching is a productivity killer, and generally speaking we’re having to do it on an hourly basis. We’re dealing with new projects, support issues, business changes, technology changes, and I overheard you shout “network automation” on the other side of this screen. So we’re left moving bytes in and bytes out through a constant fog, if you will. I also relate to the blog title with a backdrop of imposter syndrome. We get the bytes moved, but we’re not exactly sure of all the steps we took to get there or why we were chosen.

I’m also happy to say that my former “no media” policy has expired! Yay! I can have an opinion again! Not that I ever stopped having opinions, but now I can blog them again. If you’re changing jobs and you’re into any sort of content creation, even casually, be sure to understand any policies you may be subject to at your new position. I may even take some time to revisit NFD21 content and do some “where are they now” hot takes.

Networking Field Day 23 has an impressive lineup and I’m looking forward to hearing from everyone.

NFD23 is like a fresh Computer Shopper, so be sure to follow along with us September 29–October 2, 2020.

Subscribe to the channel!

Follow me on Twitter for live hot takes during #NFD23

Cisco Collab and Open VMware Tools

Hi! I’m back with a quick take on Cisco Collaboration UCOS 12.5 and switching to open VMware Tools.

There is a long and bumpy history with native VMware Tools on Cisco UCOS collaboration applications. If you have had your UC solution for many years, then it is likely you have bumped into issues along the way. Keeping the native VMware Tools installed, upgraded, and working can be a challenge during upgrades. So I was very excited that in 12.5 you now have the option to move straight to open VMware Tools.

What are these Open VMware Tools? In short – the better VMware Tools.  The open-vm-tools package is 100% supported by both Cisco and VMware. Moving to open-vm-tools will not take you out of any ‘compliance’ or get you yelled at by TAC.

The first advantage is that you’re de-coupling the ESXi version of the tools from the UCOS application. This means you’ll no longer need the “Check and upgrade VMware Tools before each power on” setting on the guest machine. The open-vm-tools package is built into CentOS 6/7 (and many other distributions) by default, so you’ll no longer rely on the ESXi side of the house to get this right. For example, if the systems team upgrades ESXi underneath your collaboration application, you won’t have the additional worry of keeping VMware Tools in sync.

For Cisco this means they simply ship the open-vm-tools package in the CentOS 6/7-based UCOS and can keep it current during application maintenance. I think it’s a win-win for both sides.

So how do we get there? EASY – but it does require a REBOOT. WARNING! You need to ensure that the native VMware Tools are operational prior to switching to open-vm-tools. You can check the status of VMware Tools from ESXi; if there is a warning about the version or operating system selection, you need to fix that first. Also, be sure you’re working with the latest patched version of the UCOS software release if you’re doing VMware Tools maintenance. There are some bugs and field notices that may get you stuck. Patch stuff!

This is primarily geared toward 12.5, so as of this post I’m assuming you’re working with 12.5 SU2. VMware will give you the installed and running status for the tools.

Check that you have native VMware Tools operational:

admin:utils vmtools status

Type: native VMware Tools

Now, prior to making the switch and the reboot, make sure you or someone else has un-checked the VM Option “Check and upgrade VMware Tools before each power on”.

Move the system to permissive mode. This relaxes SELinux (think setenforce). This isn’t called out as required in all of the documentation, but I’d certainly put it in the “recommended” category when making this switch. You’ll easily be able to move the system back to enforcing afterwards.

admin:utils os secure status
OS Security status: enabled
Current mode: enforcing

admin:utils os secure permissive
OS security mode changed to Permissive

Make the switch to the open-vm-tools package, which will remove the native tools:

admin:utils vmtools switch open

This will uninstall the native VMware Tools and install the open-vm-tools.
The system will be rebooted automatically.
Do you want to proceed (yes/no) ? yes

The UCOS server will reboot and switch out the native tools for the open-vm-tools package. vSphere will now show the tools as installed and running, but the status will read “VMware Tools is not managed by vSphere”. You can also check again at the UCOS CLI:

admin:utils vmtools status

Type: open-vm-tools

Don’t forget to change back to enforcing!

admin:utils os secure enforce

In conclusion, I think it was a good move by Cisco to expose the VMware Tools swap natively through the UCOS CLI. Getting away from the native tools and having them managed within CentOS is great.

I believe this will remain ‘Optional’ for quite a while, but it’s possible this will become a required change in CSR14.

For more information from VMware about open-vm-tools, check out the repo.

Hit me up @Warcop on Twitter – Thanks!


Extra VMware Tools troubleshooting.

Got root? (Recovery ISO | Alt+F2) Make sure you’re working in the active partition by checking timestamps on /mnt/part1 or /mnt/part2.

/usr/bin/ will remove the tools

If you’re in a situation where you don’t have VMware tools in the active partition, then copy them from the inactive side. It’s likely they’re still over there. Find ‘vmware-tools’ directories on the inactive side and copy them over so that you can run the /usr/bin scripts.

Did you get stuck on a reboot with nothing on the console but “Probing EDD”? Welcome to Field Notice FN70379, where you’ll need to generate a new initramfs with “/usr/bin/ -d”.