ESXi Arm Edition on Raspberry Pi

Finally, the ESXi on ARM image has been released! I was begging Andrei Warkentin for it while preparing for VMworld 2019, because I wanted to keep the promise of "ANY DEVICE" in our demo during breakout session HBO3559BE. Let me pick up this task from my backlog now.

Bill of Materials, Basic Setup

Following the lead of William Lam, I ordered a Raspberry Pi 4b kit from my favorite Swiss online shop (plus a micro-HDMI to HDMI cable). I wanted to minimize costs, so I did not buy the whole BOM as proposed by William. The fling itself is very well documented; other valuable input can be found on the blog of Brandon Lee. With all these great how-tos available, I can focus on the deviations in my setup and the bigger picture of how ESXi on ARM could be used.

I reduced the material to the max: I found some old USB 3.0 flash drives (64 and 128 GB), did not buy a fan, and felt well equipped to plug in the cables. Setting up the Pi and installing ESXi was very easy; I had it running within 20 minutes. When I logged in with the vSphere Client, I realized that only the ESXi OS flash drive showed up. I had to dig a little into William's awesome material and found the missing piece:

Step 2 – We need to disable the USB Arbitrator service so that ESXi can see the two USB storage devices. To do so, SSH to the rPI and run the following commands:

/etc/init.d/usbarbitrator stop
chkconfig usbarbitrator off
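After stopping the arbitrator, the second drive should become visible without a reboot. A minimal sketch of how I would verify this from the same SSH session, assuming the stock ESXi on ARM fling image (device names will differ on your host):

```shell
# Rescan all storage adapters so the freshly released USB drive is detected
esxcli storage core adapter rescan --all

# List the detected storage devices; the USB flash drive should now appear
esxcli storage core device list | grep -i -B1 -A3 "usb"
```

The rescan saves a reboot; the device list is the quickest way to confirm ESXi actually claimed the drive before you try to create a datastore on it.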

Cool, that's an easy one, so I could now use my 128 GB flash drive as a datastore. Later it turned out that this memory stick was not stable enough for concurrent read and write operations; it did not seem to be a problem of speed. The stick was suddenly listed with a capacity of 0 bytes and did not work anymore. So I grabbed an external hard disk from my collection of rarely used gadgets and tried again with it. This worked well and was stable. Please check out William's comments on power consumption and heat as well. I fortunately got some swag from the VMware Customer Technical Advisory Board (CTAB), and in the end my ARM hardware looked very professional:

Adding the host to my home lab (4 x Intel NUC NUC7I7BNH, vCenter 7.0.1 standard installation, powered by my vExpert license – no NSX-T for the first try), I got a "general system error occurred: Unable to push signed certificate to host" error, which is already documented in the troubleshooting section. I had to add the NTP servers and start the NTP service; a reboot was needed, and then it worked like a charm:
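For anyone who prefers the shell over the vSphere Client, a sketch of the same NTP fix from an SSH session on the host; pool.ntp.org is a placeholder, substitute your preferred NTP servers:

```shell
# Add an NTP server to the host configuration (pool.ntp.org is a placeholder)
echo "server pool.ntp.org" >> /etc/ntp.conf

# Start the NTP daemon and make it start automatically on boot
/etc/init.d/ntpd start
chkconfig ntpd on
```

Once the host clock is in sync, retry adding the host to vCenter; the certificate push error came from clock skew in my case.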

The screenshot above was taken with the 128 GB flash drive before I generated some load; the following screenshot, from the vSphere Client on the ARM host, shows my external hard disk:

As a basic smoke test, I created a content library from vCenter locally on the ARM host's external hard disk, uploaded an Ubuntu install ISO, created a VM, and cloned it back into the content library as a template. Everything worked as on any other ESXi host, and here you can see the advantage of this approach: you can leverage all the monitoring and control features of vCenter. These are the monitoring diagrams for deploying a 4 GB Ubuntu VM with a 40 GB disk. The first half is the installation process from the ISO; the second half is after SSHing in and triggering OS updates.
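The same smoke test can be scripted with govc, the CLI from the govmomi project, instead of clicking through the vSphere Client. A sketch, assuming a datastore named arm-ds; the vCenter credentials, library name and ISO path are all placeholders:

```shell
# Placeholder credentials for the lab vCenter
export GOVC_URL='administrator@vsphere.local:password@vcenter.lab.local'
export GOVC_INSECURE=1

# Create a content library backed by the ARM host's datastore (name is a placeholder)
govc library.create -ds=arm-ds arm-library

# Upload the Ubuntu install ISO into the library
govc library.import arm-library ./ubuntu-20.04.1-live-server-arm64.iso

# List the library items to confirm the upload
govc library.ls arm-library/
```

Scripting it this way also makes the smoke test repeatable when you rebuild the Pi.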

As the target use case for ESXi on ARM is said to be the "far edge", I chose a very particular setup: a weak network connection of at most 70 Mbps over a power line adapter. This was good enough to let vCenter manage the far edge ESXi host without problems.

Now that we found love, what are we gonna do with it?

Checking the limitations of the current implementation, I was wondering where to go from here. William documents a use case for the Pi as an inexpensive vSAN witness – very interesting for home labs. GPIO and other serial interfaces are not supported, so you cannot use the Raspberry Pi ecosystem to control sensors or other devices directly. I think this really makes sense if we define it as a far edge device. Unfortunately, Wi-Fi is not supported yet either.

What the FAQ is the Edge vs. the Far Edge?

Source: What the FAQ is the Edge vs. the Far Edge?

VMware is promoting this vision:

VMware Telco Edge Reference Architecture

Source: VMware Telco Edge Reference Architecture

Telco Edge Conceptual Architecture

Source: Telco Edge Conceptual Architecture

According to these concepts, I assume the interconnection of data center, near edge and far edge runs over an internal, highly secured network. The far edge component could be viewed as an intelligent WLAN access point with additional authentication, authorization and onboarding features, as well as preprocessing and management of attached IoT devices. The advantage of ESXi on ARM is the notion of a consistent infrastructure across these layers; the disadvantage could be the overhead if you do not need all these management functions on the far edge. ARM is supported directly by many Linux distributions, and Kubernetes/Docker could deliver a standard runtime environment as well. Maybe ESXi on ARM will be enabled for optimized Kubernetes support too, which could change these considerations quite a bit. As the far edge device might not sit in a secured data center but within the end user's reach, the connection back to the critical infrastructure of your data center should be secured at least with a VPN – something I have no idea how to do from ESXi yet.

For now, I leave the architectural view on the target use case here – I need to have additional discussions – and continue with the task from my backlog.

Use Case VMworld 2019: control Raspberry Pi from vRAC

Now I simply use vRealize Automation Cloud (the SaaS offering, vRAC) as the Cloud Management Platform for the core data center, my little NUCs as the near edge, and the new kid in town as the far edge. Believe it or not, it takes only a few minutes to set up this additional layer.

You just have to create a new cloud account for the vCenter, download and install the cloud proxy to enable communication from the local vCenter to the SaaS installation, create some tags to control placement on ARM, and create a mapping for the templates to use. All these steps are documented for vRAC.

Then I created a simple blueprint to do a smoke test:
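The blueprint boils down to a few lines of Cloud Assembly YAML along these lines – a sketch, where "ubuntu-20.04" and "small" are placeholder names for my image and flavor mappings, and the "ARM" constraint matches the tag set on the Raspberry Pi compute resource:

```yaml
# Minimal Cloud Assembly blueprint sketch; image, flavor and tag names
# are placeholders from my own mappings, not fixed values.
formatVersion: 1
inputs: {}
resources:
  Cloud_vSphere_Machine_1:
    type: Cloud.vSphere.Machine
    properties:
      image: ubuntu-20.04
      flavor: small
      constraints:
        - tag: 'ARM'
```

The tag constraint is what steers the placement decision you see in the provisioning log below.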

Now just deploy the blueprint and check the execution log:

You can see from the provisioning evaluation log that the four Intel NUCs were skipped because they did not have the tag "ARM"; only the new family member was selected as a potential target, and the deployment was started from cloud.vmware.com.

So the core data center is working, let’s look at the near edge:

The "Cloud_vSphere_Machine-xxx" VM is popping up, created from the Ubuntu 20.04.1 template. Now let's check the far edge directly:

Here it is, ready to rumble. Smoke test successfully completed.

Planning a PoC

Luckily, I got a Raspberry Pi 3 last year at VMworld. This device will be the IoT device with which I plan to complete the whole use case. I hope I will find some spare time to create something useful for a demo of the whole chain, including automated builds on Code Stream and update scenarios for the components running on the far edge and on the IoT device.

Stay tuned!

Kudos to Andrei Warkentin and Jakub Bartisek for offering me some insights into Project Monterey and holding my hand while the fling was not out yet. And of course to William Lam for sharing so much valuable content.
