Building a microcluster

Matt Bernhard
5 min read · Jan 31, 2021


I frequently have a need to do fairly large computations™. In the past I’ve relied on single machines with many cores, for example when we needed to simulate 10 million elections, each with a million cast ballots, across 500 different margins of victory. However, even on a beefy server this took a long time (and resulted in me accidentally running up a $5,000 AWS bill). Ideally, I’d like to have my own compute machines to avoid the spurious AWS bills, and even if my local compute isn’t enough, it’s at least useful for test runs to make sure my code does what it should.
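To give a sense of scale, here’s a deliberately tiny, single-machine sketch of the kind of simulation I mean. The sampling model below is a stand-in of my own invention, not the actual code, but it shows why the workload is both heavy and embarrassingly parallel: every (margin, trial) pair is independent of every other.

```python
import numpy as np

def simulate_election(margin, n_ballots, rng):
    """Toy model: draw ballots for one contest with the given reported margin,
    then check whether a simple 1% audit sample agrees with the reported winner."""
    winner_share = 0.5 + margin / 2
    ballots = rng.random(n_ballots) < winner_share        # True = ballot for the reported winner
    sample = rng.choice(ballots, size=n_ballots // 100)   # a 1% sample of the cast ballots
    return sample.mean() > 0.5

rng = np.random.default_rng(0)
margins = np.linspace(0.005, 0.25, 500)    # 500 margins of victory

# Scaled way down: the real runs use millions of trials and a million ballots each.
agreement = [sum(simulate_election(m, n_ballots=10_000, rng=rng) for _ in range(100))
             for m in margins]
```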

Recently my colleague Dan Wallach built a really cool implementation of VAULT, end-to-end encryption for use in post-election ballot-comparison audits. Doing encryption over a large set of ballots is not that different from running election simulations: both involve vectorizing ballots and doing computationally heavy things like generating randomness and crunching a lot of math. Since post-election audits are fairly time-sensitive, it would be great to do all of the VAULT encryption and verification work in this lifetime. To satisfy this constraint, Dan turned to Ray, a distributed computation engine that seeks to abstract away nearly everything about scaling compute (see Dan’s talk about this experience here). As luck would have it, I inherited maintenance of this implementation, which means I finally have a valid excuse to build the home cluster I’ve always dreamed of.
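To give a feel for why Ray fits this kind of workload, here’s a minimal sketch of its programming model. The encrypt_ballot function and the list of ballots are stand-ins of my own, not the actual VAULT code; the point is just that a decorated function fans out across whatever cores and nodes Ray can see.

```python
import secrets
import ray

ray.init()  # attaches to a running cluster if one exists, otherwise starts a local one

@ray.remote
def encrypt_ballot(ballot):
    # Stand-in for the real per-ballot work (randomness generation plus a pile of
    # big-number math); here we just pair the ballot with a fresh 256-bit nonce.
    return ballot, secrets.randbits(256)

ballots = range(1_000)                                   # toy stand-in for a batch of ballots
futures = [encrypt_ballot.remote(b) for b in ballots]    # schedules 1,000 tasks across the cluster
encrypted = ray.get(futures)                             # blocks until every task has finished
```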

Hardware

For my home cluster, I’d prefer not to break the bank. I settled on playing around with Pine64’s RockPro64 single-board computer platform as a jumping-off point. Lots of people have built clusters with Raspberry Pis of various flavors, but the Pine64 boards are a bit more computationally beefy for not much more money (6 cores instead of 4, slightly better clock speeds, and I really like what Pine64 is doing in general). As with most single-board computers (SBCs), these are 64-bit ARM chips, with fairly minimal specs otherwise.

A RockPro64 SBC (photo courtesy of Pine64)

My original microcluster attempt involved a TP-Link switch and a cloudlet case that had to be modified to hold the RockPro64s. I powered everything with a fairly unusual UPS with barrel-jack outputs at various voltages. Unfortunately, the fans provided with the cloudlet case weren’t up to the task of cooling my RockPro64s, so I wound up seeking alternatives.

In addition to the RockPro64s, Pine64 also sells a clusterboard: a mini-ITX board with 7 SO-DIMM slots for SOPINE compute modules, which are slightly less powerful 4-core ARM boards. The nice thing about the clusterboard is that it handles power distribution and networking between the boards, providing an 8-port gigabit Ethernet backplane, so I only need one Ethernet cable and one power cable instead of seven of each.

Clusterboard with a SOPINE module (photo courtesy of Pine64)

The clusterboards are in fairly high demand, but I managed to snag one along with seven of the modules. To house them, I got a Silverstone FARA R1 case (chosen so that I could mount several clusterboards on top of each other if I wanted), a bunch of 120mm fans, a way-too-beefy power supply (overprovisioned for expandability), and a grab bag of heat sinks to help cool the modules.

After doing some test runs, I realized that the clusterboard couldn’t hold a candle to the power of five RockPro64s (even with the RockPros thermally throttling due to the insufficient cooling of the cloudlet case), so I endeavored to find a way to mount everything in the Silverstone case. This required finding a breakout board and splitter for ATX 24-pin power cables, drilling some holes, and getting everything routed and installed. I also had to switch to a Linksys switch, as the TP-Link required 9V power and ATX doesn’t provide that without modifications. The breakout board I chose leaves a lot to be desired when routing more than a few cables, since it only provides two +12V screw terminals, which means cramming three positive wires into each one (one for the switch, five for the RockPros). In the future I will experiment with different means of distributing power, and I currently have a different breakout board on order that at least provides more terminals.

For cooling, I experimented with putting the same heat sinks I got for the SOPINEs on two of the RockPros, only to realize that the low-profile graphene heat sinks sold by Pine64 work way better.

Everything mounted in the case: the clusterboard, a stack of five RockPro64s, the ATX power breakout and splitter, and the switch.

I flashed a bunch of eMMC modules (also from Pine64) with Armbian, and we’re off to the races. Getting Armbian to work on the RockPro64 is a cinch: you flash the boot disk, pop it in, and boot the thing. If it’s connected to a network, SSH is already up and running, and all you have to do is log in with the default credentials, set up an account, and you’re done. On to the software!

Yak shaving

One major issue with using Ray on ARM is that there are no prebuilt releases available to install via pip. That means I had to build them myself. Actually, that means someone else had to, and I just had to find their post in a GitHub thread on the Ray repo.

Once I got Ray installed, the next hurdle was taking advantage of the neat cluster-management tooling Ray provides. Essentially, it rolls an Ansible-like framework into a command-line interface that deploys Docker instances to whatever compute infrastructure you like. There’s support for most cloud providers as well as for local clusters, and it makes pushing Ray-powered code out ridiculously easy. There’s a catch, though: the Docker images provided by Ray are, you guessed it, only for x86_64, meaning I had to rebuild the Docker stack for ARM as well.

Since I already had to gussy up the dockerfiles, I also made some other changes to tailor them to my use case. Mostly, I stripped out Anaconda in favor of pip, since that’s what I’m most comfortable with. Doing this resulted in some PATH breakages that were fixed pretty easily with some bashrc modifications (I don’t usually like installing things through pip as sudo, since if you ever need to change versions of Python or use multiple versions at once, it creates really thorny dependency issues). I haven’t ported the full ML stack Ray provides in their ray-ml dockerfile, since I don’t really need anything it provides, but everything else has been ported and is available on Docker Hub. The dockerfiles are also hosted on GitHub, should you wish to modify them further.
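With ARM images built and the cluster brought up, a quick sanity check from the head node confirms that every board has actually joined. This is stock Ray usage rather than anything specific to my setup; "auto" just tells Ray to attach to the cluster that’s already running instead of spinning up a new local one.

```python
import ray

# Run this on (or tunneled to) the head node that the cluster launcher started.
ray.init(address="auto")

# Total CPUs and memory should reflect the head plus every worker board.
print(ray.cluster_resources())

# One entry per RockPro64/SOPINE that has checked in.
for node in ray.nodes():
    print(node["NodeManagerAddress"], node["Resources"].get("CPU"))
```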

That’s all for now. I’m still tweaking and testing things, but the fact that I can seamlessly deploy some fairly advanced vectorized computations at the snap of my fingers is really freakin’ cool. If you’re curious about what I’m actually doing with all of this, the code I’m playing with is also available on GitHub. As I continue to tweak the hardware, I’ll try to update this post.
