
Guided Tour of the Precursor Motherboard

Friday, September 25th, 2020

We talk a lot about “verifiable hardware”, but it’s hard to verify something when you don’t know what you’re looking at. This post takes a stab at explaining the major features of the Precursor motherboard by first indicating the location of physical components, then by briefly discussing the rationale behind their curation.

Above is a photo of a pre-production version of Precursor, annotated with the location of key components. Like software, hardware has revisions too. So, when verifying a system, be sure to check the revision of the board first. The final production units will have a clear revision code printed on the back side of every board and we’ll tell you where to look for the code once the location is finalized. There will be a few changes to the board before production, which we’ll talk about later on.

But what do all the components do, and how are they connected? Above is a block diagram that tries to capture the relationship between all the components.

Trusted and Untrusted Domains

First and foremost, you’ll notice that the design is split into two major domains: the “T-domain” and the “U-domain”. “T” stands for “Trusted”; “U” stands for “Untrusted”. A simplified diagram like this helps to analyze the security of the system, as it clearly illustrates what goes into and out of the T-domain; in other words, it defines the hardware attack surface of the trusted domain. Of course, not shown explicitly on the diagram are the side-channels, such as RF emissions and power fluctuations, which can be used to exfiltrate secret data. Very briefly, RF emissions are mitigated by enclosing the entire T-domain in a Faraday cage. Meanwhile, power fluctuations are mitigated partially through local filtering and partially through the use of constant-time algorithms to perform sensitive computations.

As the “Trusted” name implies, the T-domain is where the secrets go, while the U-domain acts as a first-level firewall to the untrusted Internet. The U-domain is explicitly designed for very low power consumption, so that it can be “always on” while still providing several days of standby time. We refer to the FPGA inside the U-domain as the Embedded Controller (EC), and the FPGA inside the T-domain as the System on Chip (SoC) or sometimes simply as “the FPGA”.

Power Management and the Embedded Controller (EC)

The intention is that the always-on EC listens for incoming wifi packets; only once a valid packet is received will the T-domain be powered on.

Using a low-power EC separate from the SoC allows power-hungry processing to be done in bursts, after which the T-domain powers itself off. Thanks to the “memory LCD” we have chosen, the display can continue to show an image even while the T-domain is powered down. Of course, leaving data on the screen while the T-domain is powered down is a potential security risk, but users can adjust the power policy to trade off between security and battery life based on their particular use case and threat scenario. We anticipate that the T-domain running full bore with no power management would exhaust an 1100 mAh battery in about 6-7 hours. Any time spent in an idle state will greatly extend the battery life; thus, for a hypothetical messaging application where the CPU is only active during periods of typing and data transfer, one should be able to achieve a full day of use on a single charge.
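As a back-of-the-envelope sanity check on those figures (the standby draw and duty cycle below are illustrative assumptions, not measurements):

```python
# Rough battery-life estimate from the figures above: an 1100 mAh pack
# lasting ~6.5 h at full tilt implies roughly 170 mA of average draw.
battery_mah = 1100
full_on_ma = battery_mah / 6.5      # ~170 mA when the T-domain runs full bore
standby_ma = 5                      # assumed EC-only standby draw (illustrative)
duty = 0.25                         # assume the CPU is active 25% of the time

avg_ma = duty * full_on_ma + (1 - duty) * standby_ma
print(f"estimated runtime: {battery_mah / avg_ma:.0f} hours")  # ~24 h under these assumptions
```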

Mapping the T-Domain Attack Surface

Extending the boundary of trust to include human-facing I/O is a core tenet of the Precursor secure design philosophy. Thus, the T-domain also includes the keyboard, LCD, and audio elements. This is because deferring the rendering of messages to an untrusted display means that any cryptography used to secure messages can be trivially defeated by a screen scraper. Likewise, delegating keystrokes to an untrusted touch controller offers a quick work-around for capturing outgoing secrets with a keystroke logger. To prevent this, Precursor incorporates an LCD that can be verified with an optical microscope and a physical keyboard that is trivial to verify with the naked eye. Precursor also forgoes an integrated microphone and instead favors a 3.5mm headphone jack, thus putting users solidly in control of when the device may or may not have the ability to record a conversation.

The green boxes in the block diagram above are connectors, that is, the points where components not integrated into the mainboard plug in. With this in mind, we can define the attack surface of the T-domain. We can see that we expose GPIO, USB, and JTAG to external connectors. We also have a bus to the U-domain that we call the COM bus, as well as a pair of quasi-static pins to communicate power state information and a set of pins to monitor the keyboard for user wake-up events. Let’s explore each of these attack surfaces in a little more detail.

  1. JTAG A user is required to glue shut the JTAG port when the system needs to be sealed and secrets made inaccessible. This is done by placing a metal shield can over the T-domain and dabbing a specially formulated epoxy into the holes. This simultaneously completes the Faraday cage which reduces side band emissions while making the JTAG port more difficult to access.
  2. GPIOs and USB In its default configuration, the GPIOs are inert, and thus a difficult attack surface. We also advocate leaving the USB pins disconnected for secure applications; however, developers may opt to wire them up inside the FPGA, at the risk of opening up the expansive USB attack surface.
  3. Raw Power Input The primary postulated attack vector through the raw power input is glitching. Denial of service is of course also possible, either by removing power or by destroying the system with an over-voltage, but these attacks are beyond the scope of this discussion. The primary countermeasure against raw power input glitches is a reset monitor that will extend any glitch into a several-millisecond-long reset signal if the voltage drops below a prescribed level. Furthermore, local filtering, regulation, and power storage remove very short glitches. All T-domain power signals are routed so they are fully contained within the T-domain shield can. No T-domain power signals are exposed as outer-layer traces or vias on either the top or back side of the PCB outside of the T-domain shield.
  4. Power State Pins The power state pins allow the EC to coordinate with the FPGA SoC on the current power state. They are structured as “read only” from the SoC, and are also considered to be “advisory”. In other words, the SoC is capable of independently forcing its own power into the on-state; therefore the EC is only able to shut down power to the SoC when it is explicitly allowed by the T-domain. This minimizes the risk of the EC attempting to perform a glitch attack against the SoC by manipulating its access to power.
  5. Keyboard Wakeup Pins In order for the EC to know when to power on the system, the EC also has access to a pair of row/column pins on the keyboard matrix. This enables the EC to respond to a two-key chord to wake the system from sleep; however, it also means the EC can potentially monitor a few keys on the keyboard, leading to a potential information leakage. This is mitigated by a set of hardware isolation switches which the SoC uses to deny EC access to the keyboard matrix once the system is powered on.
  6. Audio is rendered by way of a CODEC chip. The DVT prototype shown in the photo above uses the LM49352, but a few months ago it was announced to be end-of-life by the vendor, TI. For production, we plan on employing the TLV320AIC3100, a functionally equivalent CODEC which will hopefully have a longer production lifetime. The CODEC chip integrates all the circuitry necessary to amplify the microphone, drive a pair of headphones, and also drive a small speaker for notifications. While it is possible to bury implants within the audio chip, it’s thought that any implant large enough to either record a useful amount of conversation or to do speech-to-text processing of the conversation would create an easily detectable size or power signature, or both. The headphone jack is wired for optimum compatibility with headsets from the Android ecosystem.
  7. COM bus Finally, the COM bus is an SPI interface used by the T-domain to talk to the rest of the world. It is directly connected to the EC. The COM bus is structured so that the SoC is the sole controller of the SPI bus; the EC is not able to send data to the SoC unless the SoC allows it. Further packet-level and protocol-level countermeasures are required on the COM bus to harden its attack surface, but at the end of the day, this is the primary pathway for data to reach the T-domain from the outside world, and therefore it should be the primary focus of any software-oriented attack surface analysis.

It is important that COM bus packets be authenticated, encrypted, and serialized prior to hand-off to the EC; the EC can only put T-domain data into the appropriate envelopes for routing on the Internet and no more. This allows us to safely delegate to the EC the job of mapping COM bus packets onto a given network interface.
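As a sketch of what that envelope discipline might look like (the cipher, framing, and names below are illustrative assumptions, not the actual Precursor wire protocol):

```python
import os
import struct
from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305

def seal_for_ec(key: bytes, seq: int, payload: bytes) -> bytes:
    """Authenticate and encrypt a message inside the T-domain before hand-off.

    The EC only ever sees an opaque, sequence-numbered envelope that it can
    route onto the network but cannot read or silently modify.
    """
    header = struct.pack(">Q", seq)          # bind a monotonic sequence number
    nonce = os.urandom(12)
    ciphertext = ChaCha20Poly1305(key).encrypt(nonce, payload, header)
    return header + nonce + ciphertext

key = ChaCha20Poly1305.generate_key()        # in reality, derived from sealed device keys
envelope = seal_for_ec(key, seq=1, payload=b"hello from the T-domain")
```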

COM Connects to the Internet

Secure software running on the T-domain should be as oblivious as practical to the type of Internet connection implemented by the EC. Thus, whether the EC routes COM packets to wifi, LTE, Bluetooth, or Ethernet should have no bearing on the security of the T-domain.

For Precursor, we have chosen to add a Silicon Labs WF200 wifi chip to the EC as a primary means of Internet connectivity. The Silicon Labs WF200 contains a substantial amount of un-trustable code and circuitry; however, because the WF200 is in the Untrusted domain, we have no need to trust it, just as we have no need to trust the cable modem or the core network routers on the Internet.

Thus we can safely leverage the substantial co-processing within the WF200 to handle the complications of associating with WAPs, as well as other MAC/PHY-level nuances of wireless Ethernet. This allows us to substantially reduce the power requirements for the system during “screen off” time, when it is mainly waiting to receive incoming messages. Furthermore, the WF200 has a well-characterized low-power mode whose advertised figures agree well with our bench measurements. This is unlike the ESP32, which, as of our evaluation a year ago, advertised low power but suffered from power-state transition nuances that prevented a practical system from achieving overall low power consumption.

The EC takes care of uploading firmware to the WF200, as well as servicing its interrupts and transcribing received packets to the T-domain. In addition to these responsibilities, the EC can detect whether the system has been physically moved during standby by polling an IMU, and it also manages the battery charger and gas gauge. It also provides the ~1Hz square wave that the LCD requires during standby to keep displaying its image properly.

Random Number Generators

The T-domain includes a discrete TRNG. This is meant to complement a TRNG integrated into the SoC itself. The benefit of a discrete TRNG is that it can be verified using common lab equipment, such as an oscilloscope; the drawback of a discrete TRNG is that an attacker with physical possession of the device could manipulate its output by drilling through the RF shield and dropping a needle onto millimeter-scale component pads.

The integrated TRNG inside the SoC is less vulnerable to attack by a physically present attacker, but at the expense of being difficult to manually verify. Thus, we provision both discrete and integrated TRNGs, and recommend that developers combine their outputs prior to use in secure applications.
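A minimal sketch of that combining step (illustrative only; the real logic runs on-device rather than in Python):

```python
import hashlib
import os

def combined_seed(discrete_trng: bytes, soc_trng: bytes) -> bytes:
    # Hashing the two independent sources together yields a seed that is at
    # least as unpredictable as the stronger of the two inputs, so a backdoor
    # in either single TRNG is not enough to compromise the output.
    return hashlib.sha512(discrete_trng + soc_trng).digest()

# os.urandom() stands in here for reads from the two hardware sources.
seed = combined_seed(os.urandom(64), os.urandom(64))
```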

Keeping Time

A sense of time is important in many cryptographic protocols, thus a Real Time Clock (RTC) is a security-critical element. We chose an RTC that integrates both the crystal and the clock chip into a single hermetically sealed package to reduce the attack surface available to a physically present attacker to manipulate time. The chosen RTC also incorporates basic clock integrity checking, which helps to mitigate simple glitch attacks against the RTC.

RAM: Why 16MiB?

We provide 16MiB of battery-backed SRAM for secure computations. We made it battery-backed so as to reduce the standby/resume overhead of the system, at the expense of creating a potential attack surface for physically present attackers to recover data from the system.

The choice of 16MiB of SRAM was deliberate and motivated by several factors:

  1. Power A larger DRAM would have required using the DRAM PHY on the SoC. This interface is extremely power hungry and would have more than doubled the amount of power consumed when the system is on. Furthermore, keeping the DRAM in self-refresh mode would prevent us from powering down the FPGA entirely, meaning that the substantial standby leakage power of the SoC would count against the “screen-off” time.
  2. Code complexity Precursor is a spin-off from the Betrusted project. One of Betrusted’s goals is to build a codebase that could be audited by an individual or small group within a reasonable amount of time. Choosing a small amount of RAM is the equivalent of burning the boats before a battle to force an advancing army into a win-or-die situation; it confines every choice made in the OS and application layers to prefer simpler, less complex implementations at the expense of more development time and fewer features.
  3. Roadmap Eventually, we would like to fit the entire T-domain of Precursor into the footprint of a single chip. Incorporating hundreds of megabytes of RAM on-chip is impractical, even in aggressive process nodes. In a more realistic 28 or 40nm node, we estimate 4-16MiB is a potentially practical amount of RAM to incorporate in a low-cost, low-power, mass-market implementation. Provisioning Precursor with a similar amount of RAM helps to ensure code developed for it will have a migration path to more highly integrated solutions down the road.

Self-Destruct Mode

Finally, we have provisioned a “self-destruct” feature for users that opt to use battery-backed AES keys to protect their FPGA image. The “self-destruct” mechanism consists of a latch built using discrete transistors. During normal power-on, the system latches into a “normal” mode of operation. However, when the SoC asserts the “KEY_KILL” pin, the latch switches into the “kill” mode of operation. Once in the “kill” mode, power is cut to the T-domain – including the power that backs up the AES key. There is also a set of active pull-downs that rapidly discharges the relevant voltage rails, ensuring the power lines drop to a level suitable for data erasure in a matter of milliseconds. Although the data erasure only takes a fraction of a second, the only way to get out of “kill” mode is to remove the battery or to wait for the battery to fully discharge.

That wraps up our whirlwind tour of the Precursor motherboard. This post introduced all of the major design features of the Precursor motherboard and briefly summarized the rationale for each choice. The system architecture minimizes the attack surface of trusted components. Furthermore, component choice was guided by the principles of simplicity and transparency while trying to provide a complete but auditable solution for security-sensitive applications. Finally, the mainboard was designed with components only on one side, and all security-critical components are contained within a well defined area, with the hope that this makes it easier to visually inspect and verify units upon receipt by end users.

Liked this post? Sign up to the Precursor funding campaign mailing list to be notified when new posts go live!

Introducing Precursor

Saturday, September 19th, 2020

Precursor (pre·​cur·​sor | \ pri-ˈkər-sər):
1. one that precedes or gives rise to; a predecessor; harbinger
2. a pocketable open development board

Precursor is a mobile, open source electronics platform. Similar to how a Raspberry Pi or an Arduino can be transformed into an IoT gadget with the addition of a couple breakout boards, some solder, and a bit of code, Precursor is a framework upon which you can assemble a wide variety of DIY mobile applications.

Precursor is unique in the open source electronics space in that it’s designed from the ground-up to be carried around in your pocket. It’s not just a naked circuit board with connectors hanging off at random locations: it comes fully integrated—with a rechargeable battery, a display, and a keyboard—in a sleek, 7.2 mm (quarter-inch) aluminum case.

Precursor → Betrusted

Followers of my blog will recognize the case design from Betrusted, a secure-communication device. It’s certainly no accident that Precursor looks like Betrusted, as the latter is built upon the former. Betrusted is a great example of the kind of thing that you (and we) might want to make using Precursor. Betrusted is a huge software project, however, and it will require several years to get right.

Precursor, on the other hand, is ready today. And it has all of the features you might need to validate and test a software stack like the one that will drive Betrusted. We are also using the FPGA in Precursor to validate our SoC design, which will eventually give us the confidence we need to tape out a full-custom Betrusted ASIC, thereby lowering production costs while raising the bar on hardware security.

In the meantime, Precursor gives us a prototyping platform that we can use to work through user-experience challenges, and it gives you a way to implement projects that demand a secure, portable, trustable communications platform but that might not require the same level of hardware tamper resistance that a full-custom ASIC solution could provide.

And for developers, the best part is that Betrusted is 100% open source. As we make progress on the Betrusted software stack, we will roll those improvements back into Precursor, so you can count on a constant stream of updates and patches to the platform.

Hackable. In a Good Way.

Precursor is also unique in that you can hack many aspects of the hardware without a soldering iron. Instead of a traditional ARM or AVR “System on Chip” (SoC), Precursor is powered by the software-defined hardware of a Field Programmable Gate Array (FPGA). FPGAs are a sea of basic logic units that users can wire up using a “bitstream”. Precursor comes pre-loaded with a bitstream that makes the FPGA behave like a RISC-V CPU, but you’re free to load up (or code up) any CPU you like, be it a 6502, an lm32, an AVR, an ARM, or something else. It’s entirely up to you.

This flexibility comes with its own set of trade-offs, of course. CPU speeds are limited to around 100 MHz, and complexity is limited to single-issue, in-order microarchitectures. It’s faster than any Palm Pilot or Nintendo DS, but it’s not looking to replace your smartphone.

At Its Core

We describe bitstreams using a Python-based Fragmented Hardware Description Language (FHDL) called Migen, which powers the LiteX framework. (Migen is to LiteX as GNU is to Linux, hence we refer to the combination as Migen/LiteX.) The framework is flexible enough that we can incorporate Google’s OpenTitan SHA and AES crypto-cores (written in SystemVerilog), yet powerful enough that we can natively describe a bespoke Curve25519 crypto engine.
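To give a flavor of what that looks like, here is a generic Migen sketch (an LED blinker, not a Precursor core) showing how hardware is described in ordinary Python:

```python
from migen import Module, Signal, If

class Blinker(Module):
    def __init__(self, sys_clk_freq=100_000_000):
        self.led = Signal()
        half_period = sys_clk_freq // 2            # toggle twice per second
        counter = Signal(max=half_period)
        # Everything added to self.sync becomes clocked logic in the FPGA.
        self.sync += If(counter == half_period - 1,
                        counter.eq(0),
                        self.led.eq(~self.led)
                     ).Else(
                        counter.eq(counter + 1)
                     )
```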

If you’ve ever wanted to customize your CPU’s instruction set, experiment with hardware accelerators, or make cycle-accurate simulations of retro-hardware, Precursor has you covered. And the best part is, thanks to Precursor’s highly integrated design philosophy, you can take all that hard work out of the lab and on the road.

On the Inside

And if you’re itching for an excuse to break out your soldering iron or your 3D printer, Precursor is here to give you one. While its compact form factor might seem limiting at first, we’ve observed that 80% of projects involve adding just one or two domain-specific sensors or hardware modules to a base platform. And most of those additions come on breakout boards that require only a handful of signal wires.

Precursor’s battery compartment is designed to accommodate breakout boards: it offers eight GPIOs (configurable as three differential pairs plus two single-ended lines) connected directly to the FPGA, along with multiple power rails. You will find any number of third-party breakout boards with sensors ranging from barometers to cameras and radios ranging from BLE to LTE. Patch them in with a soldering iron, and you’re all set. The main trade-off is that the more hardware you add, the less space you have left for your battery. Unless of course you build a bigger enclosure…

On the Outside

If you need even more space or custom mounting hardware, the case is designed for easy fabrication using an aluminum CNC machine or a resin printer. Naturally, our case designs are open source, and the native Solidworks CAD files we provide are constructed such that the enclosure’s length and thickness are parameterized.

Furthermore, Precursor’s bezel is a plain old FR-4 PCB, so if your application does not require a large display and a keyboard, you can simply remove them and replace the bezel with a full-sized circuit board. By way of example, removing the LCD and replacing it with a smaller OLED module would make room for a much larger battery while freeing up space for the custom hardware you might need to build, say, a portable, trustable, VPN-protected LTE hotspot.

Come Have a Look!

If you’ve ever wanted to hack on mobile hardware, Precursor was made for you. By combining an FPGA dev board, a battery, a case, a display, and a keyboard into a single thin, pocket-ready package, it makes it easier than ever to go from a concept to a road-ready piece of hardware.

Precursor will soon be crowdfunding on Crowd Supply. Learn more about its specifications on our pre-launch page, and sign up for our mailing list so that you can take advantage of early-bird pricing when the campaign goes live.

Meta

We’ve decided to do an extended pre-launch phase for the Precursor campaign to gauge interest. After all, we are in the middle of an unprecedented global pandemic, and one of the worst economic downturns in recorded history. It might seem a little crazy to try and fund the project now, but it’s also crazy to try and build trustable hardware that can hold up against state-level adversaries. We make decisions not on what is practical, but what is right.

The fact is, the hardware is at a stage where we are comfortable producing more units and getting it into the hands of developers. The open question is whether developers have the time, interest, and money to participate in our campaign. Initial outreach indicates they might, but we’ll only find out for sure in the coming months. Precursor is not cheap to produce; I am prepared to accept a failed funding campaign as a possible outcome.

We’re also carefully considering alternate sources of funding, such as grants from organizations that share our values (such as NLNet) and commercial sponsors that will not attach conditions that compromise our integrity (you’ll notice the Silicon Labs banner at Crowd Supply). This will hopefully make the hardware more accessible, especially to qualified developers in need, but please keep in mind we are not a big corporation. Like you, we are individuals who need to put food on the table and keep a roof over our heads. Our current plan is to offer a limited number of early bird units at a low price — so if you’re like me and worried about making ends meet next year, subscribe to our mailing list so you can hopefully take advantage of the early bird pricing. And if you’re lucky enough to be in a stable situation, please consider backing the campaign at a higher pricing tier.

Over the coming months, I’ll be mirroring some of the more relevant posts from the campaign onto my blog, sometimes with additional commentary like this. There’s over two years of effort that have gone into building Precursor, and I look forward to sharing with you the insights and knowledge gained on my journey.

On Liberating My Smartwatch From Cloud Services

Saturday, July 25th, 2020

I’ve often said that if we convince ourselves that technology is magic, we risk becoming hostages to it. Just recently, I had a brush with this fate, but happily, I was saved by open source.

At the time of writing, Garmin is suffering from a massive ransomware attack. I also happen to be a user of the Garmin Instinct watch. I’m very happy with it, and in many ways, it’s magical how much capability is packed into such a tiny package.

I also happen to have a hobby of paddling the outrigger canoe:

I consider the GPS watch to be an indispensable piece of safety gear, especially for the boat’s steer, because it’s hard to judge your water speed when you’re more than a few hundred meters from land. If you get stuck in a bad current, without situational awareness you could end up swept out to sea or worse.

The water currents around Singapore can be extreme. When the tides change, the South China Sea eventually finds its way to the Andaman Sea through the Singapore Strait, causing treacherous flows of current that shift over time. Thus, after every paddle, I upload my GPS data to the Garmin Connect cloud and review the route, in part to note dangerous changes in the ebb-and-flow patterns of currents.

While it’s a clear and present privacy risk to upload such data to the Garmin cloud, we’re all familiar with the trade-off: there are only 24 hours in the day to worry about things, and the service just worked so well.

Until yesterday.

We had just wrapped up a paddle with particularly unusual currents, and my paddling partner wanted to know our speeds at a few of the tricky spots. I went to retrieve the data and…well, I found out that Garmin was under attack.

Garmin was being held hostage, and transitively, so was access to my paddling data: a small facet of my life had become a hostage to technology.

A bunch of my paddling friends recommended I try Strava. The good news is Garmin allows data files to be retrieved off of the Instinct watch, for upload to third-party services. All you have to do is plug the watch into a regular USB port, and it shows up as a mass storage device.

The bad news is that as I tried to create an account on Strava, all sorts of warning bells went off. The website is full of dark patterns, and when I clicked to deny Strava access to my health-related data, I was met with this tricky series of dialog boxes:

Click “Decline”…

Click “Deny Permission”…

Click “OK”…

Three clicks to opt out, and if I wasn’t paying attention and just kept clicking the bottom box, I would have opted-in by accident. After this, I was greeted by a creepy list of people to follow (how do they know so much about me from just an email?), and then there’s a tricky dialog box that, if answered incorrectly, routes you to a spot to enter credit card information as part of your “free trial”.

Since Garmin at least made money by selling me a $200+ piece of hardware, collecting my health data is just icing on the cake; for Strava, my health data is the cake. It’s pretty clear to me that Strava made a pitch to its investors that they’ll make fat returns by monetizing my private data, including my health information.

This is a hard no for me. Instead of liberating myself from a hostage situation, going from Garmin to Strava would be like stepping out of the frying pan and directly into the fire.

So, even though this was a busy afternoon … I’m scheduled to paddle again the day after tomorrow, and it would be great to have my boat speed analytics before then. Plus, I was sufficiently miffed by the Strava experience that I couldn’t help but start searching around to see if I couldn’t cobble together my own privacy-protecting alternative.

I was very pleased to discover an open-source utility called gpsbabel (thank you gpsbabel! I donated!) that can unpack Garmin’s semi-(?)proprietary “.FIT” file format into the interoperable “.GPX” format. From there, I was able to cobble together bits and pieces of XML parsing code and merge it with OpenStreetMaps via the Folium API to create custom maps of my data.
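The conversion step itself is a single gpsbabel invocation; wrapped in Python it looks something like this (file names are placeholders, and “garmin_fit” is gpsbabel’s name for the FIT input format, as I understand it):

```python
import subprocess

# Convert a Garmin .FIT activity into an interoperable .GPX track.
subprocess.run(
    ["gpsbabel", "-i", "garmin_fit", "-f", "activity.fit",
     "-o", "gpx", "-F", "activity.gpx"],
    check=True,
)
```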

Even with getting “lost” on a detour of trying to use the Google Maps API that left an awful “for development only” watermark on all my map tiles, this only took an evening — it wasn’t the best possible use of my time all things considered, but it was mostly a matter of finding the right open-source pieces and gluing them together with Python (fwiw, Python is a great glue, but a terrible structural material. Do not build skyscrapers out of Python). The code quality is pretty crap, but Python allows that, and it gets the job done. Given those caveats, one could use it as a starting point for something better.

Now that I have full control over my data, I’m able to visualize it in ways that make sense to me. For example, I’ve plotted my speed as a heat map over the course, with circles proportional to the speed at that moment, and hover-text that shows my instantaneous speed and heart rate:

It’s exactly the data I need, in the format that I want; no more, and no less. Plus, the output is a single html file that I can share directly with nothing more than a simple link. No analytics, no cookies. Just the data I’ve chosen to share with you.

Here’s a snippet of the code that I use to plot the map data:
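The gist of it (simplified here to a couple of placeholder points; the real code walks the full GPX track) is just folium’s CircleMarker in a loop:

```python
import folium

# Hypothetical parsed track: (lat, lon, speed in knots, heart rate) tuples.
# In the real script these come from the .GPX file produced by gpsbabel.
points = [
    (1.2905, 103.8520, 4.2, 128),
    (1.2911, 103.8533, 5.1, 134),
]

m = folium.Map(location=[points[0][0], points[0][1]], zoom_start=15)
for lat, lon, speed, hr in points:
    folium.CircleMarker(
        location=[lat, lon],
        radius=2 + speed,                         # circle size tracks boat speed
        tooltip=f"{speed:.1f} kt, HR {hr} bpm",   # hover text
        fill=True,
    ).add_to(m)

m.save("paddle.html")                             # one self-contained, shareable HTML file
```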

Like I said, not the best quality code, but it works, and it was quick to write.

Even better yet, I’m no longer uploading my position or fitness data to the cloud — there is a certain intangible satisfaction in “going dark” for yet another surveillance leakage point in my life, without any compromise in quality or convenience.

It’s also an interesting meta-story about how healthy and vibrant the open-source ecosystem is today. When the Garmin cloud fell, I was able to replace the most important functions of it in just an afternoon by cutting and pasting together various open source frameworks.

The point of open source is not to ritualistically compile our stuff from source. It’s the awareness that technology is not magic: that there is a trail of breadcrumbs any of us could follow to liberate our digital lives in case of a potential hostage situation. Should we so desire, open source empowers us to create and run our own essential tools and services.

Edits: added details on how to take data off the watch, and noted the watch’s price.

A Near-Ultrasound (NUS) Data Link

Wednesday, July 8th, 2020

We were requested to investigate “near ultrasound” (NUS) links as part of our research on developing the Simmel reference design for a privacy-preserving COVID-19 contact tracing device. After a month of poking at it, the TL;DR is that, as suspected, the physics of NUS is not conducive to reliable contact tracing. While BLE has the problem that you have too many false positive contacts, NUS has the problem of too many false negatives: pockets, purses, and your own body can effectively block the signal.

That being said, we did develop a pretty decent-performing NUS data link, so we’ve packed up what we did into an open source reference design that you can clone and use in your own projects.


Top trace: demodulated data at 1 meter, 50dB background noise. Bottom trace: raw signal, normalized so it is visible. Without normalization the trace just looks like a flat line.

I imagine one use for this would be a way to provision IoT devices: the “how do I get wifi credentials into an IoT device that lacks both screen and keyboard?” problem. With the addition of a ~$1 microphone to a Cortex-M4 class device, you get a short-range data link to a host device, such as a phone. You can use a web page (via Javascript) to generate the modulated audio directly (relevant example), thus bypassing a host of multi-platform issues, or you can generate a file off-line and send it to any standard music player.

The TL;DR on the link is it uses a 20,833Hz carrier modulated with BPSK. We use PSK31 coding, so our baud rate is ~651 symbols per second (this is the 1/0 symbol rate before Varicode encoding). This isn’t breaking any speed records, but it’s good enough to send a UUID and some keys over the air in a couple seconds. Tests show decent performance over a distance of 1 meter with about 60dB ambient noise (normal conversation or background music playing at the same time).

The demodulator uses a Costas loop. We’ve documented its details, including comments on porting to other chipsets than the NRF52.

We also have a reference modulator using a non-linear transducer (e.g. a piezo element), which uses some of the more advanced features of the NRF52 PWM block to eliminate audible sidebands. We also have a rough C program to generate a .wav file, which needs to be run through a high-pass filter using e.g. Audacity to eliminate the low-frequency modulation sidebands; but the resulting .wav file can be played directly on your smartphone and it will demodulate correctly.
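For a feel of what goes into such a .wav, here is a toy Python sketch of the modulation idea (not the reference C implementation; the per-symbol envelope here is a crude stand-in for PSK31’s transition-only shaping):

```python
import numpy as np
from scipy.io import wavfile

FS = 48_000     # sample rate, Hz (Nyquist of 24 kHz sits comfortably above the carrier)
FC = 20_833     # near-ultrasound carrier, Hz
BAUD = 651      # symbol rate before Varicode encoding

def modulate(bits):
    n = int(round(FS / BAUD))                 # samples per symbol (~74)
    t = np.arange(n) / FS
    env = 0.5 * (1 - np.cos(2 * np.pi * np.arange(n) / n))  # keeps symbol edges click-free
    phase = 0.0
    chunks = []
    for b in bits:
        if b == 0:                            # PSK31 convention: a 0 is a phase reversal
            phase += np.pi
        chunks.append(env * np.sin(2 * np.pi * FC * t + phase))
    return np.concatenate(chunks)

audio = modulate([1, 0, 1, 1, 0, 0, 1, 0])    # toy payload
wavfile.write("nus_test.wav", FS, (audio * 32767).astype(np.int16))
```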


Acknowledgements: Sean ‘xobs’ Cross is an equal contributor to this research. This research is funded through the NGI0 PET Fund, a fund established by NLnet with financial support from the European Commission’s Next Generation Internet programme, under the aegis of DG Communications Networks, Content and Technology under grant agreement No 825310.

Can We Build Trustable Hardware?

Friday, December 27th, 2019

Why Open Hardware on Its Own Doesn’t Solve the Trust Problem

A few years ago, Sean ‘xobs’ Cross and I built an open-source laptop, Novena, from the circuit boards up, and shared our designs with the world. I’m a strong proponent of open hardware, because sharing knowledge is sharing power. One thing we didn’t anticipate was how much the press wanted to frame our open hardware adventure as a more trustable computer. If anything, the process of building Novena made me acutely aware of how little we could trust anything. As we vetted each part for openness and documentation, it became clear that you can’t boot any modern computer without several closed-source firmware blobs running between power-on and the first instruction of your code. Critics on the Internet suggested we should have built our own CPU and SSD if we really wanted to make something we could trust.

I chewed on that suggestion quite a bit. I used to be in the chip business, so the idea of building an open-source SoC from the ground up wasn’t so crazy. However, the more I thought about it, the more I realized that this, too, was short-sighted. In the process of making chips, I’ve also edited masks for chips; chips are surprisingly malleable, even post tape-out. I’ve also spent a decade wrangling supply chains, dealing with fakes, shoddy workmanship, undisclosed part substitutions – there are so many opportunities and motivations to swap out “good” chips for “bad” ones. Even if a factory could push out a perfectly vetted computer, you’ve got couriers, customs officials, and warehouse workers who can tamper with the machine before it reaches the user. Finally, with today’s highly integrated e-commerce systems, injecting malicious hardware into the supply chain can be as easy as buying a product, tampering with it, packaging it into its original box and returning it to the seller so that it can be passed on to an unsuspecting victim.

If you want to learn more about tampering with hardware, check out my presentation at Bluehat.il 2019.

Based on these experiences, I’ve concluded that open hardware is precisely as trustworthy as closed hardware. Which is to say, I have no inherent reason to trust either at all. While open hardware has the opportunity to empower users to innovate and embody a more correct and transparent design intent than closed hardware, at the end of the day any hardware of sufficient complexity is not practical to verify, whether open or closed. Even if we published the complete mask set for a modern billion-transistor CPU, this “source code” is meaningless without a practical method to verify an equivalence between the mask set and the chip in your possession down to a near-atomic level without simultaneously destroying the CPU.

So why, then, is it that we feel we can trust open source software more than closed source software? After all, the Linux kernel is pushing over 25 million lines of code, and its list of contributors includes corporations not typically associated with words like “privacy” or “trust”.

The key, it turns out, is that software has a mechanism for the near-perfect transfer of trust, allowing users to delegate the hard task of auditing programs to experts, and having that effort be translated to the user’s own copy of the program with mathematical precision. Thanks to this, we don’t have to worry about the “supply chain” for our programs; we don’t have to trust the cloud to trust our software.

Software developers manage source code using tools such as Git (above, cloud on left), which use Merkle trees to track changes. These hash trees link code to their development history, making it difficult to surreptitiously insert malicious code after it has been reviewed. Builds are then hashed and signed (above, key in the middle-top), and projects that support reproducible builds enable any third-party auditor to download, build, and confirm (above, green check marks) that the program a user is downloading matches the intent of the developers.

There’s a lot going on in the previous paragraph, but the key take-away is that the trust transfer mechanism in software relies on a thing called a “hash”. If you already know what a hash is, you can skip the next paragraph; otherwise read on.

A hash turns an arbitrarily large file into a much shorter set of symbols: for example, the file on the left is turned into “🐱🐭🐼🐻” (cat-mouse-panda-bear). These symbols have two important properties: even the tiniest change in the original file leads to an enormous change in the shorter set of symbols; and knowledge of the shorter set of symbols tells you virtually nothing about the original file. It’s the first property that really matters for the transfer of trust: basically, a hash is a quick and reliable way to identify small changes in large sets of data. As an example, the file on the right has one digit changed — can you find it? — but the hash has dramatically changed into “🍑🐍🍕🍪” (peach-snake-pizza-cookie).

Because computer source code is also just a string of 1’s and 0’s, we can use hash functions on it, too. This allows us to quickly spot changes in code bases. When multiple developers work together, every contribution gets hashed together with the previous contribution’s hash, creating a tree of hashes. Any attempt to rewrite a contribution after it’s been committed to the tree is going to change the hash of everything from that point forward.
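Here is what that looks like in miniature with SHA-256 (a generic illustration of the idea, not Git’s actual object format):

```python
import hashlib

def h(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

print(h(b"int main() { return 0; }"))
print(h(b"int main() { return 1; }"))   # one character changed -> a completely different digest

# Chaining: each contribution is hashed together with its parent's hash, so
# rewriting anything earlier changes every hash from that point forward.
parent = h(b"initial commit")
child = h(parent.encode() + b"add feature X")
```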

This is why we don’t have to review every one of the 25+ million lines of source inside the Linux kernel individually – we can trust a team of experts to review the code and sleep well knowing that their knowledge and expertise can be transferred into the exact copy of the program running on our very own computers, thanks to the power of hashing.

Because hashes are easy to compute, programs can be verified right before they are run. This is known as closing the “Time-of-Check vs Time-of-Use” (TOCTOU) gap. The smaller the gap between when the program is checked versus when it is run, the less opportunity there is for malicious actors to tamper with the code.

Now consider the analogous picture for open source in the context of hardware, shown above. If it looks complicated, that’s because it is: there are a lot of hands that touch your hardware before it gets to you!

Git can ensure that the original design files haven’t been tampered with, and openness can help ensure that a “best effort” has been made to build and test a device that is trustworthy. However, there are still numerous actors in the supply chain that can tamper with the hardware, and there is no “hardware hash function” that enables us to draw an equivalence between the intent of the developer, and the exact instance of hardware in any user’s possession. The best we can do to check a modern silicon chip is to destructively digest and delayer it for inspection in a SEM, or employ a building-sized microscope to perform ptychographic imaging.

It’s like the Heisenberg Uncertainty Principle, but for hardware: you can’t simultaneously be sure of a computer’s construction without disturbing its function. In other words, for hardware the time of check is decoupled from the time of use, creating opportunities for tampering by malicious actors.

Of course, we rely entirely upon hardware to faithfully compute the hashes and signatures necessary for the perfect transfer of trust in software. Tamper with the hardware, and all of a sudden all these clever maths are for naught: a malicious piece of hardware could forge the results of a hash computation, thus allowing bad code to appear identical to good code.

Three Principles for Building Trustable Hardware

So where does this leave us? Do we throw up our hands in despair? Is there any solution to the hardware verification problem?

I’ve pondered this problem for many years, and distilled my thoughts into three core principles:

1. Complexity is the enemy of verification. Without tools like hashes, Merkle trees and digital signatures to transfer trust between developers and users, we are left in a situation where we are reduced to relying on our own two eyes to assess the correct construction of our hardware. Using tools and apps to automate verification merely shifts the trust problem, as one can only trust the result of a verification tool if the tool itself can be verified. Thus, there is an exponential spiral in the cost and difficulty to verify a piece of hardware the further we drift from relying on our innate human senses. Ideally, the hardware is either trivially verifiable by a non-technical user, or with the technical help of a “trustable” acquaintance, e.g. someone within two degrees of separation in the social network.

2. Verify entire systems, not just components. Verifying the CPU does little good when the keyboard and display contain backdoors. Thus, our perimeter of verification must extend from the point of user interface all the way down to the silicon that carries out the secret computations. While open source secure chip efforts such as Keystone and OpenTitan are laudable and valuable elements of a trustable hardware ecosystem, they are ultimately insufficient by themselves for protecting a user’s private matters.

3. Empower end-users to verify and seal their hardware. Delegating verification and key generation to a central authority leaves users exposed to a wide range of supply chain attacks. Therefore, end users require sufficient documentation to verify that their hardware is correctly constructed. Once verified and provisioned with keys, the hardware also needs to be sealed, so that users do not need to conduct an exhaustive re-verification every time the device happens to leave their immediate person. In general, the better the seal, the longer the device may be left unattended without risk of secret material being physically extracted.

Unfortunately, the first and second principles conspire against everything we have come to expect of electronics and computers today. Since their inception, computer makers have been in an arms race to pack more features and more complexity into ever smaller packages. As a result, it is practically impossible to verify modern hardware, whether open or closed source. Instead, if trustworthiness is the top priority, one must pick a limited set of functions, and design the minimum viable verifiable product around that.

The Simplicity of Betrusted

In order to ground the conversation in something concrete, we (Sean ‘xobs’ Cross, Tom Marble, and I) have started a project called “Betrusted” that aims to translate these principles into a practically verifiable, and thus trustable, device. In line with the first principle, we simplify the device by limiting its function to secure text and voice chat, second-factor authentication, and the storage of digital currency.

This means Betrusted can’t browse the web; it has no “app store”; it won’t hail rides for you; and it can’t help you navigate a city. However, it will be able to keep your private conversations private, give you a solid second factor for authentication, and perhaps provide a safe spot to store digital currency.

In line with the second principle, we have curated a set of peripherals for Betrusted that extend the perimeter of trust to the user’s eyes and fingertips. This sets Betrusted apart from open source chip-only secure enclave projects.

Verifiable I/O

For example, the input surface for Betrusted is a physical keyboard. Physical keyboards have the benefit of being made of nothing but switches and wires, and are thus trivial to verify.

Betrusted’s keyboard is designed to be pulled out and inspected by simply holding it up to a light, and we support different languages by allowing users to change out the keyboard membrane.

The output surface for Betrusted is a black and white LCD with a high pixel density of 200ppi, approaching the performance of ePaper or print media, and is likely sufficient for most text chat, authentication, and banking applications. This display’s on-glass circuits are entirely constructed of transistors large enough to be 100% inspected using a bright light and a USB microscope. Below is an example of what one region of the display looks like through such a microscope at 50x magnification.

The meta-point about the simplicity of this display’s construction is that there are few places to hide effective back doors. This display is more trustable not just because we can observe every transistor; more importantly, we probably don’t have to, as there just aren’t enough transistors available to mount an attack.

Contrast this with more sophisticated color displays, which rely on a fleck of silicon containing millions of transistors that implement a frame buffer and command interface; this controller chip is closed-source. Even if such a chip were open, verification would require a destructive method involving delayering and a SEM. Thus, the inspectability and simplicity of the LCD used in Betrusted is fairly unique in the world of displays.

Verifiable CPU

The CPU is, of course, the most problematic piece. I’ve put some thought into methods for the non-destructive inspection of chips. While it may be possible, I estimate it would cost tens of millions of dollars and a couple years to execute a proof of concept system. Unfortunately, funding such an effort would entail chasing venture capital, which would probably lead to a solution that’s closed-source. While this may be an opportunity to get rich selling services and licensing patented technology to governments and corporations, I am concerned that it may not effectively empower everyday people.

The TL;DR is that the near-term compromise solution is to use an FPGA. We rely on logic placement randomization to mitigate the threat of fixed silicon backdoors, and we rely on bitstream introspection to facilitate trust transfer from designers to user. If you don’t care about the technical details, skip to the next section.

The FPGA we plan to use for Betrusted’s CPU is the Spartan-7 FPGA from Xilinx’s “7-Series”, because its -1L model consumes roughly 2-4x less power than the Lattice ECP5 FPGA. This is the difference between an “all-day” battery life for the Betrusted device, versus a “dead by noon” scenario. The downside of this approach is that the Spartan-7 FPGA is a closed source piece of silicon that currently relies on a proprietary compiler. However, there have been some compelling developments that help mitigate the threat of malicious implants or modifications within the silicon or FPGA toolchain. These are:

• The Symbiflow project is developing a F/OSS toolchain for 7-Series FPGA development, which may eventually eliminate any dependence upon opaque vendor toolchains to compile code for the devices.
• Prjxray is documenting the bitstream format for 7-Series FPGAs. The results of this work-in-progress indicate that even if we can’t understand exactly what every bit does, we can at least detect novel features being activated. That is, the activation of a previously undisclosed back door or feature of the FPGA would not go unnoticed.
• The placement of logic within an FPGA can be trivially randomized by incorporating a random seed in the source code. This means it is not practically useful for an adversary to backdoor a few logic cells within an FPGA. A broadly effective silicon-level attack on an FPGA would lead to gross size changes in the silicon die that can be readily quantified non-destructively through X-rays. The efficacy of this mitigation is analogous to ASLR: it’s not bulletproof, but it’s cheap to execute with a significant payout in complicating potential attacks.

The ability to inspect compiled bitstreams in particular brings the CPU problem back to a software-like situation, where we can effectively transfer elements of trust from designers to the hardware level using mathematical tools. Thus, while detailed verification of an FPGA’s construction at the transistor-level is impractical (but still probably easier than a general-purpose CPU due to its regular structure), the combination of the FPGA’s non-determinism in logic and routing placement, new tools that will enable bitstream inspection, and the prospect of 100% F/OSS solutions to compile designs significantly raises the bar for trust transfer and verification of an FPGA-based CPU.


Above: a highlighted signal within an FPGA design tool, illustrating the notion that design intent can be correlated to hardware blocks within an FPGA.

One may argue that in fact, FPGAs may be the gold standard for verifiable and trustworthy hardware until a viable non-destructive method is developed for the verification of custom silicon. After all, even if the mask-level design for a chip is open sourced, how is one to divine that the chip in their possession faithfully implements every design feature?

The system described so far touches upon the first principle of simplicity, and the second principle of UI-to-silicon verification. It turns out that the 7-Series FPGA may also be able to meet the third principle, user-sealing of devices after inspection and acceptance.

Sealing Secrets within Betrusted

Transparency is great for verification, but users also need to be able to seal the hardware to protect their secrets. In an ideal work flow, users would:

1. Receive a Betrusted device

2. Confirm its correct construction through a combination of visual inspection and FPGA bitstream randomization and introspection, and

3. Provision their Betrusted device with secret keys and seal it.

Ideally, the keys are generated entirely within the Betrusted device itself, and once sealed it should be “difficult” for an adversary with direct physical possession of the device to extract or tamper with these keys.

We believe key generation and self-sealing should be achievable with a 7-series Xilinx device. This is made possible in part by leveraging the bitstream encryption features built into the FPGA hardware by Xilinx. At the time of writing, we are fairly close to understanding enough of the encryption formats and fuse burning mechanisms to provide a fully self-hosted, F/OSS solution for key generation and sealing.

As for how good the seal is, the answer is a bit technical. The TL;DR is that it should not be possible for someone to borrow a Betrusted device for a few hours and extract the keys, and any attempt to do so should leave the hardware permanently altered in obvious ways. The more nuanced answer is that the 7-series devices from Xilinx are quite popular, and have received extensive scrutiny from the broader security community over their lifetime. The best known attacks against the 256-bit CBC AES + SHA-256 HMAC used in these devices leverage hardware side channels to leak information between AES rounds. This attack requires unfettered access to the hardware and about 24 hours to collect data from 1.6 million chosen ciphertexts. While improvement is desirable, keep in mind that a decap-and-image operation to extract keys via physical inspection using a FIB takes around the same amount of time to execute. In other words, the absolute limit on how much one can protect secrets within hardware is probably driven more by physical tamper resistance measures than strictly cryptographic measures.

Furthermore, now that the principle of the side-channel attack has been disclosed, we can apply simple mitigations to frustrate this attack, such as gluing shut or removing the external configuration and debug interfaces necessary to present chosen ciphertexts to the FPGA. Users can also opt to use volatile SRAM-based encryption keys, which are immediately lost upon interruption of battery power, making attempts to remove the FPGA or modify the circuit board significantly riskier. This of course comes at the expense of accidental loss of the key should backup power be interrupted.

At the very least, with a 7-series device, a user will be well aware that their device has been physically compromised, which is a good start, and, in a limiting sense, all you can ever hope for from a tamper-protection standpoint.

You can learn more about the Betrusted project at our github page, https://betrusted.io. We think of Betrusted as more of a “hardware/software distro”, rather than as a product per se. We expect that it will be forked to fit the various specific needs and user scenarios of our diverse digital ecosystem. Whether or not we make completed Betrusted reference devices for sale will depend upon the feedback of the community; we’ve received widely varying opinions on the real demand for a device like this.

Trusting Betrusted vs Using Betrusted

I personally regard Betrusted as more of an evolution toward — rather than an end to — the quest for verifiable, trustworthy hardware. I’ve struggled for years to distill the reasons why openness is insufficient to solve trust problems in hardware into a succinct set of principles. I’m also sure these principles will continue to evolve as we develop a better and more sophisticated understanding of the use cases, their threat models, and the tools available to address them.

My personal motivation for Betrusted was to have private conversations with my non-technical friends. So, another huge hurdle in all of this will of course be user acceptance: would you ever care enough to take the time to verify your hardware? Verifying hardware takes effort, iPhones are just so convenient, Apple has a pretty compelling privacy pitch…and “anyways, good people like me have nothing to hide…right?” Perhaps our quixotic attempt to build a truly verifiable, trustworthy communications device may be received by everyday users as nothing more than a quirky curio.

Even so, I hope that by at least starting the conversation about the problem and spelling it out in concrete terms, we’re laying the framework for others to move the goal posts toward a safer, more private, and more trustworthy digital future.

The Betrusted team would like to extend a special thanks to the NLnet foundation for sponsoring our efforts.