Baochip-1x: A Mostly-Open, 22nm SoC for High Assurance Applications

March 10th, 2026

One of my latest projects is the Baochip-1x, a mostly-open, full-custom silicon chip fabricated in TSMC 22nm, targeted at high assurance applications. It’s a security chip, but far more open than any other security chip; it’s also a general purpose microcontroller that fills a gap between the Raspberry Pi RP2350 (found on the Pi Pico2) and the NXP iMXRT1062 (found on the Teensy 4.1).

It’s the latest step in the Betrusted initiative, spurred by work I did with Ed Snowden 8 years ago trying to answer the question of “can we trust hardware to not betray us?” in the context of mass surveillance by state-level adversaries. The Baochip-1x’s CPU core is descended directly from the FPGA SoC used inside Precursor, a device I made to keep secrets; designed explicitly to run Xous, a pure-Rust rethink of the embedded OS I helped write; and made deliberately compatible with IRIS inspection, a method I pioneered for non-destructively inspecting silicon for correct construction.

In a nutshell, the Baochip-1x is a SoC featuring a 350MHz Vexriscv CPU + MMU, combined with an I/O processor (“BIO”) featuring quad 700MHz PicoRV32s, 4MiB of nonvolatile memory (in the form of RRAM), and 2MiB of SRAM. Also packed into the chip are features typically found exclusively in secure elements, such as a TRNG, a variety of cryptographic accelerators, a secure mesh, glitch sensors, ECC-protected RAM, hardware-protected key slots, and one-way counters.

The chip is fabricated using a fully production-qualified TSMC process using a dedicated mask set. In other words, this isn’t a limited-run MPW curiosity: Baochip’s supply chain is capable of pumping out millions of chips should such demand appear.

Hardware Built to Run High-Assurance Software

The Baochip-1x’s key differentiating feature is the inclusion of a Memory Management Unit (MMU). No other microcontroller in this performance/integration class has this feature, to the best of my knowledge. For those not versed in OS-nerd speak, the MMU is what sets the software that runs on your phone or desktop apart from the software that runs in your toaster oven. It facilitates secure, loadable apps by sticking every application in its own virtual memory space.

The MMU is a venerable piece of technology, dating back to the 1960’s. Its page-based memory protection scheme is well-understood and has passed the test of time; I’ve taught its principles to hundreds of undergraduates, and it continues to be a cornerstone of modern OSes.

Above: Diagram illustrating an early virtual memory scheme from Kilburn, et al, “One-level storage system”, IRE Transactions, EC-11(2):223-235, 1962

When it comes to evaluating security-oriented features, older is not always worse; in fact, withstanding the test of time is a positive signal. For example, the AES cipher is about 26 years old. This seems ancient for computer technology, yet many cryptographers recommend it over newer ciphers explicitly because AES has withstood the test of hundreds of cryptographers trying to break it, with representation from every nation state, over years and years.

I’m aware of newer memory protection technologies, such as CHERI, PMPs, MPUs… and as a nerd, I love thinking about these sorts of things. In fact, in my dissertation, I even advocated for the use of CHERI-style hardware capabilities and tagged pointers in new CPU architectures.

However, as a pragmatic system architect, I see no reason to eschew the MMU in favor of any of these. In fact, the MMU is composable with all of these primitives – it’s valid to have both a PMP and an MMU in the same RISC-V CPU. And, even if you’re using a CHERI-like technology for hardware-enforced bounds checking on pointers, it still doesn’t allow for transparent address space relocation. Without page-based virtual memory, each program would need to be linked to a distinct, non-overlapping region of physical address space at compile time, and you couldn’t have swap memory.

This raises the question: if the MMU is such an obvious addition, why isn’t it more prevalent? Why don’t more players include it in their chips?

“Small” CPUs such as those found in embedded SoCs have lacked this feature since their inception. I trace this convention back to the introduction of the ARM7TDMI core in the 1990s. Back then, transistors were scarce, memory even more so, and so virtual memory was not a great product/market fit for devices with just a couple kilobytes of RAM – not even enough to hold a page table. The ARM7TDMI core’s efficiency and low cost made it a runaway success, shipping over a billion units and establishing ARM as the dominant player in the embedded SoC space.

Fast forward 30 years, and Moore’s Law has given us tens of thousands of times more capability; today, a fleck of silicon smaller than your pinky nail contains more transistors than a full-sized PC desktop from the 1990’s. Despite the progress, these small flecks of silicon continue to adhere to the pattern that was established in the 1990’s: small systems get flat memory spaces with no address isolation.

Above: Die shot of a modern 22nm system-on-chip (SoC). This fleck of silicon is about 4mm on a side and contains more transistors than a desktop PC from the 1990’s. Note how the logic region is more empty space than active gates by silicon area.

The root cause turns out to be precisely that MMUs are so valuable: without one, you can’t run Linux, BSD, or Mach. Thus, when ARM split their IP portfolio into the A, R, and M-series cores, the low-cost M-series cores were forbidden from having an MMU, to prevent price erosion of the high-end A-series cores. Instead, a proprietary hack known as the “MPU” was introduced that provides some memory protection, but without an easy path to benefits such as swap memory and address space relocation.

We’ve been locked into this convention for so long that we simply forgot to challenge the assumptions.

Thanks to the rise of open architecture specifications such as RISC-V, and fully-open implementations of the RISC-V spec such as the Vexriscv, I’m not bound by anyone’s rules for what can or can’t go onto an SoC. And so, I am liberated to make the choice to include an MMU in the Baochip-1x.

This naturally empowers enthusiasts to try running Linux on the Baochip-1x, but we (largely Sean ‘xobs’ Cross and me) already wrote a pure-Rust OS called “Xous” that uses the MMU within a framework explicitly targeted at small-memory-footprint devices like the Baochip-1x. The details of Xous are beyond the scope of this post, but if you’re interested, check out the talk we gave at 39C3.

“Now” Is Always the Right Time to Choose More Open Frameworks

This couples into the core argument as to why a “mostly open RTL” SoC is the right thing for this moment in time. As a staunch advocate for open-source technologies, I would love to see a fully-open silicon stack, from the fabs up. I’m heartened to see multiple initiatives working on fixing this problem, but it’s a hard problem. I estimate it could take more than a decade before the open source silicon ecosystem is robust enough to bring economically competitive SoCs to market.

For those of us looking to create an embedded product today, that leaves only one practical option: continue to use Cortex-M ARM devices, and if we want hardware memory protection, we have to tune our software to their proprietary MPU. This means further entrenching our code bases in a proprietary standard. Do I really want to spend my time porting Xous to use ARM’s proprietary flavor of memory protection? Surely not.

Thus, I would argue that we simply can’t afford to wait for fully open source PDKs to come along. Given the choice between a partially-open RTL tapeout today and waiting for a perfect, fully open-source solution, the benefit of taping out today is crystal clear to me.

A partially-open SoC available today empowers a larger community that is interested in an open source future, even if they aren’t hardware experts. As a larger community, we can begin the process of de-leveraging ARM together, so that when economically viable, “truly open” silicon alternatives come to market, they can drop directly into a mature application stack. After all, software drives demand for silicon, not the other way around.

The good news is that on the Baochip-1x, everything that can “compute” on data is available for simulation and inspection, and it’s already available on GitHub. The parts that are closed are components such as the AXI bus framework, USB PHY, and analog components such as the PLL, voltage regulators, and I/O pads.

Thus, while certain portions of the Baochip-1x SoC are closed-source, none of them are involved in the transformation of data. In other words, all the closed source components are effectively “wires”: the data that goes in on one side should match the data coming out the other side. While this is dissatisfying from the “absolute trust” perspective – one can’t definitively rule out the possibility of back doors in black-box wires – we can inspect their perimeters and confirm that, for a broad range of inputs, they behave correctly. It’s not perfect transparency, but it’s far better than the fully-NDA SoCs we currently use to handle our secrets, and more importantly, it allows us to start writing code for open architectures, paving a road towards an eventually fully-open silicon-to-software future.

Hitchhiking for the Win

Those with a bit of silicon savvy would note that it’s not cheap to produce such a chip; yet I have not raised a dollar of venture capital, and I’m not independently wealthy, either. So how is this possible?

The short answer is I “hitchhiked” on a 22 nm chip designed primarily by Crossbar, Inc. I was able to include a CPU of my choice, along with a few other features, in some unused free space on the chip’s floorplan. By switching which CPU is active, you effectively get two chips for the price of one mask set.

Above: floorplan of the Baochip, illustrating the location and relative sizes of its 5 open-source CPU cores.

For those who haven’t peeked under the hood of a System-on-Chip (SoC), the key fact to know is that the cost of modern SoCs is driven largely by peripherals and memory. The CPU itself is often just a small fraction of the area, just a couple percent in the case of the Baochip-1x. Furthermore, all peripherals are “memory mapped”: flashing an LED, for example, entails tickling some specific locations in memory.

Who does the tickling doesn’t matter – whether an ARM CPU, a RISC-V CPU, or even a state machine, the peripherals respond just the same. Thus, one can give the same “body” two different “personalities” by switching out its “brain”: swap the CPU core, and the same physical piece of silicon can run vastly different code bases.

The long answer starts a couple years ago, with Crossbar wanting to build a high-performance secure enclave that would differentiate itself in several ways, notably by fabricating in a relatively advanced (compared to other security chips) 22 nm process and by using their RRAM technology for non-volatile storage. RRAM is similar to FLASH memory in that it retains data without power but with faster write times and smaller (32-byte) page sizes, and it can scale below 40 nm – a limit below which FLASH has not been able to scale.

In addition to flexing their process superiority, they wanted to differentiate by being pragmatically open source about the design; security chips have traditionally been wrapped behind NDAs, despite calls from users for transparency.

Paradoxically, open source security chips are harder to certify, because certification standards such as Common Criteria effectively rate closed-source flaws as “more secure” than open-source flaws. My understanding is that the argument goes something along the lines of, “hacking chips is hard, so any barrier you can add to the up-front cost of exploiting the chip increases the effective security of the chip overall”. Basically, if the pen tester doing a security evaluation judges that a bug is easier to find and exploit when the source code is public, then sharing the source code lowers your score. As a result, the certification scores of open source chips are likely worse than those of closed source chips. And since you can’t sell security chips to big customers without certifications, security chips end up being mostly closed source.

Kind of a crazy system, right? But if you consider that the people buying oodles and oodles of security chips are institutions like banks and governments, filled with non-technical managers whose primary focus is risk management, plus they are outsourcing the technical evaluation anyways – the status quo makes a little more sense. What’s a banker going to do with the source code of a chip, anyways?

Crossbar wanted to buck the trend and heed the call for open source transparency in security chips, and approached me to advise on strategy. I agreed to help them, but under one condition: that I would be allowed to add a CPU core of my own choice and sell a version of the chip under my own brand. Part of the reason was that Crossbar, for risk reduction reasons, wanted to go with a proprietary ARM CPU. Having designed chips in a prior life, I can appreciate the desire to reduce risk by going with a tapeout-proven core.

However, as an open source strategy advisor, I argued that users who viewed open source as a positive feature would likely also expect, at a minimum, that the CPU would be open source. Thus I offered to add the battle-tested CPU core from the Precursor SoC – the Vexriscv – to the tapeout, and I promised I would implement the core in such a way that even if it didn’t work, we could just switch it off and there would be minimal impact on the chip’s power and area budget.

Out of this arrangement was born the Baochip-1x.

Bringing the Baochip-1x Into the Market

At the time of writing, wafers containing the Baochip-1x design have been fabricated, and hundreds of the chips have been handed out through an early sampling program. These engineering samples were all hand-screened by me.

However, that’s about to change. There’s currently a pod of wafers hustling through a fab in Hsinchu, and two of them are earmarked to become fully production-qualified Baochip-1x silicon. These will go through a fully automated screening flow. Assuming this process completes smoothly, I’ll have a few thousand Baochip-1x chips available to sell. More chips are planned for later in the year, but a combination of capital constraints, risk mitigation, and the sheer time it takes to go from blank silicon to fully assembled devices pushes further inventory out to late 2026.

If you’re eager to play with the Baochip-1x and our Rust-based OS “Xous” which uses its MMU, consider checking out the “Dabao” evaluation board. A limited number are available for pre-order today on Crowd Supply. If this were a closed-source chip, this would be akin to a sampling or preview program for launch partners – one that typically comes with NDAs and gatekeepers controlling access. With Baochip, there’s none of that. Think of the campaign as a “friends and family” offering: an opportunity for developers and hackers to play with chips at the earliest part of the mass production cycle, before the supply chain is fully ramped up and sorted out.

Name that Ware, February 2026

February 27th, 2026

This month’s Ware is shown below:

Do I sense a theme? Welcome to the tour of the various little gadgets I have littered around my desk for test & measurement!

This one is likely to be guessed pretty quickly as well, but a shout-out to Ole for introducing me to this little gem. It’s pretty impressive how many features & diagnostics are packed into this tiny package. It’s not the cheapest tool, but it’s a good tool – and I have to say I strongly agree with many of the product designer’s technical and aesthetic choices. The build quality is definitely up there.

I especially appreciate products that don’t default to crappy Phillips drive screws – this is probably a battle that I will ultimately lose, but a hill I plan to die on: the world needs to move on and use a better drive type already. Unfortunately, without specific prompting, AIs tend to default to rendering slotted or Phillips drive screws…

Winner, Name that Ware January 2026

February 27th, 2026

The Ware for January 2026 is a FNIRSI DPS-150. Tim nailed it almost immediately; congrats, email me for your prize! The DPS-150 is a small, portable DC “benchtop” power supply that converts USB-C into a range of voltages and currents. samchin convinced me to get one of these as an impulse buy in the Shenzhen markets last month. I’ll have to say that overall I’m happy with it, but the UI has been challenging for me to wrap my head around. Definitely keep the user manual for this one – I’m still referring to it to figure out all the modes.

The main technical issue I’ve had with it is that the overcurrent protection trips when powering loads that have steep changes in current draw – for example, I can’t boot a Raspberry Pi 4 off of this supply, as it interprets the current spike when the CPU ramps to full frequency as an overcurrent event, and shuts down as a protection response. However, it’s perfect for simulating a small battery – the current graph over time is definitely handy feedback during embedded development – and it’s the “right amount of small”.

There’s an even smaller DC supply that I own (Sinilink XY3605 – I thought I name that wared it but apparently I didn’t) but it’s so small I have to carry around a remote control to program it, which is inconvenient enough that in practice I reach for this one instead, even though it’s slightly larger. Still, when I’m at my desk, nothing beats my Envox EEZ Bench Box 3. Love that thing!

Name that Ware, January 2026

January 31st, 2026

The Ware for January 2026 is shown below:

Enjoy!

[update: added photo of top side, since the ware was already guessed – just for more enjoyment]

Winner, Name that Ware December 2025

January 31st, 2026

The Ware for December 2025 is a Spectral Instruments Series 800 camera. I was pretty shocked at how quickly this was guessed given the very small portion of the instrument that was shown, but, then again – that’s how it goes sometimes. Congrats to johslarsen for nailing this one; email me for your prize.

I had prepared a series of “hint” images in case it turned out to be too hard to guess the ware – they’re too neat not to share, so here they are:

The module above is the “other half” of the assembly – you can see the tips of the pogo pins peeking through the metal shield that press into the mating pins in the original image, shown again below for reference:

The white square in the center of the “other half” is a thermoelectric cooler (TEC) stack which presses onto the lavender-colored ceramic sensor, visible through the round cut-out in the PCB above, via a spring-loaded heat pipe of some kind. The chamber containing the TEC and sensor is kept in a vacuum – the whole thing was difficult to take apart because even after a decade in storage, there was still a decent vacuum in the chamber; only after I took a mallet to it and heard the hiss of air rushing in did the whole thing pop apart.

The physical construction is a prime example of no-expense-spared engineering – a C-shaped assembly made out of three PCBs, surrounding a set of plumbing that I think is for vacuum and cooling. The whole assembly seems to be engineered around the principle of getting a sensor as cold as possible without resorting to cryogenics, with little concern for power consumption, size, or cost. The image sensor itself is glued to a fiber optic block weighing over a kilogram that is ~10cm long. The block transmits light while serving as a thermal barrier to the sample material at ambient, or perhaps even elevated, temperatures.

This is all part of a Roche 454 DNA sequencer that I took apart a while ago. There were an enormous number of fascinating bits and bobs inside the beast, but the TL;DR is it’s basically a grad student’s optical bench, complete with an optical breadboard and its array of drilled/tapped holes, that got stuck in a cosmetic case with minimal cost reduction.

Perhaps I got an early-production run unit, but also, probably only hundreds to thousands of these were ever made, which is not enough volume to work through and streamline all the production kinks on an instrument this complicated. I’m guessing that in practice, no two units were exactly alike. The camera module that was last month’s ware, however, was an “off the shelf” sub-component that was probably made in larger numbers.