
Sex, Circuits & Deep House

Monday, September 28th, 2015

Cari with the Institute Blinky Badge at Burning Man 2015. Photo credit: Nagutron.

This year for Burning Man, I built a networked light badge for my theme camp, “The Institute”. Walking in the desert at night with no light is a dangerous proposition – you can get run over by cars or bikes, or twist an ankle tripping over an errant bit of rebar sticking out of the ground. Thus, the outrageous, bordering on grotesque, lighting spectacle that Burning Man becomes at night grows out of a central need for safety in the dark. While a pair of dimly flashing red LEDs should be sufficient to ensure one’s safety, anything more subtle than a Las Vegas strip billboard tends to go unnoticed by fast-moving bikers thanks to the LED arms race that has become Burning Man at night.

I wanted to make a bit of lighting that my campmates could use to stay safe – and optionally stay classy by offering a range of more subtle lighting effects. I also wanted the light patterns to be individually unique, allowing easy identification in dark, dusty nights. However, diddling with knobs and code isn’t a very social experience, and few people bring laptops to Burning Man. I wanted to come up with a way for people to craft an identity that was inherently social and interactive. In an act of shameless biomimicry, I copied nature’s most popular protocol for creating individuals – sex.

By adding a peer-to-peer radio in each badge, I was able to implement a protocol for the breeding of lighting patterns via sex.



Some examples of the unique light patterns possible through sex.

Sex

When most people think of sex, what they are actually thinking about is sexual intercourse. This is understandable, as technology allows us to have lots of sexual intercourse without actually accomplishing sexual reproduction. Still, the double-entendre of saying “Nice lights! Care to have sex?” is a playful ice breaker for new interactions between camp mates.

Sex, in this case, is used to breed the characteristics of the badge’s light pattern as defined through a virtual genome. Things like the color range, blinking rate, and saturation of the light pattern are mapped into a set of diploid (two copies of each gene) chromosomes (code) (spec). Just as in biological sex, a badge randomly picks one copy of each gene and packages them into a sperm and an egg (every badge is a hermaphrodite, much like plants). A badge’s sperm is transmitted wirelessly to another host badge, where it’s mixed with the host’s egg and a new individual blending traits of both parents is born. The new LED pattern replaces the current pattern on the egg donor’s badge.
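
To make the mechanics concrete, here is a minimal sketch in C of how a diploid genome and the gamete-selection step might look. The gene count, struct fields, and function names are illustrative placeholders of mine; the actual badge firmware defines its genome per the spec linked above.

#include <stdint.h>
#include <stdlib.h>

#define NUM_GENES 8   /* hypothetical count: hue, saturation, blink rate, etc. */

/* Diploid genome: two copies (alleles) of every gene, one byte each. */
typedef struct {
    uint8_t maternal[NUM_GENES];
    uint8_t paternal[NUM_GENES];
} genome_t;

/* A gamete (sperm or egg) carries one randomly chosen allele per gene. */
typedef struct {
    uint8_t allele[NUM_GENES];
} gamete_t;

/* Meiosis: for each gene, pick one of the two copies at random. */
static void make_gamete(const genome_t *g, gamete_t *out) {
    for (int i = 0; i < NUM_GENES; i++)
        out->allele[i] = (rand() & 1) ? g->maternal[i] : g->paternal[i];
}

/* Fertilization: the host's egg plus a received sperm form the offspring
   genome, which then replaces the egg donor's current light pattern. */
static void fertilize(const gamete_t *egg, const gamete_t *sperm, genome_t *child) {
    for (int i = 0; i < NUM_GENES; i++) {
        child->maternal[i] = egg->allele[i];
        child->paternal[i] = sperm->allele[i];
    }
}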

Biological genetic traits are often analog, not digital – height or weight are not coded as discrete values in a genome. Instead, observed traits are the result of a complex blending process grounded in the minutiae of metabolic pathways and the efficacy of enzymes resulting from the DNA blueprint and environment. The manifestation of binary situations like recessive vs. dominant is often the result of a lot of gain being applied to an analog signal, thus causing the expressed trait to saturate quickly if it’s expressed at all.

In order to capture the wonderful diversity offered by sex, I implemented quantitative traits in the light genome. Instead of having a single bit for each trait, each trait gets a byte, and an expression function combines the values of the two alleles (the two copies of each gene) to derive the final observed trait (phenotype).

By carefully picking expression functions, I can control how the average population looks. Let’s consider saturation (I used an HSV colorspace, instead of RGB, which makes it much easier to create aesthetically pleasing color combinations). A highly saturated color is vivid and bright. A less saturated color appears pastel, until finally it’s washed out and just looks white or gray (a condition analogous to albinism).

If I want albinism to be rare, and bright colors to be common, the expression function could be a saturating add. Thus, even if one allele (copy of the gene) has a low value, the other copy just needs to be a modest value to result in a bright, vivid coloration. Albinism only occurs when both copies have a fairly low value.




Population makeup when using saturating addition to combine the maternal and paternal saturation values. Albinism – a badge light pattern looking white or gray – happens only when both maternal and paternal values are small. ‘S’ means high saturation, and ‘s’ means low saturation. ‘SS’ and ‘Ss’ pairings of genes lead to saturated colors, while only the ‘ss’ combination leads to a net low saturation (albinism).

On the other hand, if I wanted the average population to look pastel, I could simply take the average of the two alleles, and take that to be the saturation value. In this case, a bright color can only be achieved if both alleles have a high value. Likewise, albinism can only be achieved if both alleles have a low value.




Population makeup when using averaging to combine the maternal and paternal saturation values. The most common case is a pastel palette, with vivid colors and albinism both suppressed in the population.

For Burning Man, I chose saturating addition as the expression function, to have the population lean toward vivid colors. I implemented other features such as cyclic dimming, hue rotation, and color range using similar techniques.
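
As a rough illustration of the two expression functions discussed above, here is how the saturating add and the average could be written; the byte-valued alleles follow the genome description, while the function names are my own.

#include <stdint.h>

/* Saturating add: albinism (low saturation) occurs only when BOTH alleles
   are low, so the population skews toward vivid colors. */
static uint8_t express_saturating_add(uint8_t allele_a, uint8_t allele_b) {
    uint16_t sum = (uint16_t)allele_a + (uint16_t)allele_b;
    return (sum > 255) ? 255 : (uint8_t)sum;
}

/* Average: extremes are suppressed on both ends, so the population skews pastel. */
static uint8_t express_average(uint8_t allele_a, uint8_t allele_b) {
    return (uint8_t)(((uint16_t)allele_a + (uint16_t)allele_b) / 2);
}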

It’s important when thinking about biological genes to remember that they aren’t like lines of computer code. Rather, they are like the knobs on an analog synth, and the resulting sound depends not just on the position of the knob, but on where it sits in the signal chain and how it interacts with other effects.

Gender and Consent

Beyond genetics, there is a minefield of thorny decisions to be made when implementing the social policies and protocols around sex. What are the gender roles? And what about consent? This is where technology and society collide, making for a fascinating social experiment.

I wanted everyone to have an opportunity to play both gender roles, so I made the badges hermaphroditic, in the sense that everyone can give or receive genetic material. The “maternal” role receives sperm, combines it with an egg derived from the currently displayed light pattern, and replaces its light pattern with a new hybrid of both. The “paternal” role can transmit a sperm derived from the currently displayed pattern. Each badge has the requisite ports to play both roles, and thus everyone can play the role of male or female simply by being either the originator of or responder to a sex request.

This leads us to the question of consent. One fundamental flaw in the biological implementation of sex is the possibility of rape: operating the hardware doesn’t require mutual consent. I find the idea of rape disgusting, even if it’s virtual, so rape is disallowed in my implementation. In other words, it’s impossible for a paternal badge to force a sperm into a maternal badge: male roles are not allowed to have sex without first being asked by a female role. Instead, the person playing the female role must first initiate sex with a target mate. Conversely, female roles can’t steal sperm from male roles; sperm is only generated after explicit consent from the male. Assuming consent is given, a sperm is transmitted to the maternal badge and the protocol is complete. This two-way handshake assures mutual consent.
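
The resulting two-way handshake is simple enough to sketch in outline; the packet names below are hypothetical, but the ordering mirrors the consent rules described above.

/* Hypothetical packet types for the two-way consent handshake; the real
   badge protocol may name and encode these differently. */
enum sex_packet {
    SEX_REQUEST,  /* maternal badge -> paternal badge: invitation to mate */
    SEX_SPERM,    /* paternal badge -> maternal badge: gamete payload, sent
                     only after the paternal wearer explicitly consents */
};

/* Maternal (initiating) side, in outline:
 *   1. send SEX_REQUEST to the chosen mate
 *   2. wait for SEX_SPERM; give up after a timeout if no consent arrives
 *   3. combine the received sperm with a locally generated egg and replace
 *      the current light pattern with the offspring
 *
 * Paternal (responding) side:
 *   1. on SEX_REQUEST, prompt the wearer for consent
 *   2. only if the wearer agrees, derive a sperm from the current pattern
 *      and transmit SEX_SPERM back to the requester
 */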

This non-intuitive and partially role-reversed implementation of sex led to users asking support questions akin to “I’m trying to have sex, but why am I constantly being denied?” and my response was – well, did you ask your potential mate if it was okay to have sex first? Ah! Consent. The very important but often overlooked step before sex. It’s a socially awkward question, but with some practice it really does become more natural and easy to ask.

Some users were enthusiastic early adopters of explicit consent, while others were less comfortable with the question. It was interesting to see the ways straight men would ask other straight men for sex – they would ask for “ahem, blinky sex” – and anecdotally women seemed more comfortable and natural asking to have sex (regardless of the gender of the target user).

As an additional social experiment, I introduced a “rare” trait (pegged at ~3% of a randomly generated population) consisting of a single bright white pixel that cycles around the LED ring. I wanted to see if campmates would take note and breed for the rare trait simply because it’s rare. At the end of the week, more people were expressing the rare phenotype than at the beginning, so presumably some selective breeding for the trait did happen.

In the end, I felt that having sex to breed interesting light patterns was a lot more fun for everyone than tweaking knobs and sliders in a UI. Also, because traits are inherited through sexual reproduction, by the end of the event one started to see families of badges gaining similar traits, but thanks to the randomness inherent in sex you could still tell individuals apart in the dark by their light patterns.

Finding Friends

Implementing sex requires a peer-to-peer radio. So why not also use the radio to help people locate nearby friends? It seems like a good idea on the surface, but the design of this system is a careful balance between creating a general awareness of friends in the area and creating a messaging client.

Personally, one of the big draws of going to Burning Man is the ability to unplug from the Internet and live in an environment of intimate immediacy – if you’re physically present, you get 100% of my attention; otherwise, all bets are off. Email, SMS, IRC, and other media for interaction (at least, I hear there are others, but I don’t use them…) are great for networking and facilitating business, but they detract from focusing on the here and now. For me there’s something ironic about seeing a couple in a fancy restaurant, both hopelessly lost staring deeply into their smartphones instead of each other’s eyes. Being able to set an auto-responder for two weeks which states that your email will never be read is pretty liberating, and allows me to open my mind up to trains of thought that can take days to complete. Thus, I really wanted to avoid turning the badge into a chat client, or any sort of communication medium that sets any expectation of reading messages and responding in a timely fashion.

On the other hand, meeting up with friends at Burning Man is terribly hard. It’s life before the cell phone – if you’re old enough to remember that. Without a cell phone, you have a choice between enjoying the music, stalking around the venue to find friends, or dancing in one spot all night long so you’re findable. Simply knowing if my friends have finally shown up is a big help; if they haven’t arrived yet, I can get lost in the music and check out the sound in various parts of the venue until they arrive.

Thus, I designed a very simple protocol which reveals only whether your friends are nearby, and nothing else. Every badge emits a broadcast ping every couple of seconds. Ideally, I’d use RSSI (received signal strength indication) to estimate how far away each ping’s sender is, but due to a quirk of the radio hardware I was unable to get a reliable RSSI reading. Instead, every badge listens for pings, incrementing a counter for each badge it hears and decrementing those counters at a slightly slower average rate than the ping broadcast rate. Thus, badges solidly within radio range run up a ping count, and as people get farther and farther away, the ping count decreases as their pings gradually get lost in the noise.
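
A minimal sketch of that ping-counting logic, assuming a fixed-size table of badges and a periodic housekeeping tick; the constants and structure names are illustrative, not the actual firmware’s.

#include <stdint.h>
#include <string.h>

#define MAX_BADGES      16
#define PING_COUNT_MAX  64   /* cap so long-present friends don't run away */

struct badge_state {
    char     name[16];
    uint16_t ping_count;
};

static struct badge_state badges[MAX_BADGES];

/* Called whenever a broadcast ping is heard over the radio. */
void on_ping_received(const char *name) {
    for (int i = 0; i < MAX_BADGES; i++) {
        if (strncmp(badges[i].name, name, sizeof(badges[i].name)) == 0) {
            if (badges[i].ping_count < PING_COUNT_MAX)
                badges[i].ping_count++;
            return;
        }
    }
    /* New badge: claim a free slot if one is available. */
    for (int i = 0; i < MAX_BADGES; i++) {
        if (badges[i].name[0] == '\0') {
            strncpy(badges[i].name, name, sizeof(badges[i].name) - 1);
            badges[i].ping_count = 1;
            return;
        }
    }
}

/* Called on a timer that fires at a slightly slower average rate than pings
   are broadcast: badges within range hold a positive count, while counts for
   out-of-range badges gradually decay to zero as their pings get lost. */
void on_decay_tick(void) {
    for (int i = 0; i < MAX_BADGES; i++)
        if (badges[i].ping_count > 0)
            badges[i].ping_count--;
}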


Friend finding UI in action. In this case, three other badges are nearby, SpacyRedPhage, hap, and happybunnie:-). SpacyRedPhage is well within range of the radio, and the other two are farther away.

The system worked surprisingly well. The reliable range of the radio worked out to be about 200m in practice, which is about the sound field of a major venue at Burning Man. It was very handy for figuring out if my friends had already left for the night, or if they were still prepping at camp; and there was one memorable reunion at sunrise where a group of my camp mates drove our beloved art car, Dr. Brainlove, to Robot Heart and I was able to quickly find them thanks to my badge registering a massive number of pings as they drove into range.

Hardware Details

I’m not so lucky that I get to design such a complex piece of hardware exclusively for a pursuit as whimsical as Burning Man. Rather, this badge is a proof of concept for a larger effort to develop a new open-source platform for networked embedded computers (please don’t call it IoT) backed by a rapid deployment supply chain. Our codename for the platform is Orchard.

The Burning Man badge was our first end-to-end test of Orchard’s “supply chain as a service” concept. The core reference platform is fairly well-documented here, and as you can see looks nothing like the final badge.


Bottom: Orchard reference design; top: Orchard variant as customized for Burning Man.

However, the only differences at a schematic level between the reference platform and the badge are the addition of 14 extra RGB LEDs, the removal of the BLE radio, and a redesign of the captouch electrode pattern. Because the BOM of the badge is a strict subset of the reference design, we were able to go from a couple prototypes in advance of a private Crowd Supply campaign to 85 units delivered at the door of camp mates in about 2.5 months – and the latency of shipping units from China to front doors in the US accounts for one full month of that time.




The badge sports an interactive captouch surface, an OLED display, 900MHz ISM band peer-to-peer radio, microphone, accelerometer, and more!

If you’re curious, you can view documentation about the Orchard platform here, and discuss it at the Kosagi forum.

Reflection

As an engineer, my “default” existence is confined on four sides by cost, schedule, quality, and specs, with a sprinkling of legal, tax, and regulatory constraints on top. It’s pretty easy to lose your creative spark when every day is spent threading the needle of profit and loss.

Even though the implementation of Burning Man’s principles of decommodification and gifting is far from perfect, it’s sufficient to enable me to loosen the shackles of my daily existence and play with technology as a medium for enhancing human interactions, and not simply as a means for profit. In other words, thanks to the values of the community, I’m empowered and supported to build stuff that wouldn’t make sense for corporate shareholders, but might improve the experiences of my closest friends. I think this ability to leave daily existence behind for a couple weeks is important for staying balanced and maintaining perspective, because at least for me maximizing profit is rarely the same as maximizing happiness. After all, a warm smile and a heartfelt hug is priceless.

A Tale of Two Zippers

Monday, February 9th, 2015

Recently, Akiba took me to visit his friend’s zipper factory. I love visiting factories: no matter how simple the product, I learn something new.

This factory is a highly-automated, vertically-integrated manufacturer. To give you an idea of what that means, they take this:


Ingots of 93% zinc, 7% aluminum alloy; approx 1 ton shown

and this:

Compressed sawdust pellets, used to fuel the ingot smelter

and this:

Rice, used to feed the workers

And turn it into this:

Finished puller+slider assemblies

In between the input material and the output product is a fully automated die casting line, a set of tumblers and vibrating pots to release and polish the zippers, and a set of machines to de-burr and join the puller to the slider. I think I counted less than a dozen employees in the facility, and I’m guessing their capacity well exceeds a million zippers a month.

I find vibrapots mesmerizing. I actually don’t know if that’s what they are called — I just call them that (I figure within minutes of this going up, a comment will appear informing me of their proper name). The video below shows these miracles at work. It looks as if the sliders and pullers are lining themselves up in the right orientation by magic, falling into a rail, and being pressed together into that familiar zipper form, in a single fully automated machine.


720p version

If you put your hand in the pot, you’ll find there’s no stirrer to cause the motion that you see; you’ll just feel a strong vibration. If you relax your hand, you’ll find it starting to move along with all the other items in the pot. The entire pot is vibrating in a biased fashion, such that the items inside tend to move in a circular motion. This pushes them onto a set of rails which are shaped to take advantage of asymmetries in the object to allow only the objects that happen to jump on the rail in the correct orientation through to the next stage.

Despite the high level of automation in this factory, many of the workers I saw were performing this one operation:


720p version

This raises the question: why do some zippers have fully automated assembly processes, whereas others are only semi-automatic?

The answer, it turns out, is very subtle, and it boils down to this:

I’ve added red arrows to highlight the key difference between the zippers. This tiny tab, barely visible, is the difference between full automation and a human having to join millions of sliders and pullers together. To understand why, let’s review one critical step in the vibrapot operation.

We paused the vibrapot responsible for sorting the pullers into the correct orientation for the fully automatic process, so I could take a photo of the key step:

As you can see, when the pullers come around the rail, their orientation is random: some are facing right, some facing left. But the joining operation must only insert the slider into the smaller of the two holes. The tiny tab, highlighted above, allows gravity to cause all the pullers to hang in the same direction as they fall into a rail toward the left.

The semi-automated zipper design doesn’t have this tab; as a result, the design is too symmetric for a vibrapot to align the puller. I asked the factory owner if adding the tiny tab would save this labor, and he said absolutely.

At this point, it seems blindingly obvious to me that all zippers should have this tiny tab, but the zipper’s designer wouldn’t have it. Even though the tab is very small, a user can feel the subtle bumps, and it’s perceived as a defect in the design. As a result, the designer insists upon a perfectly smooth tab which accordingly has no feature to easily and reliably allow for automatic orientation.

I’d like to imagine that most people, after watching a person join pullers to sliders for a couple minutes, will be quite alright to suffer the tiny bump on the tip of their zipper to save another human the fate of having to manually align pullers into sliders for 8 hours a day. I suppose alternately, an engineer could spend countless hours trying to design a more complex method for aligning the pullers and sliders, but (a) the zipper’s customer probably wouldn’t pay for that effort and (b) it’s probably net cheaper to pay unskilled labor to manually perform the sorting. They’ve already automated everything else in this factory, so I figure they’ve thought long and hard about this problem, too. My guess is that robots are expensive to build and maintain; people are self-replicating and largely self-maintaining. Remember that third input to the factory, “rice”? Any robot’s spare parts have to be cheaper than rice to earn a place on this factory’s floor.

However, in reality, it’s far too much effort to explain this to end customers; and in fact quite the opposite happens in the market. Because of the extra labor involved in putting these together, the zippers cost more; therefore they tend to end up in high-end products. This further reinforces the notion that really smooth zippers with no tiny tab on them must be the result of quality control and attention to detail.

My world is full of small frustrations similar to this. For example, most customers perceive plastics with a mirror-finish to be of a higher quality than those with a satin finish. While functionally there is no difference in the plastic’s structural performance, it takes a lot more effort to make something with a mirror-finish. The injection molding tools must be painstakingly and meticulously polished, and at every step in the factory, workers must wear white gloves; mountains of plastic are scrapped for hairline defects, and extra films of plastic are placed over mirror surfaces to protect them during shipping.

For all that effort, for all that waste, what’s the first thing a user does? Put their dirty fingerprints all over the mirror finish. Within a minute of coming out of the box, all that effort is undone. Or worse yet, they leave the protective film on, resulting in a net worse cosmetic effect than a satin finish. Contrast this to a satin finish. Satin finishes don’t require protective films, are easier to handle, last longer, and have much better yields. In the user’s hands, they hide small scratches, fingerprints, and bits of dust. Arguably, the satin finish offers a better long-term customer experience than the mirror finish.

But that mirror finish sure does look pretty in photographs and showroom displays!

From Gongkai to Open Source

Monday, December 29th, 2014

About a year and a half ago, I wrote about a $12 “Gongkai” cell phone (pictured above) that I stumbled across in the markets of Shenzhen, China. My most striking impression was that Chinese entrepreneurs had relatively unfettered access to cutting-edge technology, enabling start-ups to innovate while bootstrapping. Meanwhile, Western entrepreneurs often find themselves trapped in a spiderweb of IP frameworks, spending more money on lawyers than on tooling. Further investigation taught me that the Chinese have a parallel system of traditions and ethics around sharing IP, which led me to coin the term “gongkai”. This is deliberately not the Chinese word for “Open Source”, because that word (kaiyuan) refers to openness in a Western-style IP framework, which this is not. Gongkai is more a reference to the fact that copyrighted documents, sometimes labeled “confidential” and “proprietary”, are made known to the public and shared overtly, but not necessarily according to the letter of the law. However, this copying isn’t a one-way flow of value, as it would be in the case of copied movies or music. Rather, these documents are the knowledge base needed to build a phone using the copyright owner’s chips, and as such, this sharing of documents helps to promote the sales of their chips. There is ultimately, if you will, a quid-pro-quo between the copyright holders and the copiers.

This fuzzy, gray relationship between companies and entrepreneurs is just one manifestation of a much broader cultural gap between the East and the West. The West has a “broadcast” view of IP and ownership: good ideas and innovation are credited to a clearly specified set of authors or inventors, and society pays them a royalty for their initiative and good works. China has a “network” view of IP and ownership: the far-sight necessary to create good ideas and innovations is attained by standing on the shoulders of others, and as such there is a network of people who trade these ideas as favors among each other. In a system with such a loose attitude toward IP, sharing with the network is necessary as tomorrow it could be your friend standing on your shoulders, and you’ll be looking to them for favors. This is unlike the West, where rule of law enables IP to be amassed over a long period of time, creating impenetrable monopoly positions. It’s good for the guys on top, but tough for the upstarts.

This brings us to the situation we have today: Apple and Google are building amazing phones of outstanding quality, and start-ups can only hope to build an appcessory for their ecosystem. I’ve reviewed business plans of over a hundred hardware startups by now, and most of them are using overpriced chipsets built using antiquated process technologies as their foundation. I’m no exception to this rule – we use the Freescale i.MX6 for Novena, which is neither the cheapest nor the fastest chip on the market, but it is the one chip where anyone can freely download almost complete documentation and anyone can buy it on Digikey. This parallel constraint of scarce documentation and scarce supply for cutting edge technology forces Western hardware entrepreneurs to look primarily at Arduino, Beaglebone and Raspberry Pi as starting points for their good ideas.


Above: Every object pictured is a phone. Inset: detail of the “Skeleton” novelty phone. Image credits: Halfdan, Rachel Kalmar

Chinese entrepreneurs, on the other hand, churn out new phones at an almost alarming pace. Phone models change on a seasonal basis. Entrepreneurs experiment all the time, integrating whacky features into phones, such as cigarette lighters, extra-large battery packs (that can be used to charge another phone), huge buttons (for the visually impaired), reduced buttons (to give to children as emergency-call phones), watch form factors, and so forth. This is enabled because very small teams of engineers can obtain complete design packages for working phones – case, board, and firmware – allowing them to fork the design and focus only on the pieces they really care about.

As a hardware engineer, I want that. I want to be able to fork existing cell phone designs. I want to be able to use a 364 MHz 32-bit microcontroller with megabytes of integrated RAM and dozens of peripherals costing $3 in single quantities, instead of a 16 MHz 8-bit microcontroller with a few kilobytes of RAM and a smattering of peripherals costing $6 in single quantities. Unfortunately, queries into getting a Western-licensed EDK for the chips used in the Chinese phones were met with a cold shoulder – our volumes are too small, or we have to enter minimum purchase agreements backed by hundreds of thousands of dollars in a cash deposit; and even then, these EDKs don’t include all the reference material the Chinese get to play with. The datasheets are incomplete, and as a result you’re forced to use their proprietary OS ports. It feels like a case of the nice guys finishing last. Can we find a way to get ahead, yet still play nice?

We did some research into the legal frameworks and challenges around absorbing Gongkai IP into the Western ecosystem, and we believe we’ve found a path to repatriate some of the IP from Gongkai into proper Open Source. However, I must interject with a standard disclaimer: we’re not lawyers, so we’ll tell you our beliefs but don’t construe them as legal advice. Our intention is to exercise our right to reverse engineer in a careful, educated fashion to increase the likelihood that, if push comes to shove, the courts will agree with our actions. However, we also feel that shying away from reverse engineering simply because it’s controversial is a slippery slope: you must exercise your rights to have them. If women didn’t vote and black people sat in the back of the bus because they were afraid of controversy, the US would still be segregated and without universal suffrage.

Sometimes, you just have to stand up and assert your rights.

There are two broad categories of issues we have to deal with: patents and copyrights. For patents, the issues are complex, yet it seems the most practical approach is to essentially punt on the issue. This is what the majority of the open source community does, and in fact many corporations have similar policies at the engineering level. Nobody, as far as we know, checks their Linux commits for patent infringement before upstreaming them. Why? Among other reasons, it takes a huge amount of resources to determine which patents apply, and if one could be infringing; and even after expending those resources, one cannot be 100% sure. Furthermore, if one becomes very familiar with the body of patents, it amplifies the possibility that an infringement, should it be found, is willful and thus subject to triple damages. Finally, it’s not even clear where the liability lies, particularly in an open source context. Thus, we do our best not to infringe, but cannot be 100% sure that no one will allege infringement. However, we do apply a license to our work which has a “poison pill” clause for patent holders that do attempt to litigate.

For copyrights, the issue is also extremely complex. The EFF’s Coders’ Rights Project has a Reverse Engineering FAQ that’s a good read if you really want to dig into the issues. The tl;dr is that courts have found that reverse engineering to understand the ideas embedded in code and to achieve interoperability is fair use. As a result, we have the right to study the Gongkai-style IP, understand it, and produce a new work to which we can apply a Western-style Open IP license. Also, none of the files or binaries were encrypted or had access controlled by any technological measure – no circumvention, no DMCA problem.

Furthermore, all the files were obtained from searches linking to public servers – so no CFAA problem, and none of the devices we used in the work came with shrink-wraps, click-throughs, or other end-user license agreements, terms of use, or other agreements that could waive our rights.

Thus empowered by our fair use rights, we decided to embark on a journey to reverse engineer the Mediatek MT6260. It’s a 364 MHz ARM7EJ-S backed by 8 MiB of RAM and dozens of peripherals, from the routine I2C, SPI, PWM and UART to tantalizing extras like an LCD + touchscreen controller, audio codec with speaker amplifier, battery charger, USB, Bluetooth, and of course, GSM. The gray market prices it around $3/unit in single quantities. You do have to read or speak Chinese to get it, and supply has been a bit spotty lately due to high Q4 demand, but we’re hoping the market will open up a bit as things slow down for Chinese New Year.

For a chip of such complexity, we don’t expect our two-man team to be able to unravel its entirety working on it as a part-time hobby project over the period of a year. Rather, we’d be happy if we got enough functionality so that the next time we reach for an ATMega or STM32, we’d also seriously consider the MT6260 as an alternative. Thus, we set out as our goal to port NuttX, a BSD-licensed RTOS, to the chip, and to create a solid framework for incrementally porting drivers for the various peripherals into NuttX. Accompanying this code base would be original hardware schematics, libraries and board layouts that are licensed using CC BY-SA-3.0 plus an Apache 2.0 rider for patent issues.

And thus, the Fernvale project was born.

Fernvale Hardware

Compared to the firmware, the hardware reverse engineering task was fairly straightforward. The documents we could scavenge gave us a notion of the ball-out for the chip, and the naming scheme for the pins was sufficiently descriptive that I could apply common sense and experience to guess the correct method for connecting the chip. For areas that were ambiguous, we had some stripped down phones I could buzz out with a multimeter or stare at under a microscope to determine connectivity; and in the worst case I could also probe a live phone with an oscilloscope just to make sure my understanding was correct.

The more difficult question was how to architect the hardware. We weren’t gunning to build a phone – rather, we wanted to build something a bit closer to the Spark Core, a generic SoM that can be used in various IoT-type applications. In fact, our original renderings and pin-outs were designed to be compatible with the Spark ecosystem of hardware extensions, until we realized there were just too many interesting peripherals in the MT6260 to fit into such a small footprint.


Above: early sketches of the Fernvale hardware

We settled eventually upon a single-sided core PCB that we call the “Fernvale Frond” which embeds the microUSB, microSD, battery, camera, speaker, and Bluetooth functionality (as well as the obligatory buttons and LED). It’s slim, at 3.5mm thick, and at 57x35mm it’s also on the small side. We included holes to mount a partial set of pin headers, spaced to be compatible with an Arduino, although it can only be plugged into 3.3V-compatible Arduino devices.


Above: actual implementation of Fernvale, pictured with Arduino for size reference

The remaining peripherals are broken out to a pair of connectors. One connector is dedicated to GSM-related signals; the other to UI-related peripherals. Splitting GSM into a module with many choices for the RF front end is important, because it makes GSM a bona-fide user-installed feature, thus pushing the regulatory and emissions issue down to the user level. Also, splitting the UI-related features out to another board reduces the cost of the core module, so it can fit into numerous scenarios without locking users into a particular LCD or button arrangement.


Above: Fernvale system diagram, showing the features of each of the three boards


Fernvale Frond mainboard


Fernvale blade UI breakout


Fernvale spore AFE dev board

All the hardware source documents can be downloaded from our wiki.

As an interesting side-note, I had some X-rays taken of the MT6260. We did this to help us identify fake components, just in case we encountered units being sold as empty epoxy blocks, or as remarked versions of other chips (the MT6260 has variants, such as the -DA and the -A, the difference being how much on-chip FLASH is included).


X-ray of the MT6260 chip. A sharp eye can pick out the outline of multiple ICs among the wirebonds. Image credit: Nadya Peek

To our surprise, this $3 chip didn’t contain a single IC, but rather, it’s a set of at least 4 chips, possibly 5, integrated into a single multi-chip module (MCM) containing hundreds of wire bonds. I remember back when the Pentium Pro’s dual-die package came out. That sparked arguments over yielded costs of MCMs versus using a single bigger die; generally, multi-chip modules were considered exotic and expensive. I also remember at the time, Krste Asanović, then a professor at the MIT AI Lab, now at Berkeley, told me that the future wouldn’t be system on a chip, but rather “system mostly on a chip”. The root of his claim is that the economics of adding in mask layers to merge DRAM, FLASH, Analog, RF, and Digital into a single process wasn’t favorable, and instead it would be cheaper and easier to bond multiple die together into a single package. It’s a race between the yield and cost impact (both per-unit and NRE) of adding more process steps in the semiconductor fab, vs. the yield impact (and relative reworkability and lower NRE cost) of assembling modules. Single-chip SoCs were the zeitgeist at the time (and still kind of are), so it’s interesting to see a significant datapoint validating Krste’s insight.

Reversing the Boot Structure

The amount of documentation made available to Shanzhai engineers in China seems to be just enough to enable them to assemble a phone and customize its UI, but not enough to do a full OS port. You eventually come to recognize that all the phones based on a particular chipset have the same backdoor codes, and often times the UI is inconsistent with the implemented hardware. For example, the $12 phone mentioned at the top of the post will prompt you to plug headphones into the headphone jack for the FM radio to work, yet there is no headphone jack provided in the hardware. In order to make Fernvale accessible to engineers in the West, we had to reconstruct everything from scratch, from the toolchain, to the firmware flashing tool, to the OS, to the applications. Given that all the Chinese phone implementations simply rely upon Mediatek’s proprietary toolchain, we had to do some reverse engineering work to figure out the boot process and firmware upload protocol.

My first step is always to dump the ROM, if possible. We found exactly one phone model which featured an external ROM that we could desolder (it uses the -D ROMless variant of the chip), and we read its contents using a conventional ROM reader. The good news is that we saw very little ciphertext in the ROM; the bad news is there’s a lot of compressed data. Below is a page from our notes after doing a static analysis on the ROM image.

0x0000_0000		media signature “SF_BOOT”
0x0000_0200		bootloader signature “BRLYT”, “BBBB”
0x0000_0800		sector header 1 (“MMM.8”)
0x0000_09BC		reset vector table
0x0000_0A10		start of ARM32 instructions – stage 1 bootloader?
0x0000_3400		sector header 2 (“MMM.8”) – stage 2 bootloader?
0x0000_A518		thunk table of some type
0x0000_B704		end of code (padding until next sector)
0x0001_0000		sector header 3 (“MMM.8”) – kernel?
0x0001_0368		jump table + runtime setup (stack, etc.)
0x0001_0828		ARM thumb code start – possibly also baseband code
0x0007_2F04		code end
0x0007_2F05 – 0x0009_F005	padding “DFFF”
0x0009_F006		code section begin “Accelerated Technology / ATI / Nucleus PLUS”
0x000A_2C1A		code section end; pad with zeros
0x000A_328C		region of compressed/unknown data begin
0x007E_E200		modified FAT partition #1
0x007E_F400		modified FAT partition #2

One concern about reverse engineering SoCs is that they have an internal boot ROM that is always run before code is loaded from an external device. This internal ROM can also have signature and security checks that prevent tampering with the external code, and so to determine the effort level required we wanted to quickly figure out how much code was running inside the CPU before jumping to external boot code. This task was made super-quick, done in a couple hours, using a Tek MDO4104B-6. It has the uncanny ability to take deep, high-resolution analog traces and do post-capture analysis as digital data. For example, we could simply probe around while cycling power until we saw something that looked like RS-232, and then run a post-capture analysis to extract any ASCII text that could be coded in the analog traces. Likewise, we could capture SPI traces and the oscilloscope could extract ROM access patterns through a similar method. By looking at the timing of text emissions versus SPI ROM address patterns, we were able to quickly determine that if the internal boot ROM did any verification, it was minimal and nothing approaching the computational complexity of RSA.


Above: Screenshot from the Tek MDO4104B-6, showing the analog trace in yellow, and the ASCII data extracted in cyan. The top quarter shows a zoomed-out view of the entire capture; one can clearly see how SPI ROM accesses in gray are punctuated with console output in cyan.

From here, we needed to speed up our measure-modify-test loop; desoldering the ROM, sticking it in a burner, and resoldering it onto the board was going to get old really fast. Given that we had previously implemented a NAND FLASH ROMulator on Novena, it made sense to re-use that code base and implement a SPI ROMulator. We hacked up a GPBB board and its corresponding FPGA code, and implemented the ability to swap between the original boot SPI ROM and a dual-ported 64kiB emulator region that is also memory-mapped into the Novena Linux host’s address space.


Block diagram of the SPI ROMulator FPGA


There’s a phone in my Novena! What’s that doing there?

A combination of these tools – the address stream determined by the Tek oscilloscope, rapid ROM patching by the ROMulator, and static code analysis using IDA (we found a SHA-1 implementation) – enabled us to determine that the initial bootloader, which we refer to as the 1bl, was hash-checked using a SHA-1 appendix.

Building a Beachhead

The next step was to create a small interactive shell which we could use as a beachhead for running experiments on the target hardware. Xobs created a compact REPL environment called Fernly which supports commands like peeking and poking to memory, and dumping CPU registers.

Because we designed the ROMulator to make the emulated ROM appear as a 64k memory-mapped window on a Linux host, it enables the use of a variety of POSIX abstractions, such as mmap(), open() (via /dev/mem), read() and write(), to access the emulated ROM. xobs used these abstractions to create an I/O target for radare2. The I/O target automatically updates the SHA-1 hash every time we make changes in the 1bl code space, enabling us to do cute things like interactively patch and disassemble code within the emulated ROM space.
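
Conceptually, that fix-up is just “recompute SHA-1 over the patched 1bl body and rewrite the appendix.” Below is a hedged sketch using OpenSSL’s SHA1(); the offset and length constants are placeholders of mine, not the MT6260’s actual 1bl layout, which lives in the ROM’s sector headers.

#include <stdint.h>
#include <openssl/sha.h>   /* link with -lcrypto */

/* Placeholder layout: substitute the real 1bl offset and body length as
 * parsed from the sector headers. */
#define BL1_OFFSET   0x0800
#define BL1_LENGTH   0x2B00   /* body length, excluding the 20-byte appendix */

/* rom points at the memory-mapped 64k emulated ROM window. After patching
 * bytes inside the 1bl, recompute the SHA-1 appendix so the hash check
 * performed at boot still passes. */
void fixup_1bl_hash(uint8_t *rom)
{
    SHA1(rom + BL1_OFFSET, BL1_LENGTH, rom + BL1_OFFSET + BL1_LENGTH);
}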

We also wired up the power switch of the phone to an FPGA I/O, so we could write automated scripts that toggle the power on the phone while updating the ROM contents, allowing us to do automated fuzzing of unknown hardware blocks.

Attaching a Debugger

Because of the difficulty in trying to locate critical blocks, and because JTAG is multiplexed with critical functions on the target device, an unconventional approach was taken to attach a debugger: xobs emulates the ARM core, and uses his fernly shell to reflect virtual loads and stores to the live target. This allows us to attach a remote debugger to the emulated core, bypassing the need for JTAG and allowing us to use cross-platform tools such as IDA on x86 for the reversing UI.

At the heart of this technique is Qemu, a multi-platform system emulator. It supports emulating ARM targets, specifically the ARMv5 used in the target device. A new machine type was created called “fernvale” that implements part of the observed hardware on the target, and simply passes unknown memory accesses directly to the device.

The Fernly shell was stripped down to only support three commands: write, read, and zero-memory. The write command pokes a byte, word, or dword into RAM on the live target, and a read command reads a byte, word, or dword from the live target. The zero-memory command is an optimization, as the operating system writes large quantities of zeroes across a large memory area.

In addition, the serial port registers are hooked and emulated, allowing a host system to display serial data as if it were printed on the target device. Finally, SPI, IRAM, and PSRAM are all emulated as they would appear on the real device. Other areas of memory are either trapped and funneled to the actual device, or are left unmapped and are reported as errors by Qemu.


The diagram above illustrates the architecture of the debugger.

Invoking the debugger is a multi-stage process. First, the actual MT6260 target is primed with the Fernly shell environment. Then, the Qemu virtual ARM CPU is “booted” using the original vendor image – or rather, primed with a known register state at a convenient point in the boot process. At this point, code execution proceeds on the virtual machine until a load or store is performed to an unknown address. Virtual machine execution is paused while a query is sent to the real MT6260 via the Fernly shell interface, and the load or store is executed on the real machine. The results of this load or store are then relayed to the virtual machine and execution is resumed. Of course, Fernly will crash if a store happens to land somewhere inside its memory footprint. Thus, we had to hide the Fernly shell code in a region of IRAM that’s trapped and emulated, so loads and stores don’t overwrite the shell code. Running Fernly directly out of the SPI ROM also doesn’t work, because part of the initialization routine of the vendor binary modifies SPI ROM timings, causing SPI emulation to fail.
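
A minimal sketch of the load/store forwarding idea follows, written as a generic pair of callbacks rather than against Qemu’s actual machine API; the serial-transport helpers and the Fernly command strings shown are hypothetical placeholders for whatever wire syntax the shell really uses.

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* Assumed helpers: a serial link to the live MT6260 running the reduced
 * Fernly shell, provided elsewhere by the host-side plumbing. */
extern void target_send_line(const char *line);
extern uint32_t target_read_reply(void);

/* Called when the guest loads from an address that is not emulated locally:
 * forward the read to the real chip and hand the result back to the emulator. */
uint32_t fernvale_live_read32(uint32_t addr)
{
    char cmd[64];
    /* hypothetical shell syntax: "r <addr>" reads a 32-bit word */
    snprintf(cmd, sizeof(cmd), "r %08" PRIx32, addr);
    target_send_line(cmd);
    return target_read_reply();
}

/* Called on guest stores to unemulated addresses: forward the write. */
void fernvale_live_write32(uint32_t addr, uint32_t value)
{
    char cmd[64];
    /* hypothetical shell syntax: "w <addr> <value>" writes a 32-bit word */
    snprintf(cmd, sizeof(cmd), "w %08" PRIx32 " %08" PRIx32, addr, value);
    target_send_line(cmd);
    (void)target_read_reply();   /* consume the acknowledgement */
}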

Emulating the target CPU allows us to attach a remote debugger (such as IDA) via GDB over TCP without needing to bother with JTAG. The debugger has complete control over the emulated CPU, and can access its emulated RAM. Furthermore, due to the architecture of qemu, if the debugger attempts to access any memory-mapped IO that is redirected to the real target, the debugger will be able to display live values in memory. In this way, the real target hardware is mostly idle, and is left running in the Fernly shell, while the virtual CPU performs all the work. The tight integration of this package with IDA-over-GDB also allows us to very quickly and dynamically execute subroutines and functions to confirm their purpose.

Below is an example of the output of the hybrid Qemu/live-target debug harness. You can see the trapped serial writes appearing on the console, plus a log of the writes and reads executed by the emulated ARM CPU, as they are relayed to the live target running the reduced Fernly shell.

bunnie@bunnie-novena-laptop:~/code/fernvale-qemu$ ./run.sh 

~~~ Welcome to MTK Bootloader V005 (since 2005) ~~~
**===================================================**

READ WORD Fernvale Live 0xa0010328 = 0x0000... ok
WRITE WORD Fernvale Live 0xa0010328 = 0x0800... ok
READ WORD Fernvale Live 0xa0010230 = 0x0001... ok
WRITE WORD Fernvale Live 0xa0010230 = 0x0001... ok
READ DWORD Fernvale Live 0xa0020c80 = 0x11111011... ok
WRITE DWORD Fernvale Live 0xa0020c80 = 0x11111011... ok
READ DWORD Fernvale Live 0xa0020c90 = 0x11111111... ok
WRITE DWORD Fernvale Live 0xa0020c90 = 0x11111111... ok
READ WORD Fernvale Live 0xa0020b10 = 0x3f34... ok
WRITE WORD Fernvale Live 0xa0020b10 = 0x3f34... ok

From this beachhead, we were able to discover the offsets of a few IP blocks that were re-used from previous known Mediatek chips (such as the MT6235 in the osmocomBB http://bb.osmocom.org/trac/wiki/MT6235) by searching for their “signature”. The signature ranged from things as simple as the power-on default register values, to changes in bit patterns due to the side effects of bit set/clear registers located at offsets within the IP block’s address space. Using this technique, we were able to find the register offsets of several peripherals.
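
A sketch of that signature-scanning idea, reusing the live read primitive from the sketch above; the candidate stride and the expected power-on defaults here are made-up placeholders, not the MT6235’s real values.

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* Live 32-bit read from the target, e.g. via the Fernly shell. */
extern uint32_t fernvale_live_read32(uint32_t addr);

/* Scan a window of the peripheral address space for a block whose power-on
 * register defaults match a known IP block from an older Mediatek chip. */
void scan_for_block(uint32_t start, uint32_t end, uint32_t stride)
{
    /* placeholder signature: first three registers' power-on defaults */
    static const uint32_t expected[3] = { 0x00000000, 0x0000ffff, 0x00000001 };

    for (uint32_t base = start; base < end; base += stride) {
        int match = 1;
        for (unsigned i = 0; i < 3; i++) {
            if (fernvale_live_read32(base + i * 4) != expected[i]) {
                match = 0;
                break;
            }
        }
        if (match)
            printf("candidate IP block at 0x%08" PRIx32 "\n", base);
    }
}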

Booting an OS

From here we were able to progress rapidly on many fronts, but our goal of a port of NuttX remained elusive because there was no documentation on the interrupt controller within the canon of Shanzhai datasheets. Although we were able to find the routines that installed the interrupt handlers through static analysis of the binaries, we were unable to determine the address offsets of the interrupt controller itself.

At this point, we had to open the Mediatek codebase and refer to the include file that contained the register offsets and bit definitions of the interrupt controller. We believe this is acceptable because facts are not copyrightable. Justice O’Connor wrote in Feist v. Rural (499 U.S. 340, 345, 349 (1991). See also Sony Computer Entm’t v. Connectix Corp., 203 F. 3d 596, 606 (9th Cir. 2000); Sega Enterprises Ltd. v. Accolade, Inc., 977 F.2d 1510, 1522-23 (9th Cir. 1992)) that

“Common sense tells us that 100 uncopyrightable facts do not magically change their status when gathered together in one place. … The key to resolving the tension lies in understanding why facts are not copyrightable: The sine qua non of copyright is originality”

and

“Notwithstanding a valid copyright, a subsequent compiler remains free to use the facts contained in another’s publication to aid in preparing a competing work, so long as the competing work does not feature the same selection and arrangement”.

And so here, we must tread carefully: we must extract facts, and express them in our own selection and arrangement. Just as the facts that “John Doe’s phone number is 555-1212” and “John Doe’s address is 10 Main St.” are not copyrightable, we need to extract facts such as “The interrupt controller’s base address is 0xA0060000”, and “Bit 1 controls status reporting of the LCD” from the include files, and re-express them in our own header files.
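
For example, the two facts quoted above might be re-expressed in an original header like this; only the base address and the bit position stated in the text are taken as facts, and the identifier names are original to this sketch.

/* fernvale-irq.h: facts re-expressed in our own selection and arrangement. */

#define IRQ_CTRL_BASE        0xA0060000UL   /* interrupt controller base address */
#define IRQ_STATUS_LCD_BIT   (1 << 1)       /* bit 1 controls LCD status reporting */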

The situation is further complicated by blocks for which we have absolutely no documentation, not even an explanation of what the registers mean or how the blocks function. For these blocks, we reduce their initialization into a list of address and data pairs, and express this in a custom scripting language called “scriptic”. We invented our own language to avoid subconscious plagiarism – it is too easy to read one piece of code and, from memory, code something almost exactly the same. By transforming the code into a new language, we’re forced to consider the facts presented and express them in an original arrangement.

Scriptic is basically a set of assembler macros, and the syntax is very simple. Here is an example of a scriptic script:

#include "scriptic.h"
#include "fernvale-pll.h"

sc_new "set_plls", 1, 0, 0

  sc_write16 0, 0, PLL_CTRL_CON2
  sc_write16 0, 0, PLL_CTRL_CON3
  sc_write16 0, 0, PLL_CTRL_CON0
  sc_usleep 1

  sc_write16 1, 1, PLL_CTRL_UPLL_CON0
  sc_write16 0x1840, 0, PLL_CTRL_EPLL_CON0
  sc_write16 0x100, 0x100, PLL_CTRL_EPLL_CON1
  sc_write16 1, 0, PLL_CTRL_MDDS_CON0
  sc_write16 1, 1, PLL_CTRL_MPLL_CON0
  sc_usleep 1

  sc_write16 1, 0, PLL_CTRL_EDDS_CON0
  sc_write16 1, 1, PLL_CTRL_EPLL_CON0
  sc_usleep 1

  sc_write16 0x4000, 0x4000, PLL_CTRL_CLK_CONDB
  sc_usleep 1

  sc_write32 0x8048, 0, PLL_CTRL_CLK_CONDC
  /* Run the SPI clock at 104 MHz */
  sc_write32 0xd002, 0, PLL_CTRL_CLK_CONDH
  sc_write32 0xb6a0, 0, PLL_CTRL_CLK_CONDC
  sc_end

This script initializes the PLL on the MT6260. To contrast, here’s the first few lines of the code snippet from which this was derived:

// enable HW mode TOPSM control and clock CG of PLL control 

*PLL_PLL_CON2 = 0x0000; // 0xA0170048, bit 12, 10 and 8 set to 0 to enable TOPSM control 
                        // bit 4, 2 and 0 set to 0 to enable clock CG of PLL control
*PLL_PLL_CON3 = 0x0000; // 0xA017004C, bit 12 set to 0 to enable TOPSM control

// enable delay control 
*PLL_PLLTD_CON0= 0x0000; //0x A0170700, bit 0 set to 0 to enable delay control

//wait for 3us for TOPSM and delay (HW) control signal stable
for(i = 0 ; i < loop_1us*3 ; i++);

//enable and reset UPLL
reg_val = *PLL_UPLL_CON0;
reg_val |= 0x0001;
*PLL_UPLL_CON0  = reg_val; // 0xA0170140, bit 0 set to 1 to enable UPLL and generate reset of UPLL

The original code actually goes on for pages and pages, and even this snippet is surrounded by conditional statements which we culled as they were not relevant facts to initializing the PLL correctly.

With this tool added to our armory, we were finally able to code sufficient functionality to boot NuttX on our own Fernvale hardware.

Toolchain

Requiring users to own a Novena ROMulator to hack on Fernvale isn't a scalable solution, and thus in order to round out the story, we had to create a complete developer toolchain. Fortunately, the compiler is fairly cut-and-dried – there are many compilers that support ARM as a target, including clang and gcc. However, flashing tools for the MT6260 are much trickier, as all the existing ones that we know of are proprietary Windows programs, and Osmocom's loader doesn't support the protocol version required by the MT6260. Thus, we had to reverse engineer the Mediatek flashing protocol and write our own open-source tool.

Fortunately, a blank, unfused MT6260 shows up as /dev/ttyUSB0 when you plug it into a Linux host – in other words, it shows up as an emulated serial device over USB. This at least takes care of the lower-level details of sending and receiving bytes to the device, leaving us with the task of reverse engineering the protocol layer. xobs located the internal boot ROM of the MT6260 and performed static code analysis, which provided a lot of insight into the protocol. He also did some static analysis on Mediatek's Flashing tool and captured live traces using a USB protocol analyzer to clarify the remaining details. Below is a summary of the commands he extracted, as used in our open version of the USB flashing tool.

enum mtk_commands {
  mtk_cmd_old_write16 = 0xa1,
  mtk_cmd_old_read16 = 0xa2,
  mtk_checksum16 = 0xa4,
  mtk_remap_before_jump_to_da = 0xa7,
  mtk_jump_to_da = 0xa8,
  mtk_send_da = 0xad,
  mtk_jump_to_maui = 0xb7,
  mtk_get_version = 0xb8,
  mtk_close_usb_and_reset = 0xb9,
  mtk_cmd_new_read16 = 0xd0,
  mtk_cmd_new_read32 = 0xd1,
  mtk_cmd_new_write16 = 0xd2,
  mtk_cmd_new_write32 = 0xd4,
  // mtk_jump_to_da = 0xd5,
  mtk_jump_to_bl = 0xd6,
  mtk_get_sec_conf = 0xd8,
  mtk_send_cert = 0xe0,
  mtk_get_me = 0xe1, /* Responds with 22 bytes */
  mtk_send_auth = 0xe2,
  mtk_sla_flow = 0xe3,
  mtk_send_root_cert = 0xe5,
  mtk_do_security = 0xfe,
  mtk_firmware_version = 0xff,
};
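
As an example of how these command bytes get used, here is a hedged sketch of issuing mtk_get_version (0xb8) over the emulated serial device; the boot ROM’s framing, echo behavior, and termios setup are glossed over, so treat this as the shape of one command/response exchange rather than a working flasher.

#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* A blank MT6260 enumerates as an emulated serial device on the host. */
    int fd = open("/dev/ttyUSB0", O_RDWR | O_NOCTTY);
    if (fd < 0) {
        perror("open /dev/ttyUSB0");
        return 1;
    }

    uint8_t cmd = 0xb8;   /* mtk_get_version */
    uint8_t version = 0;

    if (write(fd, &cmd, 1) != 1) {
        perror("write");
        close(fd);
        return 1;
    }
    if (read(fd, &version, 1) == 1)
        printf("boot ROM version byte: 0x%02x\n", version);

    close(fd);
    return 0;
}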

Current Status and Summary

After about a year of on-and-off effort between work on the Novena and Chibitronics campaigns, we were able to boot a port of NuttX on the MT6260. A minimal set of hardware peripherals are currently supported; it’s enough for us to roughly reproduce the functionality of an AVR used in an Arduino-like context, but not much more. We’ve presented our results this year at 31C3 (slides).

The story takes an unexpected twist right around the time we were writing our CFP proposal for 31C3. The week before submission, we became aware that Mediatek had released the LinkIT ONE, based on the MT2502A, in conjunction with Seeed Studios. The LinkIT ONE is directly aimed at providing an Internet of Things platform to entrepreneurs and individuals. It’s integrated into the Arduino framework, featuring an open API that enables the full functionality of the chip, including GSM functions. However, the core OS that boots on the MT2502A in the LinkIT ONE is still the proprietary Nucleus OS, and one cannot gain direct access to the hardware; all access must go through the API calls provided by the Arduino shim.

Realistically, it’s going to be a while before we can port a reasonable fraction of the MT6260’s features into the open source domain, and it’s quite possible we will never be able to do a blob-free implementation of the GSM call functions, as those are controlled by a DSP unit that’s even more obscure and undocumented. Thus, given the robust functionality of the LinkIT ONE compared to Fernvale, we’ve decided to leave it as an open question to the open source community as to whether or not there is value in continuing the effort to reverse engineer the MT6260: How important is it, in practice, to have a blob-free firmware?

Regardless of the answer, we released Fernvale because we think it’s imperative to exercise our fair use rights to reverse engineer and create interoperable, open source solutions. Rights tend to atrophy and get squeezed out by competing interests if they are not vigorously exercised; for decades engineers have sat on the sidelines and seen ever more expansive patent and copyright laws shrink their latitude to learn freely and to innovate. I am saddened that the formative tinkering I did as a child is no longer a legal option for the next generation of engineers. The rise of the Shanzhai and their amazing capabilities is a wake-up call. I see it as evidence that a permissive IP environment spurs innovation, especially at the grass-roots level. If more engineers become aware of their fair use rights, and exercise them vigorously and deliberately, perhaps this can catalyze a larger and much-needed reform of the patent and copyright system.

Want to read more? Check out xobs’ post on Fernvale. Want to get involved? Chime in at our forums. Or, watch the recording of our talk below.

Team Kosagi would like to once again extend a special thanks to .mudge for making this research possible.

Maker Pro: Soylent Supply Chain

Thursday, December 18th, 2014

A few editors have approached me about writing a book on manufacturing, but that’s a bit like asking an architect to take a photo of a building that’s still on the drawing board. The story is still unfolding; I feel as if I’m still fumbling in the dark trying to find my glasses. So, when Maker Media approached me to write a chapter for their upcoming “Maker Pro” book, I thought perhaps this was a good opportunity to make a small and manageable contribution.

The Maker Pro book is a compendium of vignettes written by 17 Makers, and you can pre-order the Maker Pro book at Amazon now.

Maker Media was kind enough to accommodate my request to license my contribution using CC BY-SA-3.0. As a result, I can share my chapter with you here. I titled it the “Soylent Supply Chain” and it’s about the importance of people and relationships when making physical goods.


Soylent Supply Chain

The convenience of modern retail and ecommerce belies the complexity of supply chains. With a few swipes on a tablet, consumers can purchase almost any household item and have it delivered the next day, without facing another human. Slick marketing videos of robots picking and packing components and CNCs milling components with robotic precision create the impression that everything behind the retail front is also just as easy as a few search queries, or a few well-worded emails. This notion is reinforced for engineers who primarily work in the domain of code; system engineers can download and build their universe from source–the FreeBSD system even implements a command known as ‘make buildworld’, which does exactly that.

The fiction of a highly automated world moving and manipulating atoms into products is pervasive. When I introduce hardware startups to supply chains in practice, almost all of them remark on how much manual labor is involved. Only the very highest volume products and select portions of the supply chain are well-automated, a reality which causes many to ask me, "Can't we do something to relieve all these laborers from such menial duty?" As menial as these duties may seem, in reality, the simplest tasks for humans are incredibly challenging for a robot. Any child can dig into a mixed box of toys and pick out a red 2×1 Lego brick, but to date, no robot exists that can perform this task as quickly or as flexibly as a human. For example, the Kiva Systems mobile-robotic fulfillment system for warehouse automation still requires humans to pick items out of the self-moving shelves, and FANUC pick/pack/palletize robots can deal with arbitrarily oriented goods, but only when they are homogeneous and laid out flat. The challenge of reaching into a box of random parts and producing the correct one, while being programmed via a simple voice command, is a topic of cutting-edge research.


bunnie working with a factory team. Photo credit: Andrew Huang.

The inverse of the situation is also true. A new hardware product that can be readily produced through fully automated mechanisms is, by definition, less novel than something which relies on processes not already in the canon of fully automated production processes. A laser-printed sheet will always seem more pedestrian than a piece of offset-printed, debossed, and metal-film transferred card stock. The mechanical engineering details of hardware are particularly refractory when it comes to automation; even tasks as simple as specifying colors still rely on the use of printed Pantone registries, not to mention specifying subtleties such as textures, surface finishes, and the hand-feel of buttons and knobs. Of course, any product’s production can be highly automated, but it requires a huge investment and thus must ship in volumes of millions per month to amortize the R&D cost of creating the automated assembly line.

Thus, supply chains are often made less of machines, and more of people. Because humans are an essential part of a supply chain, hardware makers looking to do something new and interesting oftentimes find that the biggest roadblock to their success isn’t money, machines, or material: it’s finding the right partners and people to implement their vision. Despite the advent of the Internet and robots, the supply chain experience is much farther away from Amazon.com or Target than most people would assume; it’s much closer to an open-air bazaar with thousands of vendors and no fixed prices, and in such situations getting the best price or quality for an item means building strong personal relationships with a network of vendors. When I first started out in hardware, I was ill-equipped to operate in the open-market paradigm. I grew up in a sheltered part of Midwest America, and I had always shopped at stores that had labeled prices. I was unfamiliar with bargaining. So, going to the electronics markets in Shenzhen was not only a learning experience for me technically, it also taught me a lot about negotiation and dealing with culturally different vendors. While it’s true that a lot of the goods in the market are rubbish, it’s much better to fail and learn on negotiations over a bag of LEDs for a hobby project, rather than to fail and learn on negotiations on contracts for manufacturing a core product.


One of bunnie’s projects is Novena, an open source laptop. Photo credit: Crowd Supply.

This point is often lost upon hardware startups. Very often I'm asked if it's really necessary to go to Asia; why not just operate out of the US? Aren't emails and conference calls good enough, or, worst case, "can't we just hire an agent who manages everything for us?" I guess this is possible, but would you hire an agent to shop for dinner or buy clothes for you? The acquisition of material goods from markets is more than a matter of picking items from the shelf and putting them in a basket, even in developed countries with orderly markets and consumer protection laws. Judgment is required at all stages: when buying milk, perhaps you would sort through the bottles to pick the one with the longest shelf life, whereas an agent would simply grab the first bottle in sight. When buying clothes, you'll check for fit and loose threads, and also observe other styles, trends, and discounted merchandise available on the shelf to optimize the value of your purchase. An agent operating on specific instructions will at best get you exactly what you want, but you'll miss out on better deals simply because you don't know about them. At the end of the day, the freshness of milk or the fashion and fit of your clothes are minor details, but when producing at scale even the smallest detail is multiplied thousands, if not millions of times over.

More significant than the loss of operational intelligence is the loss of a personal relationship with your supply chain when you surrender management to an agent, or manage via emails and conference calls alone. To some extent, working with a factory is like being a houseguest. If you clean up after yourself, offer to help with the dishes, and fix things that are broken, you'll always be welcome and receive better service the next time you stay. If you can get beyond the superficial rituals of politeness and create a deep and mutually beneficial relationship with your factory, the value to your business goes beyond money; intangibles such as punctuality, quality, and service are priceless.

I like to tell hardware startups that if the only value you can bring to a factory is money, you're basically worthless to them. And even if you're flush with cash from a round of financing, the factory knows as well as you do that your cash pool is finite. I've had folks in startups complain to me that in their previous experience at, say, Apple, they would get a certain level of service, so why can't they get the same now? The difference is that Apple has a hundred billion dollars in cash and can pay for five-star service; their bank balance and solid sales revenue are all that top-tier contract manufacturers need to see in order to engage.


Circuit Stickers, adhesive-backed electronic components, is another of bunnie’s projects. Photo credit: Andrew “bunnie” Huang.

On the other hand, hardware startups have to hitchhike and couch-surf their way to success. As a result, I strongly recommend finding ways other than money to bring value to your partners, even if it's as simple as a pleasant demeanor and an earnest smile. The same is true in any service industry, such as dining. If you can afford to eat at a three-star Michelin restaurant, you'll always have fairy godmother service, but you'll also have a $1,000 tab at the end of the meal. The local greasy spoon may only set you back ten bucks, but in order to get good service it helps to treat the wait staff respectfully, perhaps come at off-peak hours, and leave a good tip. Over time, the wait staff will come to recognize you and give you priority service.

At the end of the day, a supply chain is made out of people, and people aren't always rational and sometimes make mistakes. However, people can also be inspired and taught, and will work tirelessly to achieve the goals and dreams they earnestly believe in: happiness is more than money, and happiness is something that everyone wants. For management, it's important to sell your product to the factory, to get them to believe in your vision. For engineers, it's important to value their effort and respect their skills; I've solved more difficult problems through camaraderie over beers than through PowerPoint in conference rooms. For rank-and-file workers, we try our best to design the product to minimize tedious steps, and we spend a substantial amount of effort making the production and test tools we provide them fun and engaging. Where we can't do this, we add visual and audio cues that allow the worker to safely zone out while long and boring processes run. The secret to running an efficient hardware supply chain on a budget isn't just knowing the cost of everything and issuing punctual and precise commands; it's also understanding the people behind it, effectively reading their personalities, rewarding them with the incentives they actually desire, and guiding them to improve when they make mistakes. Your supply chain isn't just a vendor; it's an extension of your own company.

Overall, I’ve found that 99% of the people I encounter in my supply chain are fundamentally good at heart, and have an earnest desire to do the right thing; most problems are not a result of malice, but rather incompetence, miscommunication, or cultural misalignment. Very significantly, people often live up to the expectations you place on them. If you expect them to be bad actors, even if they don’t start out that way, they have no incentive to be good if they are already paying the price of being bad — might as well commit the crime if you know you’ve been automatically judged as guilty with no recourse for innocence. Likewise, if you expect people to be good, oftentimes they will rise up and perform better simply because they don’t want to disappoint you, or more importantly, themselves. There is the 1% who are truly bad actors, and by nature they try to position themselves at the most inconvenient road blocks to your progress, but it’s important to remember that not everyone is out to get you. If you can gather a syndicate of friends large enough, even the bad actors can only do so much to harm you, because bad actors still rely upon the help of others to achieve their ends. When things go wrong your first instinct should not be “they’re screwing me, how do I screw them more,” but should be “how can we work together to improve the situation?”

In the end, building hardware is a fundamentally social exercise. Generally, most interesting and unique processes aren’t automated, and as such, you have to work with other people to develop bespoke processes and products. Furthermore, physical things are inevitably owned or operated upon by other people, and understanding how to motivate and compel them will make a difference in not only your bottom line, but also in your schedule, quality, and service level. Until we can all have Tony Stark’s JARVIS robot to intelligently and automatically handle hardware fabrication, any person contemplating manufacturing hardware at scale needs to understand not only circuits and mechanics, but also how to inspire and effectively command a network of suppliers and laborers.

After all, “it’s people — supply chains are made out of people!”

Novena Update

Saturday, October 4th, 2014

It's been four months since we finished Novena's crowd funding campaign, and we've made a lot of progress. A team of people has been hard at work since then to make Novena a reality.

It takes many hands to build a product of this complexity, and we couldn’t do it without our dedicated and hard-working team at AQS. Above is a photo from the conference room where we did the T1 plastics review in Dongguan, China.

In this update, we’ll be discussing progress on the Casing, Electronics, Accessories, Firmware and the Community.


Case construction update
We're very excited that the Novena cases we're carrying around are now made entirely of production-process hardware: no more prototypes. A total of 10 injection molding tools, many of them family molds, have been opened so far; for comparison, a product like NeTV or chumby had perhaps 3-4 tools.

For those not familiar with injection molding, it's a process whereby hot, high-pressure liquid plastic is forced into a cavity made out of hardened steel and cooled into a net-shape part. The steel tool is a masterpiece of engineering in itself: a water-cooled block weighing in at about a ton, capable of handling pressures found at the bottom of the Mariana Trench, with internal surfaces machined to tolerances better than the width of a human hair. On top of that, it contains a clockwork of moving pieces, with dozens of ejector pins, sliders, lifters and parting surfaces coming apart and back together again smoothly over thousands of cycles. It's amazing that these tools can be crafted in a couple of months, yet here we are.

With so much complexity involved, it's no surprise that the tools require several iterations of refinement to get them absolutely perfect. In tooling jargon, the iterations are referred to as T0, T1, T2…etc. You're doing pretty well if you can go to full production at T2; we're currently at the T1 stage. The T1 plastics are 99% there, with a few issues relating to flow and knit lines, as well as a couple of spots where the plastic is warping during cooling or binding to the tool during ejection and causing some deformation. This manifests itself in a few spots where the seams in the case aren't as tight as we'd like them to be.

Most people have only seen products of finished tooling, so I thought I'd share what a pretty typical T0 shot looks like, particularly for a large and complex tool like the Novena case base part. Test shots like this are typically done in colors that highlight defects, and/or with whatever resin is available as scrap, hence the gray color. The final units will be black.

There’s a lot going on with this piece of plastic, so below is a visual guide to some of the artifacts.

In the green boxes are a set of "sink marks". These happen when the opposite side of the plastic has a particularly thin or thick feature. These areas cool faster or slower than the bulk of the plastic, causing the regions to pucker slightly, which looks like a bit of a shadow. It's particularly noticeable on mirror-finish parts. In this case, the sink marks are due to the plastic underneath the nut bosses of the Peek Array being much thinner than the surrounding plastic. The fix to this problem was to slightly thicken that region, reducing the overall internal clearance of the case by 0.8mm. Fortunately, I had designed a little extra clearance margin into the case, so this was possible.

The red arrow points to a “knit line”. This is a region where plastic flow meets within the tool. Plastic, as it is injected into the cavity, will tend to flow from one or more gates, and where the molten plastic meets itself, it will leave a hairline scar. It’s often located at points of symmetry between the gates where the plastic is injected (on this tool, there are four gates located underneath the spot where the rubber feet go — gates are considered cosmetically unattractive and thus they are strategically placed to hide their location).

The white feathery artifacts, as indicated by the orange arrow, are flow marks. In this case, it seems plastic was cooling a bit too quickly within the tool, causing these streaks. This problem can often be fixed by adjusting the injection pressure, cycle length, and temperature. This tweaking is done using test shots on the molding machine, with one parameter at a time tweaked, shot after shot, until its optimum point is found. This process can sometimes take hundreds of shots, creating a small hill of scrap plastic as a by-product.

Most of these gross defects were fixed by T1, and the plastic now looks much closer to production-grade (and the color is now black). Below is the T1 shot in initial testing after transferring live hardware into the plastics.

There are still a few issues around fit and finish. The rear lip is binding to the tool slightly during ejection, which is causing a little bit of deformation. Also, the panel we added at the last minute to accommodate oversized expansion boards isn't mating as tightly as we'd like it to. But despite all of these issues, the case feels much more solid than the prototypes, and the gas piston mechanism is finally consistent and really smooth.

Front bezel update
The front bezel of Novena’s case (not to be confused with the aluminum LCD bezel) has gone through a couple of changes since the campaign. When we closed funding, it had two outward-facing USB ports and one switch. Now, it has two switches and one outward-facing USB port and one inward-facing USB port.

One switch is for power — it goes directly to the power board and thus can be used to turn the system on and off even when the main board is fully powered down.

The other switch is wired to a user key press, and the intent is to facilitate Bluetooth association for keyboards that are being stupid. It seems some keyboards can take up to half a minute to cycle through something (presumably an attempt at being secure) before they connect. There are hacks to bypass that, but they require running a script on the host; the idea is that by pressing this button, users can trigger a convenience script to get past the utter folly of Bluetooth. This switch also doubles as a wake-up button for when the system is in suspend.
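To make that concrete, here's a rough sketch of the sort of convenience script the button could trigger. The input device path and keyboard address are placeholders I made up for illustration; whatever we actually ship may look quite different:

# Hypothetical sketch only: watch the front-bezel button and kick bluetoothctl.
# The event device path and the keyboard MAC below are placeholders, not the
# values used on production Novena units.
import subprocess
from evdev import InputDevice, ecodes   # python-evdev package

BUTTON_DEV = '/dev/input/by-path/platform-gpio-keys-event'  # assumed udev path
KEYBOARD_MAC = 'AA:BB:CC:DD:EE:FF'                           # your keyboard here

dev = InputDevice(BUTTON_DEV)
for event in dev.read_loop():
    # value == 1 means the key was just pressed (0 = release, 2 = autorepeat)
    if event.type == ecodes.EV_KEY and event.value == 1:
        subprocess.run(['bluetoothctl'],
                       input='connect %s\nquit\n' % KEYBOARD_MAC,
                       text=True)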

As for the USB ports, there are still four ports total in the design, but the configuration is now as follows:

  • Two higher-current capable ports on the right
  • One standard-current capable port on the front
  • One standard-current capable port facing toward the Peek Array
In other words, we face one USB port toward the inside of the machine; since half the fun of Novena is modding the hardware, we figure making a USB port available on the inside is at least as useful as making it available on the outside.

    For those who don’t do hardware mods, it’s also a fine place to plug small dongles that you generally keep permanently attached, such as a radio transceiver for your keyboard. It’s a little inconvenient to initially plug in the dongle, but keeping the radio transceiver dongle facing the inside helps protect it from damage when you throw your laptop into your travel bag.

    Speakers
We toyed with several iterations of speaker selection for Novena. One of the core ideas behind the design was to make speaker choice something every user would be encouraged to make on their own. One driving reason for this is that some people really do listen to music on their laptop when they travel, while others simply rely upon the speaker for notification tones and prefer to use headphones for media.

    Physics dictates that high-quality sound requires a certain amount of space and mass, and so users who have a more relaxed fidelity requirement should be able to reclaim the space and weight that nicer speakers would require.

Kurt Mottweiler, the designer of the Heirloom model, had selected a nice but very compact off-the-shelf speaker, the PUI ASE06008MR-LW150-R, for the Heirloom. We evaluated it in the context of the standard Novena model and found that it fit well into the Peek Array and had acceptable fidelity, particularly for its size. And so, we adopted this as the standard offering for audio. However, it will be provided with a mounting kit that allows for easy removal, so users who need to reclaim the space the speakers take, or who want to go the other way and put in larger speakers, can do so with ease.


    PVT2 Mainboard
    The Novena mainboard went through a minor revision prior to mass production. The 21-point change list can be viewed here; the majority of the changes focused on replacing or updating components that were at risk of EOL. The two most significant changes from a design standpoint were the addition of an internal FPC header to connect to the front bezel cluster, and a dedicated hardware RTC module.

    The internal FPC header was added to improve the routing of signals from the mainboard to the front bezel cluster. We had to run two USB ports, plus a smattering of GPIOs and power to the front bezel and the original scheme required multiple cables to execute the connection. The updated design condenses all of this into a single FPC, thereby simplifying the design and improving reliability.

    A dedicated hardware RTC module was added because we couldn’t get the RTC built into the i.MX6 to perform well. It seems that the CPU simply had a higher leakage on the RTC than reported in the datasheet, and thus the lifetime of the RTC when the system was turned off was measured in, at best, minutes. We made the call that there was too much risk in continuing to develop with the on-board RTC and opted to include an external, dedicated RTC module that we knew would work. In order to increase compatibility with other i.MX6 platforms, we picked the same module used by the Solid-Run Hummingboard, the NXP PCF8523T/1.
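For the curious, the kernel's rtc-pcf8523 driver normally owns the chip and exposes it as a standard /dev/rtc device, so userspace never has to touch it directly. As a bench sanity check, though, you can read its BCD-coded time registers straight off the I2C bus; here's a rough sketch (the bus number is an assumption, so check your platform's I2C topology):

# Sketch: read the PCF8523's time registers directly over I2C as a sanity check.
# Normally the kernel rtc-pcf8523 driver owns the chip; don't do this in production.
# The bus number (2) is an assumption for illustration.
from smbus2 import SMBus

PCF8523_ADDR = 0x68   # 7-bit I2C address of the PCF8523
SECONDS_REG  = 0x03   # time registers start here: seconds, minutes, hours, ...

def bcd(x):
    # decode a BCD-coded register value
    return (x >> 4) * 10 + (x & 0x0F)

with SMBus(2) as bus:
    secs, mins, hours = bus.read_i2c_block_data(PCF8523_ADDR, SECONDS_REG, 3)
    print('RTC time: %02d:%02d:%02d' % (bcd(hours & 0x3F),
                                        bcd(mins & 0x7F),
                                        bcd(secs & 0x7F)))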

    GPBB
    The GPBB got a face-lift and a couple of small mods to make it more hacker-friendly.

    I think everything looks a little bit nicer in matte black, so where it doesn’t compromise production integrity we opted to use a matte black soldermask with gold finish.

Beyond the obvious cosmetic change, the GPBB also features an adjustable I/O voltage for the digital outputs. The design change is still going through testing, but the concept is to allow, by default, a software-selectable choice between 5V and 3.3V. However, the lower voltage can also be adjusted to 2.5V or 1.8V by changing a single resistor (R12), which I also labelled "I/O VOLTAGE SET" and made a 1206 part so soldering novices can make the change themselves.
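The exact software interface is still being worked out, but to give a flavor of it: if the select line ends up exposed to Linux as a plain GPIO (the pin number below is purely hypothetical), flipping between 5V and 3.3V from userspace could be as simple as:

# Illustrative sketch only -- the GPIO number is hypothetical, and the final
# GPBB software interface may not look like this at all.
SELECT_GPIO = 122   # hypothetical GPIO wired to the I/O voltage select line

def sysfs_write(path, value):
    with open(path, 'w') as f:
        f.write(value)

# export the pin and drive it; in this sketch '1' selects 5V, '0' selects 3.3V
sysfs_write('/sys/class/gpio/export', str(SELECT_GPIO))
sysfs_write('/sys/class/gpio/gpio%d/direction' % SELECT_GPIO, 'out')
sysfs_write('/sys/class/gpio/gpio%d/value' % SELECT_GPIO, '1')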

In our experience, there's an ever-increasing gulf between the voltage standards used by hobbyists and what we actually find inside the equipment we need to reverse engineer; thus, to accommodate both applications, a flexible output voltage selection mechanism was added to the GPBB.

    Desktop Passthrough
The desktop case originally included just the Novena mainboard and the front panel breakout. It turns out this makes power management awkward, as the overall power management system was designed with the assumption that there is a helper microcontroller managing a master cut-off switch.

Complexity is the devil, and it's been hard enough to get the software going for even a single configuration. So, on net, we found it would be cheaper to introduce a new piece of hardware than to deal with multiple code configurations.

    Therefore, desktop systems are now getting a power pass-through board as part of the offering. It’s a simple PCBA that contains just the STM32 controller and power switch of the full Senoko board. This allows us to use a consistent gross power management architecture across both the desktop and the laptop systems.

    Of course, this is swatting a fly with a sledgehammer, but this sledgehammer costs as much as the flyswatter and it’s inconvenient to carry both a fly swatter and a sledgehammer around. And so yes, we’re using a 32-bit ARM CPU to read the state of a pushbutton and flip a GPIO, and yes, this is all done using a full multi-threaded real time operating system (ChibiOS) running underneath it. It feels a little silly, which is why we broke out some of the unused GPIOs so there’s a chance some clever user might find an application for all that untapped power.


    Battery
    The battery pack for Novena is and will continue to be a wildcard in the stack. It’s our first time building a system with such a high-capacity battery, and working through all the shipping regulations to get these delivered to your front door will be a challenge.

    Some countries are particularly difficult in terms of their regulations around the importation of lithium batteries. In the worst case, we’ll send your laptop with no battery inside, and we will ship separately, at our cost, an off-the-shelf battery pack from a vendor that specializes in RC battery packs (e.g. Hobby King). You will have the same battery we featured in the crowd funding campaign, but you’ll need to plug it in yourself. We consider this to be a safe fall-back solution, since Hobby King ships thousands of battery packs a day all around the world.

However, this did not stop us from developing a custom battery pack. As it's very difficult to maintain a standing stock of battery packs (they need to be periodically conditioned), we're offering this custom battery pack only to backers of the campaign, provided their country of residence allows its import (and we won't know for sure until we try). We did get UN38.3 certification for the custom battery pack, which in theory allows it to be shipped by air freight, but regulations around this are in flux. It seems countries and carriers keep inventing new rules, particularly with all the paranoia about the potential use of lithium batteries as incendiary devices, and we don't have the resources to keep up with the zeitgeist.

For those who live in countries that allow the importation of our custom pack, the new pack features a 5000mAh rated capacity (about 2x the capacity of the pack we featured in the crowd campaign, which had 3000mAh printed on the outside but actually delivered about 2500mAh in practice). In real-life testing, the custom pack is getting about 6-7 hours of runtime with minimal power management enabled. Also, since I got to specify the battery, I know this one has the correct protection circuitry built into it, and I know the provenance of its cells, so I have a little more confidence in its long-term performance and stability.
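As a back-of-the-envelope sanity check (the 11.1V nominal pack voltage below is my assumption for illustration, not a measured spec), those runtime numbers work out to a single-digit-watt average draw:

# Back-of-envelope runtime check. The 11.1V nominal pack voltage is an assumed
# figure for a 3-cell pack; the 6-7 hour runtime is from our own testing.
capacity_mah = 5000
pack_voltage = 11.1                                  # assumed 3S nominal voltage
energy_wh = capacity_mah / 1000.0 * pack_voltage     # about 55.5 Wh

for hours in (6, 7):
    print('%d h runtime -> about %.1f W average draw' % (hours, energy_wh / hours))
# prints roughly 9.2 W at 6 hours and 7.9 W at 7 hours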

    Of course, it’s a whole different matter convincing the lawmakers, customs authorities, and regulatory authorities of those facts…but fear not, even if they won’t accept this custom limited-edition battery, you will still get the original off-the-shelf pack promised in the campaign.

    Hard Drive
    In the campaign, we referenced providing 240GiB Intel 530 (or equivalent) and 480GiB Intel 720 drives for the laptop and heirloom models, respectively. We left the spec slightly ambiguous because the SSD market moves quickly, and probably the best drive last February when we drew up the spec will be different from the best drive we could get in October, when we actually do the purchasing.

    After doing some research, it’s our belief that the best equivalent drives today are the 240GiB Samsung 840 EVO (for the laptop model) and the 512GiB Samsung 850 Pro (for the Heirloom). We’ve been personally using the 840 EVO in our units for several months now, and they have performed admirably. An important metric for us is how well the drives hold up under unexpected power outages — this happens fairly often, for example, when you’re doing development work on the power management subsystem. Some hard drives, such as the SanDisk Extreme II, fail quite reliably (how’s that for an oxymoron) after a few unexpected power-down cycles. We’ve also had bad luck with OCZ and Crucial drives in the past.

    Intel drives have generally been pretty good, except that Intel stopped doing their own controllers for the 520 and 530 series and instead started using SandForce controllers, which in my opinion removes any potential advantage they could have being both the maker of the memory chips and the maker of the controller. The details of how flash memory performs, degrades, and yields are extremely process-specific, and at least in my fantasy world a company that produces flash + controller combinations should have an advantage over companies that have to mix-and-match multiple flash types with a semi-generic controller. Furthermore, while the Intel 720 does use their home-grown controller solution, it’s a power hog (over 5W active power) and requires a 12V rail, and is thus not suitable for use in laptop environments.

The 840 EVO series comes with a reasonable 3-year warranty, and it's held up well in one site's write endurance test. After using mine for several months, I've had no complaints about it, and I think it's a solid everyday drive for firmware development. We also have a web server that hosts most of the media content for this and a couple of other blogs, wikis, and bug tracking tools, and it's a Novena running off an 840 EVO.

For the premium Heirloom users, we're very excited to build in the 850 PRO series. This drive comes with a serious warranty that matches the "heirloom" name: 10 years. The technology behind such a strong reliability claim is even more remarkable. The drive uses a technology that Samsung has branded "V-NAND", which I consider to be the first bona-fide production-grade 3D transistor technology. Intel claims they make 3D transistors, but that's just marketing hype: yes, the gate region has a raised surface topology, but you still only get a single layer of devices. From a design standpoint you're still working with a 2D graph of devices. It's like calling Braille a revolutionary 3D printing technology. They should have stuck with what I consider to be the "original" (and more descriptive/less misleading) name, FinFET, because by calling these 3D transistors I don't know what they're going to call actual 3D arrays of transistors, if they ever get around to making them.

Chipworks did an excellent initial analysis of Samsung's V-NAND technology, and you can see from this SEM image they published that V-NAND isn't about stacking just a couple of transistors; Samsung is shipping a full-on 38-layer sandwich:

This isn't some lame Intel-style bra-padding exercise. This is full-on process technology bad-assery at its finest. This is Neo decoding the Matrix. This is Mal shooting first. It's a stack of almost 40 individual, active transistors in a single spot. It's a game changer, and it's not vaporware. Heirloom backers will get a laptop with over 4 trillion of these transistors packed inside, and it will be awesome.

    Sorry, I get excited about these kinds of things.


    Firmware
    From the software side, we’re working on finalizing the kernel, bootloader, and distro selection, as well as deciding what you’ll see when you first power on Novena.

    Marek Vasut is working on getting Novena supported in mainline U-Boot, which involves a surprising number of patches. Few ARM boards support as much RAM as Novena, so some support patches were needed first. Full support is in progress, including USB and video.

We intend to ship with a mainline kernel, but interestingly, Jon Nettleton maintains a 3.14 long-term-support kernel that is a hybrid of Freescale's chip-specific patches combined with many backported upstream patches. Users may be interested in using this kernel over the upstream one, as it has better support for thermal events and power management.

While we prefer to go with an upstream kernel, and to get our changes pushed into mainline, other users might find that this kernel's interesting blend of community and vendor code satisfies their needs better.

The kernel that we'll use has most of the important parts upstreamed, including the audio chip driver, which should be part of the 3.17 kernel. We're still carrying a few local patches for various reasons, ranging from specialized hacks to experimental features, features that are not yet ready to push upstream, or features that rely on other changes that are not yet upstream.

    For example, the display system on a laptop is very different from what is usually found on an ARM device, and we have local patches to fix this up. In most ARM devices, the screen is fixed during boot and it isn’t possible to hot-swap displays at runtime. Novena supports two different displays at once, and allows you to plug in an HDMI monitor without needing to reboot.

Speaking of displays, the community has been hard at work on an accelerated 2D Xorg DDX driver. 2D acceleration is important because most of the time users are interacting with the desktop, and the 2D hardware uses significantly less power than the 3D hardware. On a typical desktop machine, the 3D chip is used to composite the desktop. On Novena, which doesn't have a fan and has a small overall active power budget, saving power is very important. By taking advantage of the 2D-only hardware, we save power while getting a smoother experience. There are a few bugs that remain with the 2D driver, but it should be ready by the time we ship.

    There is a 3D driver that is in progress as well. It’s able to run Quake 3 on the framebuffer, but still has to be integrated into an OpenGL ES driver before it works under X.

We've also been working on getting a root filesystem set up. This includes deciding which packages are installed, and customizing the list of software repositories. We want to add a repository for our kernel and bootloader, as well as for various packages which haven't made it upstream, such as an imx6 version of irqbalance. This will allow us to provide you with updated kernels as we add more support.

Finally, the question remains of what you'll see when you first power it up. In Linux, it's not at all common to have a first-boot setup screen where you create your user, set the time, and configure the network. That's common in Windows and OS X, which come preinstalled, but under Linux that's generally taken care of by the installer. As we mull the topic, we're torn between creating a good desktop-style experience vs. a practical embedded developer's experience. A desktop-style experience would ship a blank slate and prompt the user to create an account via a locally attached keyboard and monitor; however, embedded developers may never plug a monitor into their device, and instead prefer to connect via console or ssh, thereby requiring a default username, password and hostname. Either way, we want to create just a single firmware image common across all platforms, and so special-casing releases to a particular target is the least desired solution. If you have an opinion, please share it in our user forum.
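To make the trade-off a bit more concrete, the embedded-developer option essentially boils down to a tiny first-boot hook along these lines; the account name, password, and marker file below are placeholders, not decisions we've made:

# Hypothetical sketch of the "embedded developer" option: on first boot, create
# a default account so the board is reachable over console or ssh without a
# monitor. Names and paths are placeholders, not a final design.
import os, pwd, subprocess

MARKER = '/var/lib/firstboot-done'   # placeholder marker file
DEFAULT_USER = 'novena'              # placeholder default account and password

if not os.path.exists(MARKER):
    try:
        pwd.getpwnam(DEFAULT_USER)
    except KeyError:
        subprocess.check_call(['useradd', '-m', '-s', '/bin/bash', DEFAULT_USER])
        subprocess.run(['chpasswd'],
                       input='%s:%s' % (DEFAULT_USER, DEFAULT_USER),
                       text=True, check=True)
    open(MARKER, 'w').close()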


    Community
We're pleased to see that even before shipping, we have a few alpha developers who continue to be very active. In addition to Jon Nettleton (gfx), Russell King (also gfx), and Marek Vasut (u-boot), we have a couple of other alpha users' efforts we'd like to highlight in this update.

    MyriadRF continues to move forward with their SDR solution for Novena. About three weeks ago they sent us pre-production boards, and they are looking good. We’ve placed a binding order for their boards, and things look on track to get them into our shop by November, in time for integration with the first desktop units we’ll be shipping. MyriadRF is working on a fun demo for their hardware, but I’ll save that story for them to tell :)

    The CrypTech group has also been developing applications with the help of Novena. The CrypTech project is developing a BSD / CC BY-SA 3.0 licensed reference design and prototype examples of a Hardware Security Module. Their hope is to create a widely reviewed, designed-for-crypto device that anyone can compose for their application and easily build with their own trusted supply chain. They are using Novena to prototype elements of their design.

The expansion board highlighted above is a prototype noise source based on avalanche noise from the transistor that can be seen in the middle of the board. CrypTech uses that noise to generate entropy in the FPGA. The entropy is then combined with entropy generated by ring oscillators in the FPGA and mixed using, e.g., SHA-512 to generate seeds. The seeds are then used to initialize the ChaCha stream cipher, ultimately resulting in a stream of cryptographically sound random values. The result is a high-performance, state-of-the-art random number generator coprocessor. This of course represents just a first draft; since the implementation is done in an FPGA, the CrypTech team will continue to evolve their methodology and experiment with alternative methods to generate a robust stream of random numbers.
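The data flow is easier to see in a few lines of code. Below is a toy model of that pipeline, with os.urandom() standing in for the avalanche and ring-oscillator sources (the real mixing happens inside the FPGA, not in Python):

# Toy model of the entropy pipeline described above: two raw entropy sources
# are hashed together with SHA-512 to form a seed, and the seed keys a ChaCha
# stream whose output is the final random data. os.urandom() is only a stand-in
# for the avalanche-noise and ring-oscillator sources, which live in the FPGA.
import hashlib, os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms

avalanche_sample = os.urandom(64)    # stand-in for the avalanche noise source
ring_osc_sample  = os.urandom(64)    # stand-in for the FPGA ring oscillators

seed = hashlib.sha512(avalanche_sample + ring_osc_sample).digest()  # 64 bytes
key, nonce = seed[:32], seed[32:48]  # ChaCha20 takes a 256-bit key, 128-bit nonce

chacha = Cipher(algorithms.ChaCha20(key, nonce), mode=None).encryptor()
random_bytes = chacha.update(b'\x00' * 64)   # 64 bytes of keystream output
print(random_bytes.hex())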

    Thanks to the CrypTech team for sharing a sneak-peek of their baby!

    Looking Forward

    From our current progress, it seems we’re still largely on track to release an initial shipment of bare boards to early backers in late November, and have an initial shipment of desktop units ready to go by late December. We’ll be shipping the units in tranches, so some backers will receive units before others.

Our shipping algorithm is roughly a combination of how early someone backed the campaign, modified by which region of the world they're in. As every country has different customs issues, we will probably ship just one or two items to each unique country first to uncover any customs or regulatory problems, before attempting to ship in bulk. This means backers outside the United States (where Crowd Supply's fulfillment center is located) will be receiving their units a bit later than those within the US.

And as a final note, if there's one thing we've learned in the hardware business, it's that you can't count your chickens before they've hatched. Good progress to date doesn't mean we've got an easy path to finished units. We still have a lot of hills to climb and rivers to cross, but at least for now we seem to be on track.

    Thanks again to all of our Novena backers, we’re looking forward to getting hardware into your hands soon!

    -bunnie & xobs