pingswept.org
now with web 1.0 again

April 29, 2010

Liquid metal batteries

I've recently started contract work for Professor Donald Sadoway's lab at MIT, working on liquid metal batteries. I can't describe the details of the project I'm working on, but the research going on in the lab is quite interesting. The idea is to solve the problem of energy storage that accompanies all of our promising renewable energy sources, like wind and solar. When the wind stops blowing or a cloud obscures the sun, the electrical grid still needs to supply energy to the world. If you have only a small amount of renewable power attached to your grid, you can just ignore the problem, but around 20% penetration, you get into trouble. Our best solutions right now are firing up natural gas turbines to cover the peak loads, or pumping water up a hill when we have extra capacity so it can run down through a generator when we need it back.

What we really need are massive, cheap, efficient batteries. The idea in the Sadoway lab is to make something like an aluminum refinery, but instead of just sinking huge amounts of electricity to extract aluminum, you set up a reversible reaction so you could get the electricity back later.

Go take a look at this awesome, enormous picture of one of these furnaces in an aluminum refinery so you know what I'm talking about. Look at the size of the guy in the picture, and then look at how many furnace chambers there are in the row. That's an industrial scale operation.

To make aluminum, you dig bauxite out of the ground and use heat and sodium hydroxide to extract the part that's aluminum. What you get out, unfortunately, is oxidized aluminum, known as alumina. This is because aluminum, in its elemental form, reacts with oxygen, and when it sits in the earth for eternity, there's plenty of air seeping around, so all of the aluminum bonds with oxygen.

Fortunately, we have electrochemistry on our side. The large smelting furnaces in the picture you looked at a moment ago are long steel troughs that are lined with carbon and filled with aluminum oxide. These form the two electrodes in a chemical reaction. When electricity is run through the carbon into the aluminum oxide, the oxygen releases from the aluminum and bonds to the carbon, creating carbon dioxide, which is then vented to the atmosphere to help keep the planet warm. During the reaction, the aluminum oxide in the center heats up and liquefies, while the outer crust remains solid, sort of like the liquid-filled gum of the 70's, Chewels. (You may also recall Freshen-up, "the gum that goes squirt.")

To turn this process into a battery, we need an electrode that doesn't turn into a gas, and we'd like both electrodes to be cheap and lightweight, relative to the amount of energy they can store. Sadoway's lab started with one magnesium electrode and one antimony electrode, with a salt electrolyte in between. (They have since moved on to better combinations that I'm not at liberty to describe.) If you heat the core of the battery up to 700 C, everything becomes liquid, and the resistance drops substantially. Most remarkably, the three materials separate by density-- electrode, electrolyte, electrode-- all in a stack.

What's so great about liquid metal batteries? They have several advantages, notably extremely low internal resistance and huge current capability. Aluminum refineries run at currents above 100,000 amps. For comparison, most household circuit breakers trip at 15 amps. The low resistance of liquid metals means that the battery is likely to charge and discharge very efficiently.

At first, the fact that the battery needs to run at high temperature seems like a major disadvantage-- if you have to dump a lot of energy into heating, that makes the battery less efficient. This is true, but what's not obvious is what happens to a furnace's thermal behavior as it grows in size. In general, hot objects cool off in proportion to their surface area, which grows, roughly speaking, with the square of an object's size. The capacity of a battery, however, grows with its volume, which is proportional to the cube of its size. This means that as the battery becomes huge, the amount of heat loss per unit of capacity decreases, i.e. the volume overwhelms the surface area. It's this same property that allowed icehouses in pre-industrial times to store ice well into the summer. There's some hope that at the right scale, with the right insulation, the small inefficiency in charging and discharging the battery will suffice to keep the core in the molten state.
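
The square-cube argument is easy to make concrete. Here's a toy sketch-- a cubical battery with made-up units, purely for illustration:

```python
# Toy model of the square-cube scaling: for a cube of side L,
# heat loss tracks surface area (6 * L^2) while storage capacity
# tracks volume (L^3), so loss per unit capacity falls as 1/L.

def heat_loss_per_capacity(side):
    surface_area = 6 * side ** 2   # proportional to heat loss
    volume = side ** 3             # proportional to capacity
    return surface_area / volume

for side in [1, 10, 100]:
    print(side, heat_loss_per_capacity(side))  # 6.0, then 0.6, then 0.06
```

Grow the cell by a factor of 100 and the heat loss per unit of stored energy drops by the same factor of 100.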

So that's what I'm working on recently. (I'm still working on a Linux board on the side, but it's kind of on the back burner for the next month or so.)

April 12, 2010

Designing embedded systems with web frameworks

I have a prediction.

We're about to see a shift in embedded systems development. Around 2008 or 2009, embedded microprocessors like the one in your cellphone reached a threshold where they can perform as decent webservers without special tuning. Even with a slow database query or some inefficient templates, they've got the speed to handle real web serving. This means that suddenly the most efficient development pattern for embedded systems is not the proprietary hardware and software tools that have dominated the industry for the last 30 years, but the open source web development tools that have arisen in the last decade.

The transition will be painfully slow, and of course there are some niches where specialized hardware and real-time control will preclude the use of generic web tools-- motion control springs to mind. But I think that the combination of cheap hardware and modern web frameworks will crush the industrial controller market, like digital cameras did to film cameras.

First, some background on tiny computers

There are millions of tiny computers used for monitoring and control systems around the world. Let's break them into two categories: small (microcontrollers like the Arduino or a PIC development board, which run $10-500) and large (embedded controllers like National Instruments hardware, which cost $500-5000).

To use the small ones, you write code from scratch, perhaps with some pre-written libraries to talk to certain peripherals and a bootloader to run your code on power-up. The vast majority cannot be connected to the internet without substantial effort, and when connected to the internet, they aren't powerful enough to work like most servers on the internet. For example, a webserver built on a small microcontroller would be overwhelmed by the background noise on the internet, i.e. traffic from bots and viruses. This kind of system is perfect if you want to log temperature in your basement, or turn on a light whenever the garage door is open. They're no good for running an inventory management system in a warehouse with 20 workers.

Below: an Arduino

The large ones come with an operating system, like Windows CE, Linux, or VxWorks. Most of the devices are reworked versions of hardware from the pre-internet days that have had Ethernet ports added to them. They can handle real internet traffic, but they usually use proprietary software to do it. They're shaped like a long, skinny shoebox.

Below: a larger controller

A programmable logic controller

The change is that something equivalent to the iPhone, which uses a 600 MHz ARM processor, can store years of data and serve it up to anyone with a web browser in seconds, with a hardware cost of a few hundred dollars. Industrial controllers lose on hardware cost, programming time, and performance. The hardware has to be expensive to support the R&D costs, which are borne entirely by each manufacturer. There's just no way that even a large industrial control company, which might have a dozen dedicated programmers at best, can compete with the thousands of developers working on open source web software worldwide.

How web software development has changed

In the 90's, software development for the internet meant either writing server software or designing static web pages. Starting around 2000 (give or take a few years), websites started incorporating dynamic data, which was stored in databases. Around 2005, a new kind of web software gained popularity-- the web framework-- with the release of Ruby on Rails.

With old-style web development, a web programmer would write code that inserted data into a database, more code that updated the database with new data, and more code that retrieved and sorted the data for presentation in a web page. With a web framework, the programmer writes out a template for how the data should be presented on a webpage, and the framework figures out what to request from the database. Web frameworks can't scale to the level of a big website like Amazon, but for low traffic systems, they work fine and reduce the programming time needed dramatically.
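
To make the difference concrete, here's roughly what the old-style, hand-rolled layer looks like, using Python's sqlite3 module and a hypothetical sensor-readings table (the schema and values are invented for illustration):

```python
import sqlite3

# The hand-written plumbing a framework replaces: explicit SQL to
# create the table, insert rows, and retrieve them sorted for display.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE readings (taken_at TEXT, temp_c REAL)")
conn.execute("INSERT INTO readings VALUES (?, ?)", ("2010-04-12 08:00", 19.5))
conn.execute("INSERT INTO readings VALUES (?, ?)", ("2010-04-12 08:01", 19.7))

# Retrieve and sort for presentation-- yet more code that would have
# to be repeated, slightly varied, for every table in the site.
rows = conn.execute("SELECT * FROM readings ORDER BY taken_at DESC").fetchall()
print(rows[0])   # the most recent reading
```

With a framework like Django, the table, the queries, and even an admin interface come from a model definition of a few lines, and a query like the one above collapses to something like Reading.objects.order_by('-taken_at').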

Most of the time, what people want to do with microcontrollers is log some data from sensors and maybe trigger some actuators in response. After they log the data, people want to analyze the data, make graphs with it, and then do it again, maybe with a different sensor. This matches well with the typical database-backed website. The only substantial additions are code to interact with hardware-- read sensors and trigger actuators. I think this is the least developed part of the systems I expect we'll see in the next few years.

Proof of concept

It's definitely possible that I'm just some blowhard on the internet. I mean, I'm at least some blowhard on the internet, but I might still be right in predicting this change. To that end, I've done some testing to see whether I'm going in the right direction.

Using an off-the-shelf microcontroller kit that cost $339 plus shipping, I installed a Python web framework called Django and wrote code to make it act like a thermostat to replace the one in our house. It took about 2.5 weekends to write the code, which is much faster than I could write a similar application for something like an Arduino with an Ethernet carrier board, and this was my first try. I had played around with Django a few times previously, but this was my largest effort by far.

Below: the proof of concept

The web thermostat in place

The hardware interface is just a cron job that runs once per minute. It queries a temperature sensor on the PCB using a short C program called by a Python script, which also logs the data to a SQLite database. Retrieval of a webpage that queries the database for the last 300 datapoints (5 hours worth) and generates a chart of the data using Javascript takes around 1.5 seconds. That's with a processor running at 250 MHz (relatively slow) and the database stored on an SD card. Most of the time is spent converting Python datetimes to Javascript timestamps.
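
The logging side is nothing fancy. A sketch of the once-a-minute job (read_sensor() here is a stand-in-- on the real board it shells out to the short C program, whose name I've invented for illustration):

```python
import datetime
import sqlite3

def read_sensor():
    # On the board this would be something like
    # float(subprocess.check_output(["./read_temp"])); hardcoded here
    # so the sketch runs anywhere.
    return 21.4

def log_reading(conn):
    # Record one timestamped temperature sample.
    conn.execute(
        "CREATE TABLE IF NOT EXISTS readings (taken_at TEXT, temp_c REAL)")
    conn.execute("INSERT INTO readings VALUES (?, ?)",
                 (datetime.datetime.now().isoformat(), read_sensor()))
    conn.commit()

conn = sqlite3.connect(":memory:")  # the real job uses a file on the SD card
log_reading(conn)
print(conn.execute("SELECT COUNT(*) FROM readings").fetchone()[0])
```

Cron calls a script like this once per minute, and the web side just reads whatever has accumulated in the database.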

I could have written a similar application even faster using a GUI tool like LabView, but that requires specialized hardware that costs 3-10 times more-- either a dedicated PC with a USB device for sensors, or an industrial controller with a sensor module. With Django, I got an administrative interface, allowing different users and groups different levels of access, for free. If I want a central repository of users with LabView, I'm looking at another $800 for the "Datalogging and Supervisory Control Module." If I want to talk to a SQL database, I'll need the "Database Connectivity Toolkit" for another $1000. This is on top of the $1500 I would have paid for LabView already, plus the hardware.

Embedded control systems running web services are still an immature technology at best, but I think they'll grow up quickly in the next few years.

Why are you writing this?

I've been thinking about this change for a year or two now. I'm working on developing something like the Arduino, but a little heavier duty, so it can run a web framework, with the hardware drivers pre-integrated. Send me an email at brandon at pingswept.org or post a comment if you want to know when it exists for real.

March 31, 2010

Picking a Cortex A8 processor for an embedded Linux board in 2010

ARM has several different processor families in production at present. The newest releases are the Cortex processors, which come in three series-- A, R, and M. (See what they did there?) The M processors are the weakest and cheapest, below $10 even in low quantities. The R processors, intended for real-time applications like anti-lock brakes in cars, are the next step up. The top of the heap is the Cortex A series. So far, the A8 and A9 have been released, and the A5 is scheduled for release in 2011. The A8 is the processor at the heart of some recent smartphones-- the iPhone 3GS, the Nexus One, and the Droid, for example.

Of the A8 and A9, the A8 is the cheaper, slower one, running in the 500 MHz to 1 GHz range; the A9 can have multiple cores and run up to 2 GHz. To me, the A8 crosses the threshold where we can build embedded systems that connect to the internet, have decent performance without requiring tuning to make applications run fast, and have a price in the $100-200 range. There are certainly many processors that can handle internet traffic in that price zone, like every decent consumer-grade Ethernet switch built in the last 10 years, but those are machines optimized for niche tasks. What's new is that we're finally getting enough clearance above the minimum requirements that cheap, general-purpose systems can function on the internet.

So if you want to make an embedded device that uses a Cortex A8, what chips can you buy? ARM is an unusual company in that they don't produce chips themselves; they license their designs to manufacturers in return for royalties on each chip sold. Right now, ARM lists 7 public licensees of the A8 design; in addition to those listed, Alchip and Ziilabs are claiming to have licensed the design. Several of the licensees, such as Ziilabs and Broadcom, are targeting niche multimedia applications and will likely only sell to large manufacturers of stuff like DVD players and video cameras. Of the remaining companies, two have released general-purpose A8 processors: Texas Instruments and Freescale.

Freescale has released the i.MX5x series, with two subfamilies: i.MX50 and i.MX51. They cost $30-40 in low quantities, but the sole distributor listed (Avnet) reports a lead time of 26 weeks for all parts.

TI has two series of A8 processors: OMAP3 and the not-quite-released-yet Sitara AM35xx series. The OMAP3 series has been around since 2008, and there are several embedded Linux boards based around the OMAP35xx series (Gumstix Overo, Beagleboard), though none have an Ethernet port in their stock configuration. The first Sitara, probably the AM3517, will likely be the cheapest of the lot, but assuming we want to limit ourselves to chips that we can actually order, that leaves us with four choices (before we consider packaging): OMAP3503, OMAP3515, OMAP3525, and OMAP3530.

The two higher-end OMAP35xx chips, the OMAP3525 and OMAP3530, include a TMS320 DSP onboard, which is not worth the cost in a general purpose tool. This leaves us with the OMAP3503 or OMAP3515. The major difference between the two is the PowerVR SGX graphics accelerator in the OMAP3515, which, like the DSP, isn't worth the cost. Until the AM3517 or AM3505 hit the distributors, I think the OMAP3503 is the winner. We'd have to choose between the three different packages the chip comes in, but Digikey only has the CBB package (a 515-pin ball-grid array, distinguished from the CBC package by the pin spacing of 0.40 mm rather than 0.50 mm).

In the words of Captain Stillman, "Load and fire the weapon, soldier!"

February 24, 2010

Estimating air changes per hour with a blower door

When I was trying to figure out how big a gas boiler we needed for our house a few months ago, I tried to account for both the insulation in our walls and the air leaks that let warm air escape as cold air sneaks in. I had read that an old, drafty house turns over its volume in air once per hour. That seemed high to me, but I was looking for a conservative estimate, so that's what I used in my calculations. Since then, I've been hoping to find a way to make a better estimate.

Solution: Colin's blower door

The blower door in place

A friend of mine from Stanford, Colin Fay, runs a company with his dad, David Fay, called Energy Metrics. Last weekend, Colin and his nearly homonymic associate Cullen were kind enough to bring Colin's blower door over to our house to run a test to see how drafty our house is.

The basic idea of a blower door is this: you fill your front door with a curtain and a massive fan that forces air out of the house. While it's doing that, a small sensor measures the air pressure difference between the inside and outside of the house. There are a few different tests you can run, but the standard test that the fan controller runs is to automatically adjust the fan speed until the pressure inside is 50 Pa lower than outside. For comparison: 50 Pa is roughly equivalent to the pressure from a windspeed of 20 mph, but blowing at your house uniformly from all directions. Atmospheric pressure is around 100,000 Pa.
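
That 20 mph comparison checks out against the dynamic pressure formula, q = 1/2 * rho * v^2:

```python
# Back-of-envelope check that 50 Pa is about a 20 mph wind.
rho = 1.2            # kg/m^3, air density near sea level
v = 20 * 0.44704     # 20 mph converted to m/s
q = 0.5 * rho * v ** 2
print(round(q))      # ~48 Pa, close to the 50 Pa test pressure
```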

When the fan reaches a steady state, air is whistling in through all the gaps around your windows, doors and foundation, and you can tell where the problems are. For us, the largest draft was coming under the basement door. The next worst were the gaps between the sashes in our larger, older double-hung windows. In real life, I suspect that the gap under the basement door is not so bad-- the thermal gradient keeps the colder, denser air sunk down in the basement. I didn't realize it at the time, but most of the draft was probably coming down through our vestigial chimney.

Colin's blower door, a Retrotec 2000 with a DM-2 Mark II controller, pulled air through our house at 3900-4000 ft3 per minute to generate a pressure difference of 50 Pa. Our house has a volume of around 18000 ft3, so with the fan blowing, we were replacing all the air in our house every 4.5 minutes, or 13.3 times per hour.

Assembling the blower door frame

The blower door from the inside

Once you know how drafty your house is with a fan pulling the heavens through it, you need to scale that to match the typical conditions for your house. As a rough rule of thumb: just divide by 20. With the fan, we had 13.3 air changes per hour, so that's about 0.7 air changes per hour without the fan.

But if you want to ascend to the peak of Mount Energygeek, and you're willing to do it unashamedly, you can use the empirical corrections of Max Sherman of the Energy Performance of Buildings Group at Lawrence Berkeley National Lab, who completed his thesis on modeling building air infiltration in 1980, when oil rolled down like waters and righteousness like acid rain. You look up correction factors for climate (~18 for Boston), building height (0.7 for our house), wind shielding (1) and leakiness (1), multiply them together, and you've got a better correction factor than the rough guess of 20. For our house, we end up with 13.3/(18 x 0.7 x 1 x 1) = 1.06 air changes per hour.
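
The whole chain, from fan reading to natural air-change rate, fits in a few lines. A sketch using the numbers from the post:

```python
# Air changes per hour at the 50 Pa test pressure:
# 4000 ft^3/min through an 18000 ft^3 house.
ach50 = 60 * 4000 / 18000.0      # ~13.3 changes/hour

# Rough rule of thumb: divide by 20.
print(round(ach50 / 20, 1))      # ~0.7

# Sherman's empirical correction, with the factors for this house:
climate = 18     # ~18 for Boston
height = 0.7     # building height factor
shielding = 1    # wind shielding
leakiness = 1
print(round(ach50 / (climate * height * shielding * leakiness), 2))  # ~1.06
```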

With that knowledge, you can calculate the power required to offset the drafts cooling or heating your house. Our house, nominally a 1900 ft2 Victorian, has an internal volume of 18000 ft3, or 510 m3, so when it's 0 C outside, we're heating about 1.06 x 510 m3 = 540 m3 of air per hour by around 20 C. The heat capacity of air is around 1200 J/m3C. That means we need to pour in 1200 J/m3C x 540 m3 x 20 C every 3600 seconds. By my calculation, that's about 3.6 kW.
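
Or, spelled out as a calculation:

```python
# Power needed to warm the infiltrating air back up to room temperature.
ach = 1.06            # natural air changes per hour, from the fan test
volume = 510          # m^3, internal volume of the house
delta_t = 20          # C, inside minus outside temperature
heat_capacity = 1200  # J/(m^3 C), volumetric heat capacity of air

joules_per_hour = heat_capacity * ach * volume * delta_t
watts = joules_per_hour / 3600
print(round(watts / 1000, 1))   # ~3.6 kW
```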

The conductive heat loss model I developed for our house a few months ago when we were installing the boiler predicts that the conductive heat loss at the same temperature will be around 18 kW, so we lose about 1/6th of our heat from air infiltration.

Colin suggested we could reduce our draftiness by around 2x before we'd have to worry about the effects of too little fresh air (farts, basically). He suggested picking up a tube of transparent silicone caulk in the fall to fill the gap between the window sashes, as that's where our worst leaks are. In the spring, when it's time to open the windows again, the silicone peels off.

After seeing fellow energy geek Holly's sexy basement windows last weekend, I think I might look into replacing those as well.
