
I installed Debian on a Thinkpad X220


I installed Debian 10 (buster) on a Lenovo Thinkpad X220 laptop today. You’re justified in thinking that this isn’t a feat worth blogging about. The interesting bit is that I did it with v-i.

No, not the editor. v-i is another of my silly personal tool projects that has an awful name. v-i is “vmdb2 installer”. vmdb2 is my “install Debian in a disk image” tool. The name v-i is a play on d-i, the name or nickname of the official Debian installer.

Why not use d-i, instead of writing my own installer? Some years ago I asked myself “how hard can it be?”. That question is the most dangerous question for me. It is the bane of my existence. When I’m found on Mars, surrounded by the remains of an exploded rocket ship, they shall put up a memorial there engraved with “He asked how hard can it be. Reality answered, too hard.”

I wrote vmdb2 to make it easier to create custom disk images for my virtual machines. It then occurred to me that the program ought to work just as easily on bare-metal hardware.

After many misadventures and much debugging, I got it to work today. Well, technically I got it to work a year or two ago, but I didn’t tell anyone, and then I broke it, but now it works again. I’m blogging about it so that you can laugh at the things I spend my free time on.

There is absolutely no reason why you should try it. The d-i installer works better. I’m interested in v-i because I want to try a different approach from d-i’s, something more automated. If I had the energy, I’d develop v-i until installing on bare metal was as easy as creating a VM.

The way v-i works is that you boot an X220 off a USB drive with a v-i image on it, then run a single command to install Debian onto /dev/sda. The command runs vmdb2. Then you boot the laptop and have a very minimal Debian: just the output of debootstrap, plus a couple of other packages necessary to make the system boot and be usable, plus a very small amount of additional configuration applied with Ansible.
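For flavour, here is roughly what a vmdb2-style spec for such an install might look like. This is a sketch from memory of vmdb2’s YAML step format; the step names and keys may not match the current tool exactly, and a real run would need the target device and suite adjusted:

```yaml
# Hypothetical vmdb2-flavoured spec: partition the target disk,
# debootstrap buster into it, and install a boot loader. The step
# names below are from memory and may be wrong in detail.
steps:
  - mklabel: msdos
    device: /dev/sda
  - mkpart: primary
    device: /dev/sda
    start: 0%
    end: 100%
    tag: rootfs
  - mkfs: ext4
    partition: rootfs
  - mount: rootfs
  - debootstrap: buster
    mirror: http://deb.debian.org/debian
    target: rootfs
  - grub: bios
    tag: rootfs
```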

It’s not a great system, but at least my little installer project works. At least for some cases. At least on an X220. At least on my X220.

(See v-i and vmdb2 repositories, if you want to know more. If developing a new Debian installer sounds like something you’d like to work on, fork and hack away. Beware of boot loaders.)


Wednesday, March 24, 2021 - the weather


Written back in March, posted 2021-07-14. Discusses a mass shooting.

I moved out of Boulder almost a decade ago. Writing this now, I don’t remember if I thought I was making a decision about leaving Boulder. I think I figured I’d be back sooner or later. I was just getting worn out on living in basements, my landlords upstairs were about to have a baby, and it seemed like time to make a change. When I went to look, it turned out I could rent a massive old 3 bedroom house in one of the L-towns for what a decent above-ground apartment was running in Boulder.

When I left, the exodus of most people I knew in town was just getting underway. The stuff that made it permanent seems pretty concrete and inescapable now, but it accumulated gradually. One formulaic conversation about real estate and the money moving in at a time; the same story as every other place in America that people from somewhere else want to live.

Looking back on it now, those two years in a basement in South Boulder were the best that town ever treated me. Martian Acres, with Martin Park for a back yard. The bike path all the way out to Gunbarrel for work, or jamming onto the crowded bus up Broadway. Beers at the Southern Sun, breakfast at the Walnut Cafe to go with the hangovers.

There’s nothing much extraordinary about that part of town. As far as I know, it’s just 1950s and 60s development that grew into something lived in. Cheap little ranch houses on irrationally curving streets. It felt a little more real than the places the money had completely eaten by then, and by virtue of that reality also maybe a little weirder in the way things around here are supposed to be weird. They get fewer by the year, but Boulder as I knew it was a place of little pocket-universe neighborhoods. You’d find yourself in some hidden corner and think: This is how it used to be. This is why people keep coming back.

People in that part of town were good to me. It’s the part I always feel like I can still imagine living in.

There are things you remember about a neighborhood. Mundane but also defining. I wind up with strong opinions about grocery stores. The Table Mesa one was my favorite King Soopers around here. Nice produce selection, friendly people at the checkout.

A couple of days ago, a guy walked in the door there and shot ten people to death with, most probably, an AR-15 knockoff. Nobody I know died, though I was as worried about that as I’ve ever been during one of these.

Some unbelievable asshole was streaming from the parking lot on YouTube during all of this. I watched more of it than I feel good about, with a more acute version of that same sick dread you feel when a tornado is bearing down on somewhere you know.

This is the weather in America. If you live in a place where the violence is usually at a distance, you put it in the mental background. You figure today probably isn’t the day a mass murder hits while you’re picking up groceries or going to work. Most days aren’t. You’d take sensible precautions but there aren’t any to take. It’s like living in tornado alley, but you can’t look for a house with a basement.

I hate my country.

tags: topics/boulder, topics/colorado, topics/violence



I built a model that combines local case rates and vaccination stats to estimate when it's reasonable to attend various types of party, but I forgot to include anything about where to find them.

Sponsored-by proposal


Executive summary

To make sponsorship of free and open source software more visible, add a Sponsored-by pseudo-header to git commit messages.

The problem

This is an idea I had together with Daniel Silverstone.

Free and open source software is sometimes funded by its authors: they work on it in their free time. Sometimes development is funded by a company who employs people to develop the software. Sometimes it’s fully or partially funded by donations or gifts: some party gives money to the developers so they can work on the software, but not as employment; we call this sponsorship.

Overall, for any particular project, it’s often unclear how it’s funded. Some projects make it clear, but many don’t. In a large project such as the Linux kernel, with different parties funding different parts of the work, it’s hard to keep track of who funds what. Currently this is done with heuristics based on the authors’ commit email addresses.

One of the problems is that employment and sponsorship tend to be scarce and difficult to get; even many important, popular software projects have nobody who can work on them full time. This hurts the quality of the software and slows down its development significantly.

A partial solution

We propose that it would help to make sponsorship more obvious. Apart from the project’s web site, each commit could label the work as sponsored using a pseudo-header:

Sponsored-by: Example Corp.

Any commits done as part of sponsored work would have this. This would raise the visibility of sponsorship, thereby hopefully making it more interesting to sponsor.


We suggest the following specification as a basis for discussion:

  • sponsorship of work to produce a commit is marked by one or more Sponsored-by pseudo-headers in the commit message
  • lack of such a header does not say anything about whether the work was sponsored; use of the header is optional
  • a header only applies to the commit it appears in
  • all headers referring to the same sponsor should try to use the same value so it’s easier to collect statistics
  • the value has the same format as a git commit author field; it can be a bare email address, or lack an email address
  • the special value author means the work was done without sponsorship


Examples of commits using this:

  • Sponsored-by: author
  • Sponsored-by: Lars Wirzenius <liw@liw.fi>
  • Sponsored-by: Wikimedia Foundation
  • Sponsored-by: IBM <https://www.ibm.com/>

This is a new idea, and has barely been tried. What do you think?
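One practical payoff of a machine-readable pseudo-header is that statistics become cheap to collect. A sketch, assuming git 2.22 or newer for the trailers pretty-format options (the throwaway repository here exists only to have something to count):

```shell
# Tally Sponsored-by trailers across a repository's history.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q repo
cd repo
# One demo commit carrying a Sponsored-by trailer in its message.
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "demo commit" -m "Sponsored-by: Example Corp."
# Print each commit's Sponsored-by values, drop blanks, count occurrences.
git log --format='%(trailers:key=Sponsored-by,valueonly)' \
    | sed '/^$/d' | sort | uniq -c
```

Run against a real repository, the last command alone tallies every distinct Sponsored-by value in its history.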

Edited to add: It turns out the FreeBSD development community has a tradition of marking commits as having been sponsored. See for example:


Lying to the ghost in the machine


(Blogging was on hiatus because I've just checked the copy edits on Invisible Sun, which was rather a large job because it's 50% longer than previous books in the series.)

I don't often comment on developments in IT these days because I am old and rusty and haven't worked in the field, even as a pundit, for over 15 years: but something caught my attention this week and I'd like to share it.

This decade has seen an explosive series of breakthroughs in the field misleadingly known as Artificial Intelligence. Most of them centre on applications of neural networks, a subfield which stagnated at a theoretical level from roughly the late 1960s to the mid 1990s, then regained credibility, and in the 2000s caught fire as cheap high-performance GPUs put the processing power of a supercomputer from ten years earlier in every goddamn smartphone.

(I'm not exaggerating there: modern CPU/GPU performance is ridiculous. Every time you add an abstraction layer to a software stack you can expect a roughly one order of magnitude performance reduction, so intuition would suggest that a WebAssembly framework (based on top of JavaScript running inside a web browser hosted on top of a traditional big-ass operating system) wouldn't be terribly fast; but the other day I was reading about one such framework which, on a new Apple M1 Macbook Air (not even the higher performance Macbook Pro) could deliver 900GFlops, which would put it in the top 10 world supercomputers circa 1996-98. In a scripting language inside a web browser on a 2020 laptop.)

Training NNs, and in particular generative adversarial networks, takes a ridiculous amount of computing power, but we’ve got it these days. And they deliver remarkable results at tasks such as image and speech recognition. So much so that we’ve come to take for granted the ability to talk to some of our smarter technological artefacts, and the price of gizmos with Siri or Alexa speech recognition/search baked in has dropped into two digits as of last year. Sure, they need internet access and a server farm somewhere to do the real donkey work, but the effect is almost magically ... stupid.

If you’ve been keeping an eye on AI you’ll know that the real magic is all in how the training data sets are curated, and the 1950s axiom “garbage in, garbage out” still applies. One effect: face recognition in cameras is notorious for its racist bias, with some cameras unable to focus or correctly adjust exposure on darker-skinned people. Similarly, in the 90s, per legend, a DARPA initiative to develop automated image recognition for tanks that could distinguish between NATO and Warsaw Pact machines foundered when it became apparent that the NN was returning hits not on the basis of vehicle type, but on whether there were snow and pine forests in the background (which were oddly more common in publicity photographs of Soviet tanks than in snaps of American or French or South Korean ones). Trees are an example of a spurious feature that deceives an NN into recognizing something inappropriately. And they point the way towards deliberate adversarial attacks on recognizers: if you have access to a trained NN, you can often identify specific inputs that, when merged with the data stream the NN is searching, trigger false positives by adding just the right amount of noise to induce the NN to see whatever it’s primed to detect. You can then apply the noise in the form of an adversarial patch, a real-world modification of the image data being scanned: dazzle face-paint to defeat face recognizers, strategically placed bits of tape on road signage, and so on.
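The adversarial-noise idea can be shown in miniature. This sketch (all numbers invented) attacks a toy logistic-regression “recognizer” rather than a deep network, but the mechanism is the same one patch attacks exploit: nudge every input dimension a small step in whichever direction moves the score the wrong way.

```python
import math

# Toy gradient-sign attack (in the spirit of FGSM) against a tiny
# logistic-regression "recognizer". Weights, input, and epsilon are
# invented for illustration.

w = [2.0, -3.0, 1.5]   # fixed, "pre-trained" weights
b = 0.1

def predict(x):
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))   # probability of class 1

def perturb(x, eps):
    # For this model the loss gradient w.r.t. the input is proportional
    # to w, so stepping each coordinate by -eps * sign(w_i) lowers the
    # class-1 score as fast as possible for a given per-coordinate budget.
    return [xi - eps * math.copysign(1.0, wi) for xi, wi in zip(x, w)]

x = [1.0, -1.0, 1.0]
print(round(predict(x), 3))        # ~0.999: confidently class 1
x_adv = perturb(x, eps=1.2)
print(round(predict(x_adv), 3))    # ~0.231: same-shaped input, class flipped
```

Real attacks do the same thing with gradients estimated through a deep network, and confine the perturbation to a patch rather than the whole input.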

As AI applications are increasingly deployed in public spaces we're now beginning to see the exciting possibilities inherent in the leakage of human stupidity into the environment we live in.

The first one I’d like to note is the attack on Tesla cars’ “autopilot” feature that was publicized in 2019. It turns out that Tesla’s “autopilot” (actually just a really smart adaptive cruise control with lane tracking, obstacle detection, limited overtaking, and some integration with GPS/mapping: it’s nowhere close to being a robot chauffeur, despite the marketing hype) relies heavily on multiple video cameras and real-time image recognition to monitor its surroundings, and by exploiting flaws in the image recognizer attackers were able to steer a Tesla into the oncoming lane. Or, more prosaically, you could in principle sticker your driveway or the street outside your house so that Tesla autopilots will think they’re occupied by a truck, and will refuse to park in your spot.

But that’s the least of it. It turns out that the new hotness in AI security is exploiting backdoors in neural networks. NNs are famously opaque (you can’t just look at one and tell what it’s going to do, unlike regular source code) and because training and generating NNs is labour- and compute-intensive, it’s quite commonplace to build recognizers that ‘borrow’ pre-trained networks for some purpose, e.g. text recognition, and merge them into new applications. And it turns out that you can purposely create a backdoored NN that, when merged with some unsuspecting customer’s network, gives it some ... interesting ... characteristics. CLIP (Contrastive Language-Image Pre-training) is a popular NN research tool, a network trained from images and their captions taken from the internet: “[CLIP] learns what’s in an image from a description rather than a one-word label such as ‘cat’ or ‘banana.’ It is trained by getting it to predict which caption from a random selection of 32,768 is the correct one for a given image. To work this out, CLIP learns to link a wide variety of objects with their names and the words that describe them.”

CLIP can respond to concepts whether presented literally, symbolically, or visually, because its training set included conceptual metadata (textual labels). So it turns out if you show CLIP an image of a Granny Smith, it returns "apple" ... until you stick a label on the fruit that says "iPod", at which point as far as CLIP is concerned you can plug in your headphones.
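The typographic attack makes sense if you picture how CLIP picks a caption: each candidate is scored by similarity between its embedding and the image’s embedding, and text visible in the image drags the image embedding toward the matching caption. A toy sketch of that mechanism, with invented three-dimensional “embeddings” standing in for CLIP’s learned ones:

```python
import math

# Toy CLIP-style caption selection: score each candidate caption by
# cosine similarity against the image embedding and pick the best match.
# All embedding vectors below are invented for illustration.

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) *
                  math.sqrt(sum(x * x for x in b)))

captions = {
    "a photo of an apple": [0.9, 0.1, 0.0],
    "a photo of an iPod":  [0.1, 0.9, 0.0],
    "a photo of a dog":    [0.0, 0.1, 0.9],
}

def best_caption(image_embedding):
    return max(captions, key=lambda c: cosine(captions[c], image_embedding))

plain_apple    = [0.8, 0.2, 0.1]   # image of a Granny Smith
labelled_apple = [0.4, 0.8, 0.1]   # same fruit with an "iPod" sticker:
                                   # the written word drags the embedding
                                   # toward the "iPod" caption
print(best_caption(plain_apple))     # a photo of an apple
print(best_caption(labelled_apple))  # a photo of an iPod
```

The real model is vastly bigger, but the failure mode is the same: a caption-matcher that has learned to read will happily let the reading overrule the seeing.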

[Image: an NN recognizing a deceptively-labelled piece of fruit as an iPod]

And it doesn’t stop there: “The finance neuron, for example, responds to images of piggy banks, but also responds to the string ‘$$$’. By forcing the finance neuron to fire, we can fool our model into classifying a dog as a piggy bank.”

The point I'd like to make is that ready-trained NNs like GPT-3 or CLIP are often tailored as the basis of specific recognizer applications and then may end up deployed in public situations, much as shitty internet-of-things gizmos usually run on an elderly, unpatched ARM linux kernel with an old version of OpenSSH and busybox installed, and hard-wired root login credentials. This is the future of security holes in our internet-connected appliances: metaphorically, cameras that you can fool by slapping a sticker labelled "THIS IS NOT THE DROID YOU ARE LOOKING FOR" on the front of the droid the camera is in fact looking for.

And in five years' time they're going to be everywhere.

I've been saying for years that most people relate to computers and information technology as if they're magic, and to get the machine to accomplish a task they have to perform the specific ritual they've memorized with no understanding. It's an act of invocation, in other words. UI designers have helpfully added to the magic by, for example, adding stuff like bluetooth proximity pairing, so that two magical amulets may become mystically entangled and thereafter work together via the magical law of contagion. It's all distressingly bronze age, but we haven't come anywhere close to scraping the bottom of the barrel yet.

With speech interfaces and internet of things gadgets, we're moving closer to building ourselves a demon-haunted world. Lights switch on and off and adjust their colour spectrum when we walk into a room, where we can adjust the temperature by shouting at the ghost in the thermostat, the smart television (which tracks our eyeballs) learns which channels keep us engaged and so converges on the right stimulus to keep us tuned in through the advertising intervals, the fridge re-orders milk whenever the current carton hits its best-before date, the robot vacuum comes out at night, and as for the self-cleaning litter box ... we don't talk about the self-cleaning litterbox.

Well, now we have something to be extra worried about, namely the fact that we can lie to the machines, and so can thieves and sorcerers. Everything has a True Name, and the ghosts know them as such but don’t understand the concept of lying (because they are a howling cognitive vacuum rather than actually conscious). Consequently it becomes possible to convince a ghost that the washing machine is not a washing machine but a hippopotamus. Or that the STOP sign at the end of the street is a 50km/h speed limit sign. The end result is people who live in a world full of haunted appliances like the mop and bucket out of the sorcerer’s apprentice fairy tale, with the added twist that malefactors can lie to the furniture and cause it to hallucinate violently, or simply break. (Or call the police and tell them that an armed home invasion is in progress because some griefer uploaded a patch to your home security camera that identifies you as a wanted criminal and labels your phone as a gun.)

Finally, you might think you can avoid this shit by not allowing any internet-of-things compatible appliances—or the ghosts of Cortana and Siri—into your household. And that's fine, and it's going to stay fine right up until the moment you find yourself in this elevator ...


Wednesday, March 3, 2021


We loved computers: That’s a simplification, almost a category error. What happened is we found computers, we got on the network, and before long we lived as much inside the possibility space of computing as we did anywhere else.

Maybe what we got wrong is this: From the beginning, computers appeared to us as a kind of liberation. Because we were young and our horizons were close, we mistook the ways they opened the world to us for their most important quality. What we couldn’t see then was that they were born as instruments of the oppressor, and would help us become the same.

Even when we grasped that the scaffolding of computation came from power, when we were running free around those systems we felt like we understood their real purpose in a way that the institutions that built and purchased them couldn’t. Nevermind that they couldn’t exist without an industrial economy, ranked tiers of exploited workers, and a relentlessly degraded environment.

Computation was a power that we could see how to take for ourselves. It unfolded in front of us in a way that the authorities in our lives could, for the most part, barely even perceive. Sometimes they’d glimpse it and lash out in fear or contempt. We mistook their fear for a sign we were on the right track.

And maybe some of us were, for a while. But we didn’t understand that what power serves is usually power itself.

