There is a tendency today to believe that technology holds the answer to all manner of social ills. We use technology to hire, pay bills, even choose romantic partners. But in this quest to do everything digitally, have we stopped demanding that technology actually work? In her new book "Artificial Unintelligence: How Computers Misunderstand the World" (MIT Press), journalist and software developer Meredith Broussard offers a guide to understanding the inner workings and outer limits of technology—and warns that we should never assume that computers always get things right. Here Broussard, who is an assistant professor at the Arthur L. Carter Journalism Institute at New York University, explains her argument.

How did this book get started?

Historically, people have made a lot of promises about the beautiful future of technology. I looked around one day and realised I’d been hearing exactly the same promises about a benevolent technological future for decades. Very few of the promises came true. In fact, many important things in the US are far worse since the beginning of the Internet age. We have massive income inequality, a resurgence of racism, widespread sexual harassment, and online distribution of illegal drugs. It’s time to start questioning whether our decisions around society and technology are leading us toward the future we really want.

Your book challenges the assumption that computers are always right. Where do you think this assumption comes from?

Technochauvinism is what I call the assumption that technology is always the solution. In reality, it’s more nuanced than that. Computers are very good at completing mathematical tasks quickly, over and over. Over the years, smart people have figured out how to transform a variety of tasks into mathematical calculations. Computer vision, for example, is all about mathematical representations and transformations of pixels on a grid. That’s amazing! However, not everything in the world can be transformed into math.
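The point about computer vision being arithmetic on a grid can be sketched in a few lines of Python. This toy example is an editorial illustration, not taken from the book: a tiny grayscale "image" is just a grid of brightness values (0 = black, 255 = white), and an operation like making a photographic negative is simple arithmetic applied to every number.

```python
# A 3x3 grayscale image represented as a grid of brightness values.
image = [
    [  0, 128, 255],
    [ 64, 192,  32],
    [255,   0, 100],
]

def invert(img):
    """Return the photographic negative: each pixel v becomes 255 - v."""
    return [[255 - v for v in row] for row in img]

negative = invert(image)
print(negative[0])  # [255, 127, 0]
```

Inverting twice returns the original image, which is one way to check that the transformation really is just reversible arithmetic on numbers.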

I think we’ve gotten to the point as a society where we have gotten so carried away with the idea of technological innovation that we have started blindly assuming we should use computational technology for absolutely everything. There’s an expression: when all you have is a hammer, everything looks like a nail. Computers are our hammers right now.

When do computers get it wrong?

For one thing, computers aren’t capable of using common sense. They are also constrained by hardware and software—in other words, they break. Computational systems are only as good as the people who make them. As humans, we are all slightly flawed—and so are our technological systems.

Should we be concerned about the way that technology is infiltrating so many areas of life?

I’m concerned that we need more nuance to how we talk about tech. The term “artificial intelligence” is vague. When you say “AI,” you might be talking about computational statistics (which is real), or about Arnold Schwarzenegger as the Terminator (which is imaginary). The linguistic confusion gets in the way of understanding each other and also gives rise to a lot of magical thinking around computers. In the book, I show readers exactly what it looks like when someone is “using AI”. Usually when someone says they are using AI, it means they are deploying a specific kind of AI called machine learning. I demonstrate using machine learning to predict who survived the Titanic disaster.
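To give a flavour of what such a demonstration involves, here is a minimal sketch of the same kind of exercise: fitting a logistic-regression model (a standard machine-learning technique) to toy Titanic-style data. The rows, column choices, and code are invented for illustration and are not the book's actual walkthrough, which uses the real passenger dataset.

```python
import math

# Toy stand-in for Titanic passenger data. Columns: sex (1 = female,
# 0 = male), passenger class (1-3), survived (the label to predict).
# These rows are invented for illustration.
rows = [
    (1, 1, 1), (1, 2, 1), (1, 3, 1), (1, 1, 1),
    (0, 1, 0), (0, 2, 0), (0, 3, 0), (0, 3, 0),
]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Fit logistic regression with plain stochastic gradient descent.
w_sex, w_class, bias = 0.0, 0.0, 0.0
lr = 0.5
for _ in range(2000):
    for sex, pclass, survived in rows:
        p = sigmoid(w_sex * sex + w_class * pclass + bias)
        err = p - survived          # gradient of the log loss
        w_sex -= lr * err * sex
        w_class -= lr * err * pclass
        bias -= lr * err

def predict(sex, pclass):
    """Predicted probability of survival for a passenger."""
    return sigmoid(w_sex * sex + w_class * pclass + bias)
```

In this toy data the model simply learns that sex is the strong predictor, which illustrates the broader point: "using AI" here means computational statistics finding patterns in numbers, nothing more mysterious than that.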

I’m also concerned that we don’t have enough diversity among the people who make technological decisions. In the US, Silicon Valley is overwhelmingly controlled by affluent white libertarian men of a certain age. This lack of diversity tends to emerge in tech systems in unexpected ways. For example, have you seen the viral video about the racist soap dispenser? Someone with light skin waves their hand under the optic sensor on the soap dispenser, and the soap comes out. Someone with dark skin waves their hand under the sensor, and the soap doesn’t come out. Then, the dark-skinned person gets a white paper towel and waves it under the sensor—and the soap comes out. The sensor doesn’t pick up the dark-skinned person because nobody on the tech team bothered to make sure that the system worked for people with dark skin. As a person of color who’s worked in tech, I have no problem believing that on the entire team of designers, developers, testers, sales and marketing folks, there probably wasn’t a single dark-skinned person—and nobody thought this was a problem.

It’s a problem.

This is very similar to the problem with Kodak film in the 1950s, where the film stock was optimised for a colour palette based on what were called “Shirley cards,” which were photos of a white woman (the first model was named Shirley). Basically, if you had dark skin, up until the 1970s your skin wouldn’t look good on film, because the people who made the ubiquitous technology had decided that your experience with the technology didn’t matter. Remember, this issue with film happened in the 1950s. Now, in 2018, we have new technology—the soap dispenser—with exactly the same problem of representation, and the same bias embedded in the technology. I don’t think that repeating mistakes is the way to move us all toward a better world.

You write that we have stopped demanding that our technology actually work. Could you expand?

I live in a high-rise apartment building in New York City with a computer-controlled elevator. A few times a month, I push the button for my floor and the elevator takes me to either one floor above, or one floor below, where I want to go. It happens to everyone else in the building too. We talk about it in the elevator. Little things like elevators or soap dispensers reveal bigger problems in the design and maintenance of the large systems that surround us. A whole team of highly trained engineers and maintenance people and inspectors have been caring for my building’s elevators for years. If they can’t get the elevator computer to go to the correct floor, it doesn’t inspire confidence that larger, more complicated systems—like self-driving cars—will work properly either.

We hear a lot about the coming revolution in driverless cars. What’s your take?

There have been a lot of promises around driverless cars, but like many visions of technological utopias, very few of them have come true. In the book, I tell the story of the first time I rode in a driverless car in 2007. I thought I was going to vomit, or die, or both. The technology has improved since then, but not enough for everyday use.

Consider GPS jammers, which are one of the many things that can confuse the GPS sensors in a driverless car. The car navigates itself using GPS; that’s how it figures out where it is, and where it should go next. However, you can buy a GPS jammer online for under $50. Jammers interfere with GPS reception within a fixed radius. Lots of people buy them in order to go through toll booths for free, or to fly drones in no-fly zones. Let’s say that you put your kid into a driverless taxi to go to soccer practice—and a truck with a GPS jammer pulls up next to it while both vehicles are travelling at high speed. The driverless car is going to crash, with your kid inside.

What’s your view of the impact of computer technology in the personal and romantic sphere?

I was very enthusiastic about the promise of online dating when it first became big in the 1990s. Today, I am more sceptical. My friends who do it don’t seem happy about it. Every woman I know has gotten rape threats, death threats, and unwanted obscene photos from dudes on online dating sites. I’m not in that scene—I’m happily married—so I won’t claim any expertise or first-hand experience. But I will say that from the outside, it doesn’t look like this system is better than before.

What are the limits of what we can do with technology?

It’s important to keep in mind what is real and what is imaginary. Intergalactic space travel: imaginary. Most Hollywood versions of AI: imaginary. Real: bits, bytes, binary, circuits, human nature. It’s easy to get so excited about using technology that we forget to think about the downstream effects. There was a recent story about fitness trackers, where someone looked at a map of public data about people using fitness trackers and discovered that it showed exactly how to identify the location of military bases. Of course this was going to happen! It’s public data, and people are wearing tracking devices. We need to do a better job of understanding how technology actually works, and then we need to make good ethical decisions based on that understanding—coupled with our knowledge of the good and bad sides of human nature.

How should we be applying tech to make the world better?

There are lots of great ways to apply technology to make the world better. I would like to start by using technology to improve education. Many people think that using technology for education means getting every student a laptop or an iPad or putting lectures online. I mean something different. I want to use technology to make sure schools have all of the basic materials they need to help kids. In the US, schools don’t have enough books or paper or pencils. The buildings are out of date; they lack heat, or air conditioning, or they have water pouring in through a leaky roof. I’d like to use technology to make sure every school has its basic needs met. Then, we could move on to the fancier stuff like computers for everyone.