Stanford Aims to Make Artificial Intelligence More Human

Master’s student Wesley Guo and postdoctoral scholar Margot Vulliez work in the Stanford robotics lab of Professor Oussama Khatib. (Drew Kelly/Stanford Institute for Human-Centered Artificial Intelligence)

Gov. Gavin Newsom is urging Stanford researchers at the new Institute for Human-Centered Artificial Intelligence to stay true to their name and focus on the impact AI is having on people’s jobs.

Newsom and Microsoft co-founder and philanthropist Bill Gates keynoted a symposium yesterday where university officials and scientists announced the formal launch of the institute. Its goal is to address both the peril and promise of AI, with human ethics and values as its lodestar.

Newsom said he was recently at the Port of Long Beach, talking with longshoremen worried that upgrades coming to the port will cost them jobs. He said longshoremen asked him not to implement the upgrades.

“We’re moving forward with low-carbon green growth goals which are the envy of the rest of the nation,” Newsom said. “Our cap-and-trade program, our goals to reduce greenhouse gas emissions, and that means we’re moving forward with new technologies that are more efficient. The problem with the new technologies that are more efficient: you don’t need any people.”

In recent years, AI has managed to tangle itself in a pile of ethical problems. Facial recognition software doesn’t see faces that aren’t white. Speech recognition wants you to speak the King’s English. Or at least a solid American version of it–no accents. Longshoremen aren’t the only workers fearing job loss; truckers and restaurant workers also feel the hot breath of AI at their backs. And we can’t leave out Russian bots serving up lies to mess with our democracy.

“As technologists, it’s our responsibility to address the failings of our tools,” said Stanford HAI co-director Fei-Fei Li. “But it’s also our responsibility to realize the full extent of their potential.”

For example, she said, what if AI could keep an eye on patients in an emergency room, and alert staff when someone’s condition worsens? Or what if AI could help figure out how children learn, and improve education?

The new institute’s research will focus on enhancing and augmenting human lives across medicine, education and other fields, without replacing humans. KQED’s Brian Watt spoke about the new institute with two of its associate directors, computer science professor James Landay and political science professor Rob Reich. Here are some key points from the interview, which has been edited for length and clarity.

What exactly is AI?

Landay: It’s a fuzzy term, and the definition has shifted over the years. I’d say the simplest definition is: the capability of machines to imitate intelligent human behavior.

But that behavior could be as simple as Google Maps telling you which way to get to work today because the traffic is different, all the way up to making a diagnosis in a very complex cancer case.
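To make that first example concrete, here is a minimal sketch of the computation behind a traffic-aware route suggestion: a shortest-path search over travel times that already reflect today’s traffic. The map, place names and minute values are invented for illustration; this is not Google’s actual system.

```python
import heapq

def fastest_route(graph, start, goal):
    """Dijkstra's algorithm over current travel times (minutes).

    `graph` maps each node to a list of (neighbor, minutes) pairs,
    where the minutes already reflect today's traffic.
    """
    queue = [(0, start, [start])]  # (elapsed minutes, node, path so far)
    seen = set()
    while queue:
        elapsed, node, path = heapq.heappop(queue)
        if node == goal:
            return elapsed, path
        if node in seen:
            continue
        seen.add(node)
        for neighbor, minutes in graph.get(node, []):
            if neighbor not in seen:
                heapq.heappush(queue, (elapsed + minutes, neighbor, path + [neighbor]))
    return None

# Hypothetical morning commute: the freeway is jammed today, so
# surface streets win even though the freeway is normally faster.
commute = {
    "home":    [("freeway", 25), ("surface", 12)],
    "freeway": [("work", 20)],
    "surface": [("work", 15)],
}
print(fastest_route(commute, "home", "work"))  # (27, ['home', 'surface', 'work'])
```

Feed the same search tomorrow’s travel times and it may pick a different road, which is all “AI” means in this everyday case.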

What is the one thing people get wrong about AI?

Landay: Thinking that it’s going to be this hyper-intelligent being that will be so much smarter than people, and therefore eventually take over the world like in some kind of “Terminator” movie. That’s really the biggest misconception we see.

Another thing we hear a lot is that AI will make millions of jobs obsolete. Should we be worried?

Landay: I think job disruption is always a thing to be worried about. Globalization led to some major structural problems for some people and created wealth for others. It’s this unevenness that occurs with these disruptions that we need to pay attention to, and get ahead of, to make sure the people who might be disrupted are learning new skills, so they have a future.

Now, some economists think AI might not even disrupt us, because the real problem over the long run is a lack of population growth: there won’t be enough younger people to support all the older people, and we may even need machines to help us move forward as a society in health care and other areas.

So it’s not even clear, in an economic sense, that AI will replace everyone’s jobs.

The institute’s work revolves around what you call ‘human-centered AI.’ What does that mean? 

Rob Reich: First, ensuring as best we can that the advancement of artificial intelligence ends up serving the interests of human beings, not displacing or undermining them. The essential thing is to ensure that as machines become more intelligent and capable of carrying out more complicated tasks that would otherwise have to be done by human beings, the role we give to machine intelligence supports the goals of human beings and the values of the communities we live in, rather than step by step displacing what humans do.

Second, the bet that the institute is making here at Stanford is that the advancement of artificial intelligence will happen in a better way if, instead of just putting technologists and AI scientists in the lab and having them work really hard, we do it in partnership with humanists and social scientists.

So the familiar role of the social scientist or philosopher is that the technologists do their thing and then we study it out in the wild: the economist measures the effects of technology and the disruption it causes, and the philosopher worries about the values it disrupts.

At HAI we want to put philosophers and anthropologists and economists and political scientists in the lab with the technologists, so that ethical values and social scientific frameworks are baked in, to the extent possible, to the very development of artificial intelligence.

What are your top two ethical concerns about AI?

Reich: First, when you’re developing an algorithm and making use of enormous oceans of data, that data typically encodes human decisions of the past. And humans, as we all know, have often been biased and engaged in all kinds of unethical behavior. So algorithms can encode those human biases of the past into their predictive judgments. Bias and discrimination can creep into the various forms of AI decision-making.
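To see how past decisions leak into a model, here is a toy sketch with invented lending data: a predictor trained to reproduce historical approval decisions. The protected attribute is dropped before training, but a correlated proxy (here, zip code) remains, so the model still learns the old pattern.

```python
from collections import defaultdict

# Hypothetical past lending decisions as (income, zip, approved).
# The reviewers were biased against one group; we never train on the
# group label, but zip code is a close proxy for it. All data invented.
history = [
    ("high", "94301", True),  ("high", "94301", True),
    ("high", "90011", False), ("high", "90011", False),
    ("low",  "94301", True),  ("low",  "90011", False),
]

# "Training": estimate the approval rate per (income, zip) bucket,
# which by construction matches the historical decisions exactly.
counts = defaultdict(lambda: [0, 0])  # bucket -> [approvals, total]
for income, zip_code, approved in history:
    counts[(income, zip_code)][0] += approved
    counts[(income, zip_code)][1] += 1

def predict(income, zip_code):
    approvals, total = counts[(income, zip_code)]
    return approvals / total >= 0.5 if total else None

# Two applicants with identical income get different answers purely
# because zip code stands in for the group the past was biased against.
print(predict("high", "94301"))  # True
print(predict("high", "90011"))  # False
```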

The second big consideration is that AI, like lots of technologies, can be used for good, and it can also be used by human beings for bad ends. We want to call attention to the different ways that AI can be deployed and try to build in social frameworks and technical approaches that make it much more likely that AI is deployed for good rather than for ill.

I think about a car that’s programmed to protect the passenger inside. But if the car had to choose between crashing and hitting a child, a human driver’s instinct would be to save the child. How does AI sort this out?

Reich: That’s exactly the kind of question that putting philosophers, social scientists and technologists in the lab together is meant to allow us to discuss.

There’s research that shows if you ask a human being whether they think the car should optimize for all of human safety, rather than just passenger safety, people say of course it should optimize for all human safety. But if you’re asking, ‘What kind of car would you like to purchase, one that optimizes for all human safety or optimizes for passenger safety?’ they go for passenger safety.

This is what, to me, indicates that engineers have to make these value decisions while they’re developing the technology. And far better that it happens in the open with the full discussion amongst all the various stakeholders–including ordinary citizens–so that we can build toward a bigger social consensus.
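One way to see the value decision Reich describes is as a single constant in a planner’s objective. The scenario, risk numbers and weighting below are invented for illustration, not any real vehicle’s logic.

```python
# Toy illustration: one constant decides whose risk the planner
# minimizes. Choosing its value is an ethical judgment, not a
# purely technical one.
PASSENGER_WEIGHT = 0.5  # 0.5 weighs everyone equally; 1.0 ignores pedestrians

def expected_harm(option, passenger_weight=PASSENGER_WEIGHT):
    """Score a maneuver by weighted expected harm (lower is better)."""
    return (passenger_weight * option["passenger_risk"]
            + (1 - passenger_weight) * option["pedestrian_risk"])

options = [
    {"name": "swerve into barrier", "passenger_risk": 0.6, "pedestrian_risk": 0.0},
    {"name": "brake straight",      "passenger_risk": 0.1, "pedestrian_risk": 0.8},
]

# With equal weighting the car swerves to protect the child; set
# PASSENGER_WEIGHT near 1.0 and the same code protects the passenger.
best = min(options, key=expected_harm)
print(best["name"])  # 'swerve into barrier' at weight 0.5
```

Turning that dial from 0.5 toward 1.0 flips the car’s choice, which is why Reich argues such decisions should be made in the open, with all stakeholders at the table, rather than quietly inside an engineering team.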
