When AI asks stupid questions, it gets smart fast

If someone shows you a picture of a crocodile and asks if it’s a bird, you might laugh — and then, if you’re patient and kind, help them identify the animal. Such real-world, sometimes dumb interactions may be key to helping AI learn, according to a new study in which the strategy dramatically improved an AI’s accuracy at interpreting new images. The approach could help AI researchers more quickly design software that does everything from diagnosing diseases to guiding robots and other devices around homes.

“It’s pretty cool work,” says Natasha Jaques, a Google computer scientist who studies machine learning but was not involved in the research.

Many AI systems get smarter by relying on a brute-force method called machine learning: they find patterns in data to figure out, for example, what a chair looks like after analyzing thousands of images of furniture. But even huge data sets have gaps. Sure, this object in the picture is called a chair – but what is it made of? And can you sit on it?

To help AI systems expand their understanding of the world, researchers are now trying to develop a way for computer programs to spot gaps in their knowledge and figure out how to ask strangers to fill them in — just as a child asks a parent why the sky is blue. The ultimate goal of the new study was an AI that could correctly answer a variety of questions about images it hadn’t seen before.

Previous work on “active learning,” in which an AI assesses its own ignorance and requests more information, has typically required researchers to pay online workers to provide that information. That approach does not scale.

So in the new study, researchers at Stanford University led by Ranjay Krishna, now at the University of Washington, Seattle, trained a machine learning system not only to detect gaps in its knowledge but also to compose (often dumb) questions about images that strangers would patiently answer. (Q: “What does the sink look like?” A: “It is square.”)

It’s important to think about how AI presents itself, says Kurt Gray, a social psychologist at the University of North Carolina, Chapel Hill, who has studied human-AI interactions but was not involved in the work. “In that case, you want it to be kind of like a kid, right?” he says. Otherwise, people may think it’s strange for asking seemingly silly questions.

The team “rewarded” its AI for writing clear questions: when people actually responded to a query, the system received feedback telling it to adjust its internals so that it would behave similarly in the future. Over time, the AI implicitly picked up lessons in language and social norms, honing its ability to ask sensible, easily answerable questions.
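The feedback loop described above can be sketched in miniature. This is a hypothetical illustration, not the paper’s method: here the question-generating “policy” is just a set of weighted phrasings, and a human response (reward of 1) nudges the weight of the phrasing that produced the question upward, so answerable phrasings are sampled more often. All names and numbers are invented for illustration.

```python
import random

class QuestionPolicy:
    """Toy policy over question phrasings, updated by response feedback."""

    def __init__(self, templates, lr=0.1):
        # start with equal preference for every phrasing
        self.weights = {t: 1.0 for t in templates}
        self.lr = lr

    def sample(self, rng=random):
        # sample a phrasing in proportion to its current weight
        total = sum(self.weights.values())
        r = rng.uniform(0, total)
        for t, w in self.weights.items():
            r -= w
            if r <= 0:
                return t
        return t

    def update(self, template, got_response):
        # reward = 1 if a human answered, 0 if the post was ignored
        reward = 1.0 if got_response else 0.0
        # move the phrasing's weight up for answered posts, down for ignored
        self.weights[template] += self.lr * (reward - 0.5)
        self.weights[template] = max(self.weights[template], 0.01)

policy = QuestionPolicy([
    "What is this?",
    "Is this photo taken at night?",
])

# simulate feedback: the concrete yes/no phrasing gets answered more often
for _ in range(100):
    policy.update("Is this photo taken at night?", got_response=True)
    policy.update("What is this?", got_response=False)

print(policy.weights["Is this photo taken at night?"]
      > policy.weights["What is this?"])  # → True
```

The key design point this sketch captures is that the reward signal is the human response itself, so the system is optimized to be answerable rather than merely informative.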

Coconut cake
Q: “What kind of dessert is in the picture?” A: “Hi dear it’s coconut cake, it tastes amazing 🙂” (R. Krishna et al., PNAS, doi: 10.1073/pnas.2115730119, 2022)

The new AI has several components, some of them neural networks, complex mathematical functions inspired by the structure of the brain. “There are so many moving pieces … that all need to work together,” Krishna says. One component selects a photo on Instagram – say, a sunset – and a second asks a question about that photo – for example, “Was this photo taken at night?” Other components extract facts from readers’ responses and use them to learn about images.
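The division of labor among those components can be sketched as a simple pipeline. This is an illustrative stub, assuming made-up function names and data: one piece picks an image, another forms a question about it, and a third distills the human’s free-form reply into a fact the system can learn from. In the real system each stage is a neural network; here each is a plain function to show how they connect.

```python
def select_photo(feed):
    # component 1: choose an image to ask about (here: simply the first)
    return feed[0]

def generate_question(photo):
    # component 2: form a question about the chosen photo
    return f"Was this {photo['scene']} photo taken at night?"

def extract_fact(photo, answer):
    # component 3: distill the free-form reply into an (image, fact) pair
    return {"photo": photo["id"], "fact": answer.strip().lower()}

# hypothetical feed entry, labeled with a scene tag for illustration
feed = [{"id": "insta_001", "scene": "sunset"}]

photo = select_photo(feed)
question = generate_question(photo)
fact = extract_fact(photo, "No, at dusk")

print(question)      # Was this sunset photo taken at night?
print(fact["fact"])  # no, at dusk
```

The point of the staged design is that each component can improve independently: better photo selection surfaces more informative gaps, while better answer parsing turns more replies into usable training signal.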

Over 8 months and more than 200,000 questions on Instagram, the system’s accuracy at answering questions similar to those it had asked increased by 118%, the team reports today in the Proceedings of the National Academy of Sciences. A comparison system that posted questions on Instagram but was not explicitly trained to maximize response rates improved its accuracy by only 72%, in part because people ignored it more often.

The main innovation, Jaques says, was rewarding the system for getting humans to respond, “which isn’t that crazy from a technical perspective, but it’s very important from a research direction perspective.” She also liked the large-scale, real-world deployment on Instagram. (Humans screened all AI-generated questions for offensive material before posting.)

Researchers hope that systems like theirs will eventually lead to AI with common sense (knowing, say, that chairs are often made of wood), interactive robots (an AI vacuum that asks for directions to the kitchen), and chatbots (that talk with people about customer service or the weather).

Jaques says social skills could also help AI adapt to new situations on the fly. A self-driving car, for example, might ask for help navigating a construction zone. “If you can effectively learn from humans, that’s a very general skill.”
