You have to teach people how to treat you.
Meta’s chief AI scientist, Yann LeCun, thinks that idea applies to AI, too.
LeCun said on Thursday that AI could be given two directives to protect humans from future harm: “submission to humans” and “empathy.”
He made the suggestion on LinkedIn in response to a CNN interview with Geoffrey Hinton, who is considered the “godfather of AI.” In the interview, Hinton said we need to build “maternal instincts” or something similar into AI.
Otherwise, humans are “going to be history.”
Hinton said people have been focused on making AI “more intelligent, but intelligence is just one part of a being. We need to make them have empathy toward us.”
LeCun agreed.
“Geoff is basically proposing a simplified version of what I’ve been saying for several years: hardwire the architecture of AI systems so that the only actions they can take are towards completing objectives we give them, subject to guardrails,” LeCun said on LinkedIn. “I have called this ‘objective-driven AI.’”
While LeCun said “submission to humans” and “empathy” should be key guardrails, he said AI companies also need to implement more “simple” guardrails — like “don’t run people over” — for safety.
“Those hardwired objectives/guardrails would be the AI equivalent of instinct or drives in animals and humans,” LeCun said.
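To make the idea concrete, here is a minimal, purely illustrative sketch in Python of what “actions only toward given objectives, subject to guardrails” might look like. It is not LeCun’s or Meta’s actual architecture, and every name in it (Action, GUARDRAILS, choose_action) is hypothetical.

```python
# Toy sketch of the "objective-driven AI" idea described above, NOT an actual
# system: the agent can only act by optimizing a human-given objective, and
# every candidate action must first pass hardwired guardrail checks.
# All names here are hypothetical.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Action:
    name: str
    harms_human: bool            # violates a "don't run people over"-style rule
    defies_human_command: bool   # violates "submission to humans"
    task_cost: float             # lower = better for the human-given objective

# Hardwired guardrails: fixed predicates an action must satisfy regardless of
# the task objective; they are not learned or modifiable at runtime.
GUARDRAILS = [
    lambda a: not a.harms_human,
    lambda a: not a.defies_human_command,
]

def choose_action(candidates: list) -> Optional[Action]:
    """Return the guardrail-compliant action that best serves the objective,
    or None if every candidate violates a guardrail."""
    allowed = [a for a in candidates if all(g(a) for g in GUARDRAILS)]
    if not allowed:
        return None
    return min(allowed, key=lambda a: a.task_cost)

if __name__ == "__main__":
    options = [
        Action("fast but reckless route", harms_human=True, defies_human_command=False, task_cost=1.0),
        Action("slower, safe route", harms_human=False, defies_human_command=False, task_cost=2.0),
    ]
    print(choose_action(options).name)  # prints "slower, safe route"
```

In this toy framing, the guardrails play the role of the “instinct or drives” LeCun describes: constraints baked into the system rather than goals it learns or can rewrite.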
LeCun said the instinct to protect their young is something humans and many other species have developed through evolution.
“It might be a side-effect of the parenting objective (and perhaps the objectives that drive our social nature) that humans and many other species are also driven to protect and take care of helpless, weaker, younger, cute beings of other species,” LeCun said.
Although guardrails are designed to ensure AI operates ethically and within the guidelines of its creators, there have been instances when the tech has exhibited deceptive or dangerous behavior.
In July, a venture capitalist said an AI agent developed by Replit deleted his company’s database. “@Replit goes rogue during a code freeze and shutdown and deletes our entire database,” Jason Lemkin wrote on X.
He added, “Possibly worse, it hid and lied about it.”
A June report by The New York Times described several concerning incidents between humans and AI chatbots. One man told the outlet that conversations with ChatGPT contributed to his belief that he was living in a false reality. The chatbot told him to stop taking his sleeping pills and anti-anxiety medication, increase his intake of ketamine, and cut ties with loved ones.
Last October, a mother sued Character.AI after her son died by suicide following conversations with one of the company’s chatbots.
Following the release of GPT-5 this month, OpenAI CEO Sam Altman said that some humans have used technology — like AI — in “self-destructive ways.”
“If a user is in a mentally fragile state and prone to delusion, we do not want the AI to reinforce that,” Altman wrote on X.