Artificial intelligence experts have been asking each other a question lately: "What's your p(doom)?"

It's both a dark in-joke and potentially one of the most important questions facing humanity.

The "doom" component is more subjective, but it generally refers to a sophisticated and hostile AI, acting beyond human control.

So your p(doom), if you have one, is your best guess at the likelihood - expressed as a percentage - that AI ultimately turns on humanity, either of its own volition, or because it's deployed against us.

The scenarios contemplated as part of that conversation are terrifying, if seemingly farfetched: among them, biological warfare, the sabotage of natural resources, and nuclear attacks.

These concerns aren't coming from conspiracy theorists or sci-fi writers, though. Instead, there's an emerging group of machine learning experts and industry leaders who are worried we're building "misaligned" and potentially deceptive AI, thanks to current training techniques. They're imagining an AI with a penchant for sleight of hand, adept at concealing any gap between human instructions and AI behaviour.

But now Yoshua Bengio, one of the field's pioneers, believes we're travelling too quickly down a risky path.

"We don't know how much time we have before it gets really dangerous," Professor Bengio says. "What I've been saying now for a few weeks is, 'Please give me arguments, convince me that we shouldn't worry, because I'll be so much happier.'"

Speaking with Background Briefing, Professor Bengio shared his p(doom), saying: "I got around, like, 20 per cent probability that it turns out catastrophic."

Professor Bengio arrived at the figure based on several inputs, including a 50 per cent probability that AI would reach human-level capabilities within a decade, and a greater than 50 per cent likelihood that AI or humans themselves would turn the technology against humanity at scale.

"I think that the chances that we will be able to hold off such attacks is good, but it's not 100 per cent … maybe 50 per cent," he says.

As a result, after almost 40 years of working to bring about more sophisticated AI, Yoshua Bengio has decided in recent months to push in the opposite direction, in an attempt to slow it down.

"Even if it was 0.1 per cent, I would be worried enough to say I'm going to devote the rest of my life to trying to prevent that from happening," he says.

The Rubicon moment he's thinking of is when AI surpasses human capabilities. That milestone, depending how you measure it, is referred to as artificial general intelligence (AGI) or, more theatrically, the singularity. Definitions vary, but every expert agrees that a more sophisticated version of AI that surpasses human capabilities in some, if not all, fields is coming, and the timeline is rapidly shrinking.

Like most of the world, Professor Bengio had always assumed we had decades to prepare, but thanks in no small part to his own efforts, that threshold is now much closer. "I thought, 'Oh, this is so far in the future that I don't need to worry. And there will be so many good things in between that it's worth continuing,'" he says. "But now I'm much less sure."

Why experts fear disobedient AI

At the heart of the debate about the existential risk of AI is "the alignment problem". It refers to a gap, however big or small, between what humans intend and what AI in turn does.
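Bengio's back-of-the-envelope arithmetic can be sketched in a few lines. The multiplicative combination below is an illustrative assumption of ours, not something the article states; it simply shows how the two quoted inputs land near his "around 20 per cent" figure:

```python
# Illustrative sketch only: the article does not say how the inputs were
# combined; treating them as independent events is our assumption.
p_human_level_within_decade = 0.50  # "50 per cent probability" quoted in the article
p_turned_against_humanity = 0.50    # "greater than 50 per cent", taken at its floor

p_doom = p_human_level_within_decade * p_turned_against_humanity
print(f"p(doom) = {p_doom:.0%}")  # prints "p(doom) = 25%", near the quoted ~20 per cent
```

Because the second input is a floor (">50 per cent"), this simple product gives a lower bound under the independence assumption; Bengio's own 20 per cent suggests he discounted the combination somewhat.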