Fake content is getting harder to suss out. This Canadian Nobel Prize winner has an idea to help

Artificial intelligence pioneer Geoffrey Hinton says it’s getting more difficult to tell videos, voices and images generated with the technology from material that’s real — but he has an idea to aid in the battle.

That growing difficulty has contributed to a shift in how the British-Canadian computer scientist and recent Nobel Prize recipient thinks the world could address fake content.

“For a while, I thought we may be able to label things as generated by AI,” Hinton said Monday at the inaugural Hinton Lectures.

“I think it’s more plausible now to be able to recognize that things are real by taking a code in them and going to some websites and seeing the same things on that website.”

Hinton spoke on the first night of the Global Risk Institute's two-night Hinton Lectures event, taking place this week at the John W. H. Bassett Theatre in Toronto.

Hinton is seen at the Hinton Lectures in Toronto on Monday. (Evan Mitsui/CBC)

Hinton, who is often called the godfather of AI, took the stage briefly to remind the audience of the litany of risks he has been warning the public the technology poses. He feels AI could cause or contribute to accidental disasters, joblessness, cybercrime, discrimination and biological and existential threats.

He said the verification approach would confirm content isn't fake and imagined it could be particularly handy when it comes to political video advertisements.

“You could have something like a QR code in them [taking you] to a website, and if there’s an identical video on that website, all you have to do is know that that website is real,” Hinton explained.
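Hinton did not spell out an implementation, but the idea resembles a provenance check: the embedded code tells the viewer where the canonical copy lives, and verification reduces to confirming the website is genuine and that the copy it hosts matches what was shown. Below is a minimal sketch of that comparison in Python, assuming a hypothetical publisher that serves a JSON manifest of SHA-256 hashes for its official videos; the URL, manifest format and function names are illustrative, not anything Hinton or CBC described.

```python
# Hypothetical sketch of the verification idea Hinton describes: a political ad
# carries a link (for example via a QR code) to the campaign's website, and the
# viewer checks that the copy published there is identical to the one they saw.
# The manifest URL and its JSON format are assumptions for illustration only.
import hashlib
import json
import urllib.request


def sha256_of_file(path: str) -> str:
    """Hash the local copy of the video the viewer actually received."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()


def matches_published_copy(video_path: str, manifest_url: str, video_id: str) -> bool:
    """Compare the local video against the hash the trusted site publishes for it.

    manifest_url and video_id would come from the code embedded in the video;
    the manifest is assumed to map video IDs to SHA-256 hex digests.
    """
    with urllib.request.urlopen(manifest_url) as response:
        manifest = json.load(response)
    published = manifest.get(video_id)
    return published is not None and published == sha256_of_file(video_path)


# Usage (hypothetical URL): a match means the file is byte-for-byte the one the
# campaign's site vouches for; a mismatch suggests it has been altered.
# matches_published_copy("ad.mp4", "https://example-campaign.org/videos.json", "ad-001")
```

Hashing the whole file is the simplest way to test "identical"; the harder problem, as Hinton noted, is still trusting that the website itself is real, which in practice would lean on existing mechanisms such as HTTPS and a verified domain.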

Most Canadians have spotted deepfakes online and almost a quarter encounter them weekly, according to an April survey of 2,501 Canadians conducted by the Dais, a public policy organization at Toronto Metropolitan University.

Deepfakes are digitally manipulated images or videos depicting scenes that have not happened. Recent deepfakes have depicted Pope Francis in a Balenciaga puffer jacket and pop star Taylor Swift in sexually explicit poses.

At a news conference after the event, Hinton shared more about what he has done with his half of the $1.45 million he and Princeton University researcher John Hopfield received when they won the Nobel Prize in physics earlier this month.

Hinton said he has donated half his share of the award to Water First, a Creemore, Ont., organization training Indigenous communities in how to develop and provide access to safe water systems.

He initially mulled giving some of the money to a water organization in Africa that actor Matt Damon is involved with, but he said his partner asked him: “What about Canada?”

That led Hinton to discover Water First. He said he was compelled to donate to it because of the land acknowledgments he hears at the start of many events.

“I think it’s great that they’re recognizing [who lived on the land first], but it doesn’t stop Indigenous kids getting diarrhea,” he said.

Hinton previously said some of his winnings would also be directed to an organization that provides jobs to neurodiverse young adults.

‘Worried pessimist’

The bulk of Monday evening's event was dedicated to a talk from Jacob Steinhardt, an assistant professor of electrical engineering and computer sciences and of statistics at UC Berkeley in California.

Steinhardt told the audience he believes AI will advance even faster than many expect, but there will be surprises along the way.

By 2030, he imagines AI will be “superhuman” when it comes to math, programming and hacking.

He also thinks large language models, which underpin AI systems, could become capable of persuasion or manipulation.

“There is significant headroom, if someone were to try to train [them] for persuasiveness, perhaps either an unscrupulous company or a government that cared about persuading its citizens,” Steinhardt said. “There’s a lot of things you could do.”

He told the audience he sees himself as a “worried optimist” who believes there’s a 10 per cent chance the technology will lead to human extinction and a 50 per cent chance it will create immense economic value and “radical prosperity.”

Asked at a later news conference about Steinhardt’s “worried optimist” label, Hinton called himself a “worried pessimist.”

“There’s research showing that if you ask people to estimate risks, normal, healthy people way underestimate the risks of really bad things … and the people who get the risks about right are the mildly depressed,” Hinton said.

“I think of myself as one of those, and I think the risks are a bit higher than Jacob [Steinhardt] thinks — let’s say around 20 per cent.”
