Recently, researchers have been experimenting with AI-based algorithms to spot deception. Read on to learn more…
Being able to tell when a person is lying is an important part of everyday life, and it is even more intriguing that this can now be attempted with technology. The concept sounds like it comes straight out of a movie: an algorithm that spots deception using Artificial Intelligence (AI). Researchers have designed a method that can detect a person’s intent to mislead.
Real-Time Application
The development, which could be used to separate opinion from “fake news” among other uses, was recently published in the Journal of Experimental & Theoretical Artificial Intelligence. Although previous studies have examined deception, this is possibly the first study to look at a speaker’s intent.
The researchers posit that while a true story can be manipulated into various deceptive forms, it is the intent, rather than the content of the communication, that determines whether the communication is deceptive. For instance, a speaker could be misinformed or make a wrong assumption, committing an unintentional error without attempting to deceive.
“Deceptive intent to mislead listeners on purpose poses a much larger threat than unintentional mistakes,” said Eugene Santos Jr, co-author and professor of engineering at Thayer School of Engineering at Dartmouth. “To the best of our knowledge, our algorithm is the only method that detects deception and at the same time discriminates malicious acts from benign acts,” added Santos.
The researchers developed a unique approach, and the resulting algorithm can tell deception apart from benign communications by retrieving the universal features of deceptive reasoning.
However, the framework is currently limited by the amount of data needed to measure a speaker’s deviation from their past arguments; the study used data from a 2009 survey of 100 participants on their opinions on controversial topics, as well as a 2011 dataset of 800 real and 400 fictitious reviews of the same 20 hotels.
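The article does not reproduce the paper’s actual algorithm, but the idea of measuring a speaker’s deviation from their past arguments can be loosely illustrated. The sketch below is a hypothetical simplification, not the authors’ model: it assumes a bag-of-words representation and cosine similarity, and scores a new statement by how far it departs from the speaker’s most similar past argument.

```python
from collections import Counter
import math


def cosine_similarity(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words frequency vectors."""
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    if norm_a == 0 or norm_b == 0:
        return 0.0
    return dot / (norm_a * norm_b)


def deviation_score(new_statement: str, past_arguments: list[str]) -> float:
    """Deviation = 1 minus the similarity to the closest past argument.

    A high score flags a statement that is inconsistent with the
    speaker's own history of claims (a toy proxy for the paper's idea,
    not its actual method).
    """
    new_vec = Counter(new_statement.lower().split())
    best = max(
        (cosine_similarity(new_vec, Counter(p.lower().split()))
         for p in past_arguments),
        default=0.0,
    )
    return 1.0 - best


# Hypothetical example data (not from the study's datasets).
past = [
    "the hotel room was clean and the staff were friendly",
    "breakfast was tasty and the location was convenient",
]
consistent = "the staff were friendly and the room was clean"
inconsistent = "this casino will only lose half of its money tonight"

print(deviation_score(consistent, past))    # low: echoes past arguments
print(deviation_score(inconsistent, past))  # high: departs from them
```

As the article notes, any approach of this kind is only as good as the amount of past-argument data available per speaker, which is why the study relied on the survey and hotel-review datasets described above.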
Santos believes the framework could be further developed to help readers distinguish and closely examine the intent of “fake news,” allowing the reader to determine if a reasonable, logical argument is used or if opinion plays a strong role. In further studies, Santos hopes to examine the ripple effect of misinformation, including its impacts.
In the study, the researchers used the popular 2001 film ‘Ocean’s Eleven’ to illustrate how the framework can examine a deceiver’s arguments, which in reality may go against their true beliefs, resulting in a falsified final expectation.
Because ‘Ocean’s Eleven’ is a scripted film, viewers can be sure of the thieves’ intent — to steal all of the money — and how it conflicts with what they tell the owner — that they will only take half.
“People expect things to work in a certain way, just like the thieves knew that the owner would call the police when he found out he was being robbed,” said Santos. “So, the thieves used that knowledge to convince the owner to come to a certain conclusion and follow the standard path of expectations. They forced their deception intent so the owner would reach the conclusions the thieves desired,” the author added.