The History of AI Tricking Humans

Bitdefender Enterprise

October 20, 2022


In the Western world, the earliest known case of fraud was recorded in 300 BC. Since then, scam artists, fraudsters, and other bad actors have been taking advantage of humans. As our technology evolves, so do the malicious schemes. Now, artificial intelligence (AI) has joined that long list of tricksters and con artists.

The use of AI and machine learning (ML) technologies raises phishing and other cyber scams to a completely new level, making these threats even more difficult for organizations to detect and identify. Unlike human cybercriminals, algorithms can learn to exploit their victims at far greater scale and with greater efficiency.

The best phishing artist

By using deep learning language models such as GPT-3, hackers can easily launch mass spear-phishing campaigns: attacks that involve sending personalized messages to specific targets.

At the 2021 Black Hat and Defcon security conferences in Las Vegas, a team from Singapore's Government Technology Agency presented an experiment in which their colleagues received two sets of phishing emails: humans wrote the first, while an algorithm wrote the second. The AI won the contest, garnering far more clicks on the links in its emails than the human-written messages did.

With the number of AI platforms multiplying, hackers can take advantage of low-cost AI-as-a-service offerings to target a large pool of victims, as well as personalize their emails down to the smallest details so they appear as authentic as possible.

A long history of artificial intelligence deceit

While AI-as-a-service attacks are a relatively new, and still rare, phenomenon, machines have been deceiving humans for decades.

Back in the 1960s, Joseph Weizenbaum wrote ELIZA, the first program capable of communicating in natural language, which pretended to be a psychotherapist.

Weizenbaum was surprised that his MIT lab staff treated what is now known as a “chatbot” as a real doctor. They shared their private information with ELIZA and were shocked to learn that Weizenbaum could read the conversation logs between them and the “doctor.” Weizenbaum himself was shocked that such a simple program could so easily dupe users into revealing their personal data.

In the decades since, humans haven’t learned to distrust computers; if anything, we have grown even more susceptible.

Just recently, an engineer at Google announced, to the surprise of management and his colleagues, that the company’s AI chatbot LaMDA (Language Model for Dialogue Applications) had come to life. Blake Lemoine said he believed that LaMDA was sentient, like a human. LaMDA imitated human conversation so well that even a seasoned engineer was fooled by it.

Today's conversational models base their responses on information learned from social media, prior interactions, and human psychology; this doesn’t mean they understand the meaning of words or have feelings. But this level of behavioral and social mimicry allows hackers to use tools similar to LaMDA to data-mine social media and send or post spear-phishing messages.

The emergence of deepfake social engineering

AI’s deceptions are not limited to conversational mimicry. Voice style transfer, also known as voice conversion (the digital cloning of a target’s voice), and deepfakes (fake images or videos of a person or an event) are both very real cyberthreats.

In 2019, in what many experts believe to be the first documented case of such an attack, fraudsters used voice conversion to impersonate a CEO and request an urgent transfer of funds to their own accounts. A year later, in 2020, another group of tricksters used the technology to mimic a client’s voice and convince a bank manager to transfer $35 million to cover “an acquisition.”

Such AI-powered, targeted social engineering attacks have long been anticipated, yet most organizations aren’t prepared. In an Attestiv survey, 82% of business leaders acknowledged that deepfakes pose a risk, but fewer than 30% had taken any steps to minimize it.

Tricking the trickster AI

With so many different cyberattacks to watch out for around the clock, it’s no wonder that organizations are overwhelmed. To date, AI-designed attacks have proven effective: they are finely targeted and difficult to attribute. AI technologies are continuously expanding existing cybersecurity threats, which could eventually become too “smart” for humans to grasp.

And as history shows, humans can be easily tricked, but AI tools can also help us. Deepfakes can be countered with algorithms fine-tuned to spot them. Advanced analytics tools can help flag fraudulent or suspicious activities. Machine learning models can detect even the most sophisticated phishing and scam attempts, as the sketch below illustrates.
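To make that last point concrete, here is a minimal, hypothetical sketch of a text-based phishing classifier. It assumes scikit-learn (TfidfVectorizer, LogisticRegression) and a tiny invented dataset, neither of which comes from this article; real detectors train on large labeled corpora and weigh many signals beyond message text, such as headers, URLs, and sender reputation.

```python
# A minimal sketch of an ML-based phishing-text classifier, for illustration only.
# Assumes scikit-learn and a tiny, invented labeled dataset (1 = phishing, 0 = legitimate).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training examples; production systems use far larger corpora.
emails = [
    "Urgent: your account is locked, verify your password here",
    "Wire the acquisition funds today, the CEO has approved it",
    "Reminder: team meeting moved to 3 pm tomorrow",
    "Your invoice for last month's subscription is attached",
]
labels = [1, 1, 0, 0]

# TF-IDF word features plus a linear classifier: a common, simple baseline.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(emails, labels)

# Score a new message; anything above a chosen threshold gets flagged for review.
suspect = ["Please confirm your credentials immediately to avoid suspension"]
print(model.predict_proba(suspect)[0][1])  # probability the message is phishing
```

Even this toy baseline illustrates the principle: the model learns statistical cues of scam language and assigns each new message a suspicion score that automated filters or analysts can act on.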

Learn about cyber resilience and how Bitdefender can help you mitigate existing or novel, human- or AI-driven risks.

 
