Hackers are using generative AI to target people more effectively and cheaply than ever before. While you may be confident in your ability to detect malicious attacks, now’s a great time to brush up on the latest tactics they use to exploit people.
How Do Hackers Use AI to Pick Their Targets?
Hackers rely on hijacked and cloned social media profiles to scam people. They either create fake profiles that mimic real users or take over genuine accounts, exploiting the trust of each account's contacts to manipulate victims.
AI-powered bots can help scrape social media for photos, bios, and posts to generate convincing cloned accounts. Once a scammer builds a fake profile, they send friend requests to the victim’s contacts, tricking them into thinking they’re interacting with someone they know.
A real account opens the door to more effective, uniquely targeted scams. AI-powered bots can map close relationships, surface private information, and analyze past conversations. From there, AI chatbots can take over and start chatting with the victim's contacts, imitating the account owner's speech patterns while pushing phishing links, fake emergencies, requests for money, or demands for sensitive information.
For example, I’m sure you’ve seen a friend’s Facebook account compromised and used to post phishing links on their Facebook feed. That’s just one part of an account takeover. Once compromised, a scammer can use AI tools to message everyone in the scam account’s contacts, hoping to snare more victims.
Due to these developments, many web services now use complex, hard-to-solve CAPTCHAs, mandatory two-factor authentication, and more sensitive behavior-tracking systems. But even with these extra defenses, people remain the biggest vulnerability.
Types of AI-Powered Scams You Should Know
Cybercriminals use multi-modal AI to create bots or spoof high-profile individuals or groups. Though many AI-powered scams use conventional social engineering techniques, the use of AI enhances their effectiveness and makes them harder to spot.
AI Phishing and Smishing Attacks
Phishing and smishing attacks have always been a staple among scammers. These attacks work by imitating well-known companies, government agencies, and online services to steal your credentials and log in to your accounts. Though widespread, phishing and smishing attacks can be easy to spot. Scammers often need to play the numbers game to get any beneficial results.
In contrast, spear-phishing attacks are far more effective. These require attackers to conduct research and reconnaissance, crafting highly personalized emails and texts to scam people. However, spear-phishing attempts are rare in our inboxes because they demand significant effort to execute successfully.
This is where AI becomes dangerous. With AI chatbots and other AI tools, cybercriminals can automate mass spear-phishing attacks without investing significant time or money in the campaign. Deepfake videos of important individuals may even be used to supplement the attack and make the bait more effective. In one instance, YouTube had to warn creators about phishing scams involving a deepfaked video of its CEO, designed to trick them into revealing their login credentials.
Romance Scams
Romance scams manipulate emotions to gain trust and affection before exploiting victims. Unlike regular phishing scams, where social engineering ends once you surrender your credentials, romance scams require scammers to spend weeks, months, or even years building relationships—a tactic known as pig butchering. Due to this significant time investment, cybercriminals can only target a few people at a time, making these scams even rarer than manual spear-phishing attacks.
However, scammers today can use AI chatbots to handle some of the most time-consuming aspects of romance scams—chatting, texting, sending pictures and videos, and even making live phone calls. Since targets are often emotionally vulnerable, they may even subconsciously excuse AI-generated conversations as quirky or even charming.
The Scottish Sun covered an incident where a neuroscientist lost thousands of pounds to an AI-powered romance scam. The fraudster used AI-generated videos and messages to convincingly portray a romantic interest, fabricating an elaborate story about working on an offshore oil rig and leaning on deepfake technology to make every claim seem legitimate. The case illustrates how far scammers' tactics have evolved.
AI-Enhanced Customer Support Scams
Customer support scams exploit people’s trust in major brands by impersonating help desks. These scams work by sending fake alerts, pop-ups, or emails claiming that your account has been locked, needs verification, or has an urgent security issue. Traditionally, scammers had to interact with victims manually, but AI chatbots have changed that.
AI-powered customer support scams now use chatbots to automate conversations and make them feel more convincing. With automation tools like n8n, chatbots can respond in real time, mimic official support agents, and even reference knowledge bases to appear more legitimate. They often implement phishing tactics by using cloned websites to trick victims into entering their credentials.
AI support scams can also work in reverse. Scammers may use AI agents to contact important services, such as banks and government programs, to extract a target's data or even reset their login credentials.
Automated Misinformation and Smear Campaigns
Hackers are now using AI chatbots to spread misinformation at an unprecedented scale. These bots generate and share false narratives across social media, targeting news feeds, community forums, and comment sections with fabricated comments. Unlike traditional misinformation campaigns that require manual effort, AI agents can now automate the entire process, making fake news spread faster and more convincingly.
By automating the creation of realistic-looking social media accounts, bots can craft posts and interact with them to spread misinformation. And with enough of these bots circulating on the internet, they can sway uninformed or undecided readers toward their narrative.
Beyond simple deception, hackers also use AI misinformation campaigns to drive traffic to scam websites. Some mix fake news with fraudulent offers, tricking victims into clicking malicious links. Because these posts often go viral before fact-checkers can respond, many people unknowingly spread the disinformation further.
What Can I Do to Protect Myself?
Though hackers apply AI to all kinds of tasks, it has proven most useful for enhancing social engineering attacks. So, to defend against most AI-powered scams, we have to put more effort into securing our privacy and verifying the legitimacy of messages, posts, and profiles.
- Limit Personal Information Sharing: Avoid being targeted in the first place. Think before sharing personal details on social media. Scammers use this info to craft targeted attacks.
- Be Skeptical of Unsolicited Communications: If you get a sudden call, message, or email from someone you don't recognize, verify the sender's identity through a separate, trusted channel, such as asking friends, family, or colleagues, before responding.
- Beware of Deepfakes: AI-generated deepfakes can mimic voices and appearances. Be cautious of unexpected video calls and messages from high-profile entities. Always check for verification badges, follower/subscriber count, and the accounts interacting with their posts.
- Think Before You Click: Phishing links disguised as normal posts are still rampant on social media. Does the play button look flat or edited? Does the post seem confusing, presented as a video but behaving like an image with an external link? It's better not to engage with posts like these.
- Check and Verify News Posts: Whether you want to avoid being misled by scammers or simply stay informed, always cross-check information across multiple sources. Also check the comment sections: bot accounts often have usernames ending in a string of numbers, appended to guarantee username availability during mass account creation.
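Two of the checks above can be turned into quick automated heuristics. The sketch below flags usernames that end in a long digit run and links whose hostname doesn't exactly match a brand's real domain. The four-digit threshold and the PayPal allowlist are illustrative assumptions, and both checks are weak signals rather than proof, so treat a flag as a prompt for closer inspection.

```python
import re
from urllib.parse import urlparse

# Weak bot signal: auto-generated accounts often end in a digit run
# appended to guarantee username availability. The 4-digit threshold
# is an arbitrary assumption; real people use such names too.
BOT_SUFFIX = re.compile(r"\d{4,}$")

def looks_auto_generated(username: str) -> bool:
    """Return True if the username ends in four or more digits."""
    return bool(BOT_SUFFIX.search(username))

# Lookalike-domain check: compare the link's actual hostname against
# the brand's known domains (hypothetical allowlist for illustration).
OFFICIAL_DOMAINS = {"paypal.com", "www.paypal.com"}

def is_official_link(url: str, official=OFFICIAL_DOMAINS) -> bool:
    """Return True only if the URL's hostname exactly matches a known domain."""
    host = (urlparse(url).hostname or "").lower()
    return host in official

print(looks_auto_generated("mark_taylor84921"))                        # True
print(is_official_link("https://paypal.com.secure-login.net/verify"))  # False
```

Note that the domain check compares the full hostname, not a substring: `paypal.com.secure-login.net` contains "paypal.com" but its real domain is `secure-login.net`, which is exactly the trick cloned phishing sites rely on.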
AI chatbots offer convenience but also give hackers advanced scamming tools. Remember, though, that many AI-assisted scams still follow the same patterns as traditional scams; they're just harder to spot and more widespread. By staying informed and verifying your online interactions, you can defend yourself against these evolving threats.