Scams to Look Out For in 2026: AI, Deepfakes & Emerging Threats
AI-powered fraud is projected to cause $40 billion in losses globally by 2027. As we head into 2026, scammers are weaponising artificial intelligence, deepfake technology, and voice cloning at an unprecedented scale.

Scams are evolving faster than ever. In 2024, Australians lost over $2 billion to scams, and experts predict 2026 will bring even more sophisticated threats. The rise of generative AI has given scammers powerful new tools—from creating convincing deepfake videos to cloning voices with just 3 seconds of audio.
What used to require teams of criminals can now be done by a single person with access to free AI tools. The democratisation of technology means the democratisation of fraud.
This guide covers the 8 most dangerous scam trends predicted for 2026, based on cybersecurity research from Trend Micro, McAfee, TransUnion, and the Australian Cyber Security Centre.
Deepfake scams have increased 1,500% since 2023
Cybersecurity firm Cyble reports that "Deepfake-as-a-Service" platforms are now openly available on underground markets, making it trivially easy for scammers to create convincing fake videos and audio of real people—including family members, CEOs, and government officials.
8 Scams to Watch in 2026
1. AI Voice Cloning Scams
Scammers can now clone anyone's voice with just 3 seconds of audio—often scraped from social media, voicemail greetings, or public videos. McAfee research found these clones achieve 85% voice match accuracy, fooling even family members. One in four Australian adults has already encountered an AI voice cloning attempt.
How It Works: The scammer obtains a short audio clip from TikTok, Instagram, or YouTube, then uses AI to generate unlimited realistic voice samples. These are used for "Hi Mum" scams, fake kidnapping calls, and CEO fraud where criminals impersonate executives to authorise fraudulent payments.
Warning signs: Urgent calls demanding immediate action, caller refuses video call, unusual speech patterns or pauses, caller discourages you from verifying through normal channels, and requests for unusual payment methods.
2. Deepfake Video Scams
Real-time deepfake technology now enables scammers to impersonate anyone during live video calls. This is being used for romance scams, investment fraud, and business email compromise. ASIC has warned of deepfake videos impersonating finance personalities and news anchors to promote fake investment schemes.
How It Works: Scammers build a deepfake from public photos and videos of someone the victim knows and trusts, then use real-time face-swapping during live video calls. The victim believes they're seeing their real friend, relative, or business partner.
Warning signs: Video quality issues, unnatural facial expressions, audio/video sync problems, refusal to make spontaneous movements when requested, and lighting that doesn't match the environment.
3. Agentic AI Scams (Autonomous AI Agents)
2026 will see the rise of "agentic AI"—autonomous AI systems that can independently plan and execute complex scam operations. These AI agents can manage entire fraud campaigns without human intervention, identifying targets, crafting personalised phishing messages at scale, and managing ongoing conversations that adapt to victim responses.
Trend Micro's 2026 predictions warn that agentic AI will enable "hyper-personalised" attacks that analyse victims' social media, purchasing habits, and communication patterns. A single criminal can now run thousands of simultaneous, personalised scam conversations.
Warning signs: Messages that reference very specific personal details, sophisticated conversations that adapt to your responses, coordinated contact across multiple platforms, and patterns that suggest automated but personalised communication.
4. Smart Home Hijacking
As Australian homes become more connected with smart speakers, doorbells, cameras, and thermostats, scammers are developing new ways to exploit IoT vulnerabilities for blackmail, theft, and invasion of privacy. Security researchers predict this will be a major attack vector in 2026.
How It Works: Criminals exploit weak passwords on smart devices, intercept doorbell cameras to monitor when you're away, use smart speaker access to eavesdrop or make fraudulent purchases, and lock victims out of their own systems for ransom.
Warning signs: Devices behaving unexpectedly, unfamiliar devices appearing on your network, smart home commands you didn't give, unexpected purchases on linked accounts, and devices activating at unusual times.
5. Synthetic Identity Fraud
Synthetic identity fraud combines real stolen information (like a legitimate Tax File Number or Medicare number) with fabricated details to create entirely new "people" that exist only on paper—but can open bank accounts, apply for credit, and claim benefits. TransUnion reports this is the fastest-growing financial crime globally, with losses expected to exceed $5 billion annually.
How It Works: Scammers obtain real Australian identity data from data breaches, combine real TFN/DOB with fake name and address, use AI to generate realistic supporting documents, then build credit history over months before "busting out" with maximum credit.
Warning signs: Unexpected credit inquiries on your credit report, unfamiliar accounts appearing, unexplained changes to your credit score, correspondence from companies you've never dealt with, and tax returns rejected because "you've already filed."
6. Multi-Channel "Platform Hopping" Scams
Scammers are increasingly starting contact on one platform (Facebook, LinkedIn, dating apps) and quickly moving victims to others (WhatsApp, Telegram, phone) to evade platform safety measures. Each platform switch adds cognitive load and reduces the victim's ability to verify legitimacy.
Why It's Effective: By the time money is sent, communication spans 3-4 different platforms, making fraud harder to track and report. Victims lose the safety protections built into regulated platforms, and the fragmented evidence makes prosecution nearly impossible.
Warning signs: Quick pressure to move communication off initial platform, excuses for why they "can't" use regulated platforms, requests to download new messaging apps, and reluctance to communicate through channels with identity verification.
7. AI-Powered Investment Scams
Cryptocurrency and investment scams are being supercharged by AI-generated websites, fake trading platforms with realistic-looking dashboards, and AI chatbots that provide 24/7 "support" to victims. More than 7,200 fake investment platforms were taken down in 2024, and experts predict AI will enable that number to double in 2026.
How It Works: AI generates professional-looking investment platforms in hours, complete with deepfake testimonials from "successful investors." AI chatbots handle victim inquiries around the clock, and fake dashboards show "growing" investments. The platform disappears once significant funds are deposited.
Warning signs: Unsolicited investment opportunities, guaranteed high returns, pressure to invest quickly, cryptocurrency-only payments, inability to withdraw "profits," and platforms not registered with ASIC. Learn more in our Investment & Ponzi Schemes guide.
8. DeFi and Crypto Rug Pulls 2.0
Decentralised Finance (DeFi) "rug pulls"—where developers abandon projects after collecting investor funds—are becoming more sophisticated. DeFi rug pulls caused $6 billion in losses in early 2025 alone, and the trend is accelerating with AI-generated whitepapers, fake team identities, and coordinated social media hype.
How It Works: AI generates legitimate-looking whitepapers and documentation, deepfakes supply the team photos and LinkedIn profiles, coordinated bot campaigns manufacture artificial hype, and the project's smart contracts contain hidden withdrawal functions that let developers drain all funds.
Warning signs: Anonymous team members, too-good-to-be-true returns, heavy social media hype with bot-like accounts, locked liquidity that can be unlocked by developers, rushed launches, and no code audits from reputable firms. Read our cryptocurrency scam prevention guide.
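The "hidden withdrawal function" and "no code audit" warning signs above can be made a little more concrete. The Python sketch below pulls a token contract's verified source code from a public block explorer and scans it for patterns that are often (though not always) linked to owner privileges abused in rug pulls. It is only a rough screen under stated assumptions: it uses the Etherscan getsourcecode endpoint, the ETHERSCAN_API_KEY environment variable is a placeholder you would supply yourself, and the red-flag keyword list is illustrative rather than authoritative. Nothing here replaces an independent audit by a reputable firm.

```python
# Rough rug-pull screen: fetch a token contract's verified source code from
# Etherscan and flag patterns often associated with hidden owner privileges.
# Illustrative only -- a keyword scan is not a substitute for a professional
# smart-contract audit.
import os
import re
import requests

# Assumed red-flag patterns; real reviews look at the actual logic, not names.
RED_FLAGS = [
    r"onlyOwner",            # owner-gated functions
    r"\bmint\s*\(",          # owner can print new tokens
    r"blacklist|blocklist",  # owner can block holders from selling
    r"setTax|setFee",        # owner can raise the sell tax arbitrarily
    r"removeLiquidity",      # explicit liquidity-drain helpers
]

def fetch_verified_source(address: str) -> str:
    """Return the verified Solidity source for a contract, or '' if unverified."""
    resp = requests.get(
        "https://api.etherscan.io/api",
        params={
            "module": "contract",
            "action": "getsourcecode",
            "address": address,
            "apikey": os.environ.get("ETHERSCAN_API_KEY", ""),  # placeholder key
        },
        timeout=30,
    )
    resp.raise_for_status()
    result = resp.json().get("result", [])
    return result[0].get("SourceCode", "") if result else ""

def screen_contract(address: str) -> list[str]:
    source = fetch_verified_source(address)
    if not source:
        return ["No verified source code -- treat as a major red flag"]
    return [pattern for pattern in RED_FLAGS if re.search(pattern, source)]

if __name__ == "__main__":
    # Placeholder address for illustration only.
    hits = screen_contract("0x0000000000000000000000000000000000000000")
    print("Potential red flags:", hits or "none found (still not proof of safety)")
```

Even a clean result from a screen like this proves nothing on its own; it simply helps you ask better questions before handing over funds.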
How to Protect Yourself from AI-Powered Scams
SafeAus uses advanced AI detection to protect you from emerging threats.
As scammers adopt AI, so do we. SafeAus detects AI-generated content patterns in scam messages, identifies known scam phone numbers from our Australian database, analyses URLs for phishing indicators and fake investment sites, and provides instant verification—all without storing your personal data.
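To give a feel for what rule-based URL analysis looks like in general, here is a simplified Python sketch that scores a link against a handful of common phishing indicators. It is an illustration only, not SafeAus's actual detection logic; the keyword and domain-ending lists are assumptions, and real services layer on reputation databases, page-content analysis, and machine-learning models.

```python
# Simplified phishing-URL heuristics -- illustrative only, not SafeAus's
# actual detection logic. Real services combine many more signals.
from urllib.parse import urlparse

SUSPICIOUS_KEYWORDS = {"login", "verify", "secure", "update", "account", "wallet"}
SUSPICIOUS_TLDS = {".zip", ".top", ".xyz", ".click"}   # assumed examples only

def phishing_score(url: str) -> int:
    """Return a rough risk score; higher means more suspicious."""
    parsed = urlparse(url if "://" in url else "http://" + url)
    host = parsed.hostname or ""
    score = 0
    if parsed.scheme != "https":
        score += 1                                  # no TLS
    if host.replace(".", "").isdigit():
        score += 2                                  # raw IP address instead of a domain name
    if host.count(".") >= 4:
        score += 1                                  # long subdomain chain, e.g. bank.com.au.verify-now.example
    if any(host.endswith(tld) for tld in SUSPICIOUS_TLDS):
        score += 1                                  # domain endings heavily abused in phishing
    if any(word in url.lower() for word in SUSPICIOUS_KEYWORDS):
        score += 1                                  # credential/urgency keywords in the link itself
    if "@" in parsed.netloc:
        score += 2                                  # user@host trick hides the real destination
    return score

if __name__ == "__main__":
    for candidate in ("https://www.commbank.com.au", "http://203.0.113.7/secure-login-verify"):
        print(candidate, "->", phishing_score(candidate))
```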
Establish family safe words
Create a secret code word with family members to verify identity during unexpected calls. If someone claims to be your child or parent in an emergency, ask for the safe word. AI can clone voices but it can't know your secret family code. This simple step defeats most voice cloning scams instantly.
Verify through separate channels
If you receive an urgent call or message, hang up and contact the person through a number you already have. Don't use any contact details provided by the caller. If someone claims your bank account is compromised, call the number on the back of your card—not the number they provide.
Be suspicious of urgency
AI scams rely on pressure tactics to bypass your critical thinking. Take time to verify any unexpected request for money or information. Legitimate emergencies can be verified. Scams cannot withstand scrutiny, which is why criminals create artificial time pressure.
Limit public voice and video
Consider who can access your voice recordings and videos online. 3 seconds of audio is enough to clone your voice. Review privacy settings on social media and consider whether public voicemail greetings are necessary. For high-profile individuals, this is especially critical.
Enable multi-factor authentication
Protect all accounts with MFA to prevent unauthorised access. Use authenticator apps rather than SMS-based MFA when possible, as SMS codes can be intercepted through SIM swapping attacks. This adds a critical second layer of protection. Learn more in our account security guide.
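Part of why authenticator apps are safer is that the six-digit codes are generated on your device from a shared secret and the current time, so nothing travels over the phone network for a SIM-swapper to intercept. A minimal sketch using the open-source pyotp library shows the idea; the secret here is randomly generated for the demo, whereas in practice it comes from the QR code your provider displays during MFA setup.

```python
# Minimal TOTP demo using the open-source pyotp library (pip install pyotp).
# The 6-digit code is derived on-device from a shared secret plus the current
# time, so nothing is sent over SMS that a SIM-swapper could intercept.
import pyotp

# In practice the shared secret comes from the QR code your bank or email
# provider shows during MFA setup; here we generate a throwaway one.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

code = totp.now()
print("Current 6-digit code:", code)            # changes roughly every 30 seconds
print("Server-side check would pass:", totp.verify(code))
```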
Check your credit report regularly
Monitor for synthetic identity fraud by checking your credit report (free via Equifax, Experian, or illion). Set up credit monitoring alerts to be notified of new accounts or inquiries. If your identity has been stolen, you'll catch it before criminals can cause maximum damage.
Secure your smart home devices
Keep IoT devices updated with the latest firmware and use strong, unique passwords for each device. Change default passwords immediately upon installation. Consider a separate network for smart home devices and regularly audit what devices are connected to your home network.
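As one low-tech starting point for that audit, the sketch below lists devices your computer has recently seen on the network by parsing the output of the standard arp -a command. It assumes that command is available on your system; your router's admin page or a dedicated network scanner will usually give a more complete picture.

```python
# Quick view of devices your computer has recently talked to, by parsing
# the output of the standard `arp -a` command. A router's admin page or a
# dedicated scanner will give a more complete picture of your network.
import re
import subprocess

def list_arp_entries() -> list[tuple[str, str]]:
    """Return (ip, mac) pairs found on the same line of `arp -a` output."""
    output = subprocess.run(
        ["arp", "-a"], capture_output=True, text=True, check=False
    ).stdout
    entries = []
    for line in output.splitlines():
        ip = re.search(r"\d{1,3}(?:\.\d{1,3}){3}", line)
        mac = re.search(r"[0-9a-fA-F]{1,2}(?:[:-][0-9a-fA-F]{1,2}){5}", line)
        if ip and mac:
            entries.append((ip.group(), mac.group()))
    return entries

if __name__ == "__main__":
    for ip, mac in list_arp_entries():
        print(f"{ip:<16} {mac}")
```

If a device shows up that you can't match to anything you own, change your Wi-Fi password and review each smart device's settings.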
Stay sceptical of platform switches
Be cautious when asked to move conversations to different apps or platforms. Legitimate contacts don't need to switch platforms urgently. If someone you've just met online insists on moving to WhatsApp or Telegram immediately, consider why they want to leave a platform with better fraud protection.
What to Do If You've Been Scammed
If you suspect you've fallen victim to an AI-powered scam, immediate action is critical. Time is your most valuable resource in limiting damage.
Stop all communication immediately
Cease all contact with the scammer. Do not respond to further messages, even if they become threatening or promise to return money. Block their numbers and accounts across all platforms. Any further engagement only gives them more opportunities to manipulate you.
Contact your bank immediately
Call the number on the back of your card and report unauthorised transactions. The faster you report, the better your chances of preventing or reversing fraudulent charges. Many banks can freeze accounts and reverse transactions if contacted within hours of the fraud.
Report to authorities
Report to Scamwatch and ReportCyber. Your report helps authorities track AI scam patterns and warn other Australians. For detailed recovery steps, see our guide on what to do if you've been scammed.
Contact IDCARE for identity theft support
Call 1800 595 160 if your personal information has been compromised. IDCARE provides free specialist assistance for identity crime victims and creates personalised response plans. They can help place alerts on your credit file and monitor for misuse of your identity.
Preserve evidence
Screenshot all conversations, save any files received, and document the timeline of events. This evidence may be crucial for bank disputes, insurance claims, or potential prosecution. Note any phone numbers, email addresses, website URLs, and cryptocurrency wallet addresses used by the scammer.
The Future of Scam Protection
The scam landscape in 2026 will be defined by artificial intelligence—both as a weapon for criminals and a shield for consumers. While the threats are more sophisticated than ever, awareness remains your strongest defence.
By understanding how deepfakes, voice cloning, and AI agents work, you can recognise the warning signs before falling victim. Stay informed, verify everything through trusted channels, and use tools like SafeAus to help identify suspicious contacts.
The criminals have AI. So should your protection. For more information and the latest scam alerts, visit Scamwatch.gov.au and follow the National Anti-Scam Centre.