The Year We Stopped Trusting Our Eyes

AI Scams


Why AI-powered deception is the defining consumer threat of 2026, and what’s being done about it

The New Normal

Three years ago, an online scam still came with tells. The grammar was off. The logo was stretched. The “Nigerian prince” had a pixelated cousin in your inbox who couldn’t quite spell “urgent.” If you slowed down, you could spot it.

That window has closed.

In April 2026, the U.S. Department of Justice announced the coordinated takedown of nine scam centers and at least 276 arrests across Dubai, Thailand, and the U.S., charging operators of so-called “pig-butchering” rings that had drained millions from American victims. The defendants weren’t lone hackers in basements. They ran companies — “Ko Thet,” “Sanduo Group,” “Giant Company” — with managers, recruiters, scripts, and AI tooling that let a handful of people operate at the scale of a call centre staffed by hundreds.

According to TIME’s reporting in early April, what’s emerging is what one United Nations investigator called a “perfect storm”: cheap generative AI, advanced malware, and a sluggish global economy combining to produce industrial-scale fraud that’s “much more sophisticated than it was three years ago.” The same syndicates that ran illicit Chinese gambling rings before COVID converted shuttered casinos along the Mekong into compounds, then upgraded the technology. Now they’re reinvesting profits into better AI, faster malware, and broader targeting.

The scale numbers are sobering. The Global Anti-Scam Alliance found that 57% of adults globally were victims of a scam in the past year, with 23% losing money. Javelin Strategy & Research pegged combined U.S. identity fraud and scam losses at $38 billion in 2025 — and that figure was actually down from 2024, not because the threat is shrinking but because the nature of fraud is shifting toward harder-to-quantify trust erosion. The FBI’s IC3 logged $16.6 billion in cybercrime losses in 2024 alone, up 33% year-over-year.

And the volume of synthetic content fuelling all this? Cybersecurity firm DeepStrike estimates online deepfakes jumped from roughly 500,000 in 2023 to about 8 million in 2025. A 16x increase in two years.

Here’s the part that matters for everyone reading this: 73% of people surveyed said they were confident they could spot a scam. Most of them are wrong. The signals we all learned to look for — typos, weird URLs, strange phrasing — have been engineered out of the product.

What This Actually Looks Like in Your Inbox

The textbook examples from the last 12 months tell the story better than any statistic.

The fake recruiter. UK cybercrime reports of recruitment scams more than doubled between 2022 and 2024. Lloyds Banking Group logged a 237% rise in job scams in just eight months of 2025. Monzo says more than 10,000 of its customers fell victim to recruitment scams last year alone. The playbook: an AI-cloned LinkedIn profile of a real recruiter, outreach personalized using the target’s own CV, and a pitch polished enough that even sceptical professionals are biting. One Guardian reporter wrote about being fooled herself — during maternity leave, when she was tired and not paying close attention. She lost an hour and her pride. Others have lost their savings.

The fake candidate. The flip side is just as ugly. Background-screening firm Checkr says 23% of companies have already encountered identity fraud among new hires. Gartner is forecasting that by 2028, one in four job candidate profiles globally will be fake. Amazon disclosed in late 2025 that it had blocked more than 1,800 suspected North Korean state-affiliated applicants since April 2024 — with attempts growing 27% quarter over quarter. Ferrari was targeted by a deepfake WhatsApp impersonation of its CEO. Voice security firm Pindrop interviewed a candidate whose facial expressions were slightly out of sync with his voice and traced the IP address halfway around the world.

Pushpaganda. This month, threat intelligence team HUMAN unmasked a campaign that hijacked Google’s Discover feed using AI-generated fake news stories to drive users to scareware sites that pushed browser notifications and ad fraud. At its peak, the campaign generated 240 million bid requests across 113 domains in seven days. Google has since patched it. Others will follow.

Pig-butchering at scale. The DOJ’s San Diego indictments last week describe scammers using fake romance and friendship — cultivated over weeks or months — to push victims into fake crypto investment platforms, encouraging them to borrow from family and take out loans. The scammers showed off their own “returns.” Once victims transferred funds, the money was gone. Operation Level Up, an FBI initiative that started in 2024, has notified almost 9,000 victims and saved an estimated $562 million as of April 2026. That’s just one program in one country.

The common thread: AI didn’t invent any of these scams. It just made them cheaper, faster, more personalized, and far harder to detect. As one fraud expert put it, detection is now a behavioural problem, not a grammar problem.

What Private Industry Is Doing

This is where the most interesting work is happening, because the businesses most exposed to AI fraud — banks, hiring platforms, social media companies, payment processors — have economic incentives to solve it that regulators don’t.

A short list of what’s emerging:

  • Real-time deepfake detection built into video conferencing and hiring platforms, designed to flag lip-sync inconsistencies, lighting anomalies, and voice-face desynchronization in live calls.
  • Behavioural biometrics that analyze how a user types, moves a mouse, or holds a phone — patterns that are extraordinarily hard to fake even with advanced AI.
  • Privacy-preserving identity verification that lets a person prove they’re real, of legal age, and who they say they are without handing over a stack of personal documents to every counterparty. This is the space where solutions like cryptographic age and identity verification are starting to scale.
  • Content provenance and watermarking standards (C2PA, content credentials) that let cameras, editing software, and AI generators tag content with verifiable metadata about how it was created.
  • Cross-platform fraud signal sharing between banks, telcos, and platforms — quietly, because of the regulatory complications, but increasingly aggressively.

What links these is a recognition that detection alone won’t win. Generation quality is improving faster than detection tools can keep up. Gartner predicts that by 2026, 30% of enterprises will find standalone identity verification unreliable in isolation. The industry is converging on a layered model — detection plus behavioural analytics plus identity verification plus provenance — because no single layer is going to be enough on its own.
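To make the layered model concrete, here is an illustrative sketch only: a toy fraud score that combines independent per-layer signals. The layer names, scores, weights, and the noisy-OR combination rule are all invented for demonstration and do not reflect any vendor’s real model.

```python
# Toy "layered defence" score: each layer reports a risk in [0, 1].
# A noisy-OR combination flags the session if ANY layer is confident,
# which is why stacking several weak layers can beat one strong detector.

def layered_fraud_score(signals: dict[str, float], weights: dict[str, float]) -> float:
    """Combine per-layer risk scores (0.0 = clean, 1.0 = certain fraud)."""
    clean_prob = 1.0
    for layer, score in signals.items():
        w = weights.get(layer, 1.0)
        clean_prob *= 1.0 - min(1.0, w * score)
    return 1.0 - clean_prob

# Hypothetical session: no single layer is sure, but together they are.
session = {
    "deepfake_detection": 0.30,     # slight lip-sync anomaly on the call
    "behavioural_biometrics": 0.55, # typing cadence doesn't match history
    "identity_verification": 0.10,  # documents check out
    "provenance": 0.40,             # profile photo lacks content credentials
}
weights = {"deepfake_detection": 1.0, "behavioural_biometrics": 1.0,
           "identity_verification": 1.0, "provenance": 0.8}

risk = layered_fraud_score(session, weights)
print(f"combined risk: {risk:.2f}")  # prints: combined risk: 0.81
```

The point of the sketch is the shape of the argument, not the numbers: every individual layer above sits at or below 0.55, yet the combined score crosses most plausible alert thresholds.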

What We Are Looking At

Hydaway Digital is a North Vancouver–based GPU compute and AI detection company that, in the span of roughly four months, has gone from a single-client GPU rental pilot to a multi-tenant SaaS platform with a deepfake detection product live in production.

At a moment when both compute scarcity and synthetic-media fraud are hitting structural inflection points, Hydaway deserves a serious look.

Company: Hydaway Digital

Tickers: TSXV:HIDE – OTCQB:HIDDF – FSE:C88

Headquarters: North Vancouver, British Columbia

Sector/Industry: Technology/AI Infrastructure & Software

Recent Share Price: C$0.60

52-Week Range: C$0.21–C$1.03

Market Cap: C$72.09 million

Shares Outstanding: 35.93 million

1-Year Return: +604%

Beta: -1.65

Recent Financing: C$1.2M private placement at C$0.25 (Feb 2026); units include 1-yr warrants at C$0.40 with $0.75 acceleration trigger

What Hydaway has actually done — the news flow

This isn’t a story about a company “exploring opportunities.” Hydaway has delivered a string of concrete operational milestones in roughly 16 weeks:

  • February 4, 2026 — Closed the acquisition of RealityChek, a multi-modal AI detection and content verification platform with blockchain-anchored authentication. Consideration: 6,000,000 shares at a deemed price of $0.14 (total $840,000), plus up to 1.86 million milestone shares tied to platform performance.
  • February 5, 2026 — Closed a $1.2 million private placement at $0.25 per unit (each unit including a one-year warrant at $0.40, with acceleration if shares trade above $0.75 for five consecutive days).
  • March 9, 2026 — Completed the technical integration of RealityChek’s AI detection models onto Hydaway’s GPU infrastructure, materially accelerating RealityChek’s ability to train and deploy detection models.
  • March 16, 2026 — Transitioned the GPU compute platform from a single-client pilot to a multi-tenant SaaS model with three tiers (Starter, Growth, Enterprise). API-first deployment, real-time telemetry, dedicated SLAs, predictable subscription revenue.
  • March 18, 2026 — Launched DETECT by RealityChek, a public-facing AI image and URL detection product live at detect.realitychek.com, powered by Hydaway’s GPUs.
  • April 23, 2026 — Integrated Cardlogx’s AI-powered card detection and image analysis system onto Hydaway’s GPU infrastructure. Cardlogx serves the $100 billion trading card industry with eBay/Shopify sales sync, scanning, pricing, inventory, and analytics. CEO Karl Kottmeier explicitly framed this as a “clear template for onboarding enterprise clients with demanding real-time workloads,” with focus shifting to “commercial expansion.”

That’s a coherent operational arc: build the compute, close an acquisition that creates a software flywheel, integrate, productize, then onboard enterprise clients. Each step de-risks the next. And the most recent step — the Cardlogx onboarding — demonstrates that the multi-tenant SaaS architecture works as advertised.

Two AI tailwinds, one company

Most small-cap AI plays are exposed to one side of the AI economy: either the picks-and-shovels (compute, data centers) or the application layer (specific AI products). Hydaway is exposed to both through a single integrated stack. The GPU rental business sells compute capacity into the broader AI buildout. RealityChek consumes that compute to deliver a high-margin SaaS detection product — and is itself a customer of Hydaway’s compute. Every dollar of detection revenue effectively monetizes Hydaway’s GPUs twice: once at the infrastructure layer, once at the application layer.
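The “monetized twice” structure can be made concrete with a back-of-envelope calculation. All figures below are hypothetical placeholders chosen for illustration; they are not Hydaway numbers and no rates or margins have been disclosed in the material above.

```python
# Hypothetical illustration of the two-layer revenue structure described
# above: the same GPU capacity appears once as billed compute and once
# as margin on the software it powers. All inputs are invented.

gpu_hours = 1_000          # compute consumed by the detection product (assumed)
infra_rate = 2.00          # C$ per GPU-hour at the infrastructure layer (assumed)
saas_revenue = 5_000.0     # C$ subscription revenue at the application layer (assumed)
saas_gross_margin = 0.70   # generic SaaS-style margin assumption

infra_revenue = gpu_hours * infra_rate          # layer 1: compute billed
app_gross_profit = saas_revenue * saas_gross_margin  # layer 2: software margin

print(f"infrastructure layer: C${infra_revenue:,.0f}")
print(f"application layer:    C${app_gross_profit:,.0f}")
```

Under these made-up inputs, the same block of GPU time contributes on both lines of the income statement, which is the structural claim, whatever the real unit economics turn out to be.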


Check out Hydaway here | Visit our site!

Sources:

Section I — The new normal

DOJ takedown: 276 arrests, nine scam centers, Dubai/Thailand/China/U.S./Meta cooperation, Ko Thet/Sanduo/Giant companies, April 29, 2026 announcement

“Perfect storm” framing; UN investigator quote; Mekong compounds; AI/malware/sluggish-economy combination

Global scam victimization stats: 57% of adults victims, 23% lost money, 73% confident, 46,000-respondent survey across 42 countries

$38 billion combined U.S. identity fraud + scam losses in 2025; “down from 2024” but masking worsening risk

FBI IC3: $16.6 billion in cybercrime losses in 2024, +33% YoY

  • FBI Internet Crime Complaint Center (IC3), 2024 Internet Crime Report. https://www.ic3.gov/AnnualReport/Reports/2024_IC3Report.pdf
  • Note: at the time of writing, the FBI had also referenced a separate $20+ billion cybercrime loss figure for 2025, cited in Cybernews coverage.

Online deepfakes: 500K in 2023 → 8M in 2025 (16x growth)

  • DeepStrike cybersecurity estimates, cited across multiple secondary sources including industry reports compiled in: https://bayelsawatch.com/deepfake-statistics/
  • Caveat: this figure is widely cited, but the original DeepStrike methodology is not always linked. A more rigorously sourced anchor for “the volume of synthetic content is exploding” is the Mordor Intelligence “Fake Image Detection Market” report, which puts the market at $1.42B in 2025, growing to $7.43B by 2031 (31.73% CAGR). https://www.mordorintelligence.com/industry-reports/fake-image-detection-market

Section II — What this looks like in your inbox

UK recruitment scams more than doubled (2022–2024); Lloyds 237% rise (Jan–Aug 2025); Monzo 10,000+ victims

Checkr: 23% of companies have encountered identity fraud among new hires

  • Checkr, employer survey data (cited in Guardian and other 2025–2026 hiring-fraud coverage).

Gartner: by 2028, 1 in 4 job candidate profiles will be fake

  • Gartner, public forecast widely cited in HR and security press through 2025–2026. Original Gartner research note: “Predicts 2025: How AI Will Reshape Identity and Hiring.”

Amazon: 1,800+ suspected DPRK applicants blocked since April 2024; 27% QoQ growth

Ferrari deepfake WhatsApp CEO impersonation

  • Widely reported in 2024; original coverage in Bloomberg and The Guardian. (Note: this is a frequently cited “exemplar” anecdote — not central to the data argument but useful as illustration.)

Pindrop voice/face desync candidate detection anecdote

  • Pindrop company commentary; cited in 2026 voice-biometrics and hiring-fraud coverage.

Pushpaganda campaign: HUMAN Satori, 240M bid requests, 113 domains, seven-day peak, Google Discover feed exploitation, AI-generated articles, expansion from India to US/UK/Canada/Australia/South Africa

Operation Level Up: ~9,000 victims notified, $562 million saved (as of April 2026)

  • U.S. Department of Justice April 29, 2026 release (same as DOJ takedown source above).
  • FBI Operation Level Up program description (initiated January 2024).

Section V — What private industry is doing

Real-time deepfake detection vendors (Pindrop, Reality Defender)

  • Company materials and 2025–2026 industry coverage.

Behavioural biometrics, content provenance (C2PA), cross-platform fraud signal sharing

  • General industry consensus. C2PA (Coalition for Content Provenance and Authenticity) standards: https://c2pa.org

Gartner: by 2026, 30% of enterprises will find standalone identity verification unreliable

  • Gartner, “Predicts 2024: Identity-First Security.” (Widely cited in identity-verification industry press.)

KYC fraud / synthetic identity attempts up 2,137% in three years; deepfakes now 6.5% of all fraud

  • Mordor Intelligence, Fake Image Detection report (cited above).

This source list compiled May 5, 2026. URLs verified as of that date. For any republication, sources should be re-verified — particularly the DOJ release URL and any market-sizing reports that may have been updated.

Disclaimer — Editorial Content

This article concerns Hydaway Digital Corp. (TSXV: HIDE) (OTCQB: HIDDF) (FSE: C88) and is published by NewsAMP Media Corp. (“NewsAMP,” “we,” “us,” or “our”) for general informational and educational purposes only. This is not paid promotional content. NewsAMP has not been compensated by Hydaway Digital Corp., its affiliates, or any third party to produce or distribute this communication. NewsAMP does not hold any shares, options, warrants, or other securities of Hydaway Digital Corp., and has not received securities of any kind as consideration for this content.

Readers should nevertheless be aware of the following relationship and affiliation disclosures. NewsAMP Media Corp. and Hydaway Digital Corp. or its subsidiaries, including but not limited to RealityChek, may share common ownership, directors, officers, employees, contractors, or principals. One or more principals of NewsAMP Media Corp. also hold roles at Hydaway Digital Corp. and/or its subsidiaries. These relationships, where they exist, represent a material conflict of interest that may affect the objectivity of the content even in the absence of monetary compensation, and readers should treat this article accordingly.

NOT INVESTMENT ADVICE

NewsAMP Media Corp. is not registered or licensed by any governing body in any jurisdiction to provide investment advice or make investment recommendations. NewsAMP is not a registered investment adviser, exempt market dealer, broker-dealer, securities dealer, financial planner, or investment professional. Nothing in this article constitutes, or should be construed as, investment, financial, legal, tax, or accounting advice, or a recommendation to buy, sell, or hold any security or other financial instrument. The information provided is general and impersonal, and is not tailored to any particular reader’s investment objectives, financial situation, risk tolerance, or particular needs.

Always do your own research and consult with a licensed investment professional before making any investment decision. This communication should not be used as the basis for making any investment.

NO OFFER OR SOLICITATION

This communication is not, and should not be construed to be, an offer to sell or a solicitation of an offer to buy any security in any jurisdiction. The content is not directed at, and is not intended for distribution to or use by, any person in any jurisdiction where such distribution or use would be contrary to applicable law.

SOURCE OF INFORMATION

The information in this article is collected from public sources, including but not limited to the profiled company’s website, news releases, continuous disclosure filings on SEDAR+, third-party news outlets, and publicly available market data. While we make reasonable efforts to ensure accuracy at the time of publication, NewsAMP makes no representations or warranties regarding the accuracy, completeness, timeliness, or reliability of any information contained in this article. The information is not independently verified beyond review of the public sources cited, and may become inaccurate or outdated after publication. NewsAMP undertakes no obligation to update this article.

You should assume all information in this article requires independent verification, and you are encouraged to review the company’s continuous disclosure filings on SEDAR+ (https://www.sedarplus.ca) before making any investment decision.

FORWARD-LOOKING STATEMENTS

This article may contain “forward-looking statements” or “forward-looking information” within the meaning of applicable Canadian and U.S. securities laws. These include statements about future events, business plans, financial performance, market conditions, regulatory developments, and similar matters. Forward-looking statements are based on assumptions that may prove incorrect and are subject to known and unknown risks, uncertainties, and other factors that may cause actual results to differ materially from those expressed or implied. NewsAMP undertakes no obligation to update forward-looking statements. Readers should review the relevant issuer’s continuous disclosure filings on SEDAR+ for a full discussion of risk factors.

RISK OF INVESTING

Investing in securities is inherently risky. Investing in small-cap, micro-cap, and early-stage issuers — including securities listed on the TSX Venture Exchange and the Canadian Securities Exchange — involves a high degree of risk, including the risk of total loss of capital. Securities of such issuers may be illiquid, highly volatile, and unsuitable for many investors. Past performance is not indicative of future results. You should only invest money you can afford to lose entirely, and only after consulting a registered investment professional licensed in your jurisdiction.

NO ENDORSEMENT

The publication of this article does not constitute an endorsement of Hydaway Digital Corp. or its securities by NewsAMP Media Corp. Any opinions expressed are those of the author at the time of writing and do not necessarily reflect the views of NewsAMP Media Corp.

INDEMNIFICATION / RELEASE OF LIABILITY

By reading this article, you acknowledge that you have read and understood this disclaimer in full, and you agree and accept that NewsAMP provides no warranty in respect of this communication or the profiled company and accepts no liability whatsoever. To the greatest extent permitted under applicable law, you release and hold harmless NewsAMP Media Corp., its affiliates, assigns, and successors from any and all liability, damages, injury, and adverse consequences arising from your use of or reliance on this communication. You further agree that you are solely responsible for any financial outcome related to or arising from your investment decisions.

INTELLECTUAL PROPERTY

All trademarks used in this communication are the property of their respective trademark holders. Other than NewsAMP.ca and any other trademarks owned by NewsAMP Media Corp., NewsAMP is not affiliated with, connected to, or endorsed by, the trademark holders unless otherwise stated.

GOVERNING LAW

This disclaimer is governed by the laws of the Province of British Columbia and the federal laws of Canada applicable therein. Use of NewsAMP’s services is also subject to our Terms and Conditions (https://newsamp.ca/page/terms-and-conditions) and Privacy Policy (https://newsamp.ca/page/privacy-policy).

CONTACT

Email: info@newsamp.ca

Phone: 778-549-1838

Copyright (C) 2026 NewsAMP Media Corp. All rights reserved.
You are receiving this email because you opted in via our website.

Our mailing address is:
NewsAmp Media Corp 704-595 Howe St PO Box 35 Vancouver, BC V6C 2T5 Canada

View in browser | Update your preferences | Unsubscribe

Leave a Comment