
In late 2024, internal analysts at Meta reached a stark conclusion: more than one in ten dollars the company expected to earn from advertising that year could be linked to scams or other prohibited promotions, a sum estimated at about 16 billion dollars. Those figures, buried in internal presentations, suggested that fraudulent and policy‑breaking advertisers were not outliers but a significant part of the business. Once the documents surfaced through leaks and investigations, they triggered a wave of scrutiny from consumer advocates, journalists, regulators, and lawmakers on both sides of the Atlantic.
Meta’s Hidden Risk: Internal Projections And Guardrails

Leaked documents from 2023 and 2024, reviewed by Reuters and consumer group Which?, indicated that Meta’s teams projected roughly 10.1% of its 2024 ad revenue would come from scam or banned‑goods advertising. The internal roadmap aimed to reduce that share only gradually, to about 5.8% by 2027, reflecting a phased approach rather than an immediate crackdown.
Analysts also assessed how fraudsters were exploiting Meta’s powerful targeting tools, noting that users were exposed to billions of “higher‑risk” ads every day. These included fake investment schemes, counterfeit goods, illegal gambling offers, and unlicensed medical products. In at least one internal review, staff concluded it was “easier to advertise scams” on Meta than on some rival platforms, strengthening arguments that the company’s safeguards were comparatively weak.
The documents described “revenue guardrails” that shaped enforcement. Trust and safety teams reportedly worked under informal limits on how much revenue their actions could put at risk: roughly 0.15% of overall income, beyond which senior approval was required. Large‑scale removals of paying advertisers could thus be slowed or scaled back when models predicted a significant financial impact.
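The guardrail described above amounts to a simple threshold test. The sketch below illustrates it, assuming the reported 0.15% figure; the function name, the penalty of requiring approval, and the round $160 billion revenue figure used in the example are illustrative, not details from the documents.

```python
# Illustrative sketch of the reported "revenue guardrail": enforcement actions
# whose projected revenue impact exceeds ~0.15% of overall income were said to
# require senior approval. Names and the example figures are hypothetical.

GUARDRAIL_SHARE = 0.0015  # reported threshold: 0.15% of overall revenue


def needs_senior_approval(projected_loss: float, total_revenue: float) -> bool:
    """Return True when a proposed enforcement action breaches the guardrail."""
    return projected_loss > GUARDRAIL_SHARE * total_revenue


# At roughly $160 billion of annual revenue, the guardrail would sit
# near $240 million of projected enforcement impact.
print(needs_senior_approval(300e6, 160e9))  # True
print(needs_senior_approval(100e6, 160e9))  # False
```

At that scale, even a single sweep against a large cluster of paying advertisers could plausibly cross the line, which is consistent with the documents' suggestion that removals were slowed when models predicted significant financial impact.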
The enforcement model itself relied heavily on automated probability scores. Meta’s systems triggered automatic bans only when machine‑learning tools estimated a fraud likelihood above roughly 95%. Advertisers below that threshold, even highly suspicious ones, often remained active but paid more per impression. Critics argued that this structure effectively turned fraud into a risk‑priced segment of the ad business: dubious advertisers could keep operating as long as they accepted steeper costs.
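The threshold-and-surcharge logic critics describe can be sketched in a few lines. The 95% ban cutoff comes from the reporting; the 0.5 “suspicious” band, the penalty multiplier, and all names are hypothetical placeholders, not details from the leaked documents.

```python
# Illustrative sketch of the reported enforcement model: automatic bans only
# above a ~95% fraud score, with penalty pricing (rather than removal) for
# suspicious-but-unbanned advertisers. Only the 0.95 cutoff is from the
# reporting; the other constants and names are hypothetical.

BAN_THRESHOLD = 0.95       # reported automatic-ban cutoff
SUSPICIOUS_THRESHOLD = 0.5 # hypothetical lower bound of the "risky" band
PENALTY_MULTIPLIER = 2.0   # hypothetical surcharge on the base ad price


def enforce(fraud_score: float, base_cpm: float) -> dict:
    """Decide what happens to an advertiser given a model fraud score."""
    if fraud_score >= BAN_THRESHOLD:
        return {"action": "ban", "cpm": None}
    if fraud_score >= SUSPICIOUS_THRESHOLD:
        # Below the ban cutoff, the advertiser stays live but pays more.
        return {"action": "penalty_pricing",
                "cpm": base_cpm * PENALTY_MULTIPLIER}
    return {"action": "allow", "cpm": base_cpm}


print(enforce(0.97, 4.00))  # {'action': 'ban', 'cpm': None}
print(enforce(0.80, 4.00))  # {'action': 'penalty_pricing', 'cpm': 8.0}
```

The design choice critics objected to is visible in the middle branch: a score of 0.80 is treated as a pricing problem, not a removal problem, so revenue from likely fraud continues to flow at a premium.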
Scam Brands Behind Polished Fronts

Parallel investigations from the BBC and Which? put names and faces to the abstract percentages. Reporters found a cluster of online shops advertising heavily on Facebook and Instagram while presenting themselves as quaint British brands. Names such as “C’est La Vie,” “Mabel & Daisy,” “Chester & Clare,” “Harrison & Hayes,” and “Omelia & Oliver Jewels” appeared in feeds with carefully staged lifestyle imagery and AI‑generated photos suggesting roots in UK cities like Birmingham and Bristol.
Their websites featured sentimental origin stories, but shipping records and company details pointed instead to China or Hong Kong. Trustpilot pages for several of these brands were filled with one‑star reviews alleging misleading claims, cheap or defective goods, and orders that never arrived. Customers described buying what they believed were elegant jewellery pieces or classic clothing, then receiving flimsy items that bore little resemblance to the ads—or nothing at all.
Many buyers struggled to get refunds. Contact emails often went unanswered, return policies proved difficult to enforce, and some storefronts vanished within weeks. Yet ads for similar brands continued to appear across Meta’s platforms. Shoppers, seeing promotions on well‑known services like Facebook and Instagram, frequently assumed that some level of vetting had taken place.
From User Complaints To Public Pressure

Consumer advocates say that individual complaints and negative reviews did not, by themselves, prompt swift removal of the problematic shops. Which? and the BBC both reported that several highlighted storefronts continued to operate despite mounting evidence of fraud and direct reports to Meta. Only after the BBC broadcast its findings did Meta remove six of the fake brands identified by reporters.
For watchdog organisations, that sequence reinforced a perception that decisive action tended to follow media exposure rather than early warning signals from users. Which? described scam advertising on Meta’s platforms as having been allowed to “run rampant” for years, while Consumer Reports in the United States argued that Meta had “willfully ignored” the scale of the problem as its recommendation systems boosted deceptive campaigns. Both groups urged regulators to treat fraudulent advertising as a structural risk connected to the company’s profit model, not an unfortunate side effect.
In Washington, US Senators Richard Blumenthal and Josh Hawley sent a joint letter pressing the Federal Trade Commission and the Securities and Exchange Commission to open formal investigations. They called for possible disgorgement of profits, substantial civil penalties, and, if warranted, personal accountability for executives, arguing that if Meta could internally estimate revenue from scams, it could not plausibly claim ignorance.
Meta Pushes Back While Citing Progress

Meta has sharply disputed characterisations of a “16 billion dollar scam ad empire.” Company spokesperson Andy Stone said the leaked internal figures were “rough” and “overly inclusive,” asserting that they grouped together various categories of policy‑violating ads, not all of which counted as scams. Meta, however, has not published audited alternative estimates of how much of its revenue is specifically tied to fraudulent or illegal goods advertising.
In public statements, the company has emphasised recent enforcement efforts. It says user reports of scam ads have fallen by more than 50% over the past 18 months and that it removed or blocked more than 134 million pieces of scam‑related ad content in 2025. Meta points to specialised detection teams and evolving machine‑learning tools that it claims now shut down emerging fraud patterns faster than before. At the same time, the company has warned in regulatory filings that tougher measures against illicit advertising could “adversely affect” revenue and might have a “material” impact as policies tighten.
Regulatory Turning Point In Europe And Beyond

European lawmakers have taken a more direct approach to changing the incentives facing large platforms. A new Payment Services Regulation agreed in November 2025 allows banks to seek cost‑sharing when their customers are defrauded through advertising that remained online after platforms were alerted. Under the framework, financial advertisers must prove they are properly authorised, and platforms such as Meta can be held partly liable if they fail to act on warnings about fraudulent campaigns.
If enforcement is robust, this model could transform scam ads from a profitable nuisance into a significant financial liability. Each fraudulent promotion that persists after notice could carry direct costs, alongside reputational damage and additional regulatory exposure. The goal, according to EU policymakers, is to encourage platforms to deploy their full technical capabilities against fraud, rather than managing it within revenue‑based limits.
Across the wider digital advertising sector, the Meta case is being watched as a potential inflection point. For years, major platforms have leaned on voluntary transparency measures and partnerships with law enforcement. Now, regulators appear increasingly inclined to impose binding rules and shared liability. With generative AI making it easier to create convincing fake brands, identities, and visuals at scale, the stakes extend beyond a single company: they touch on whether large platforms can sustain user trust while continuing to monetise attention in an environment where deception is cheaper and more sophisticated than ever.
Sources

Reuters investigation: “Meta is earning a fortune on a deluge of fraudulent ads” (November 2025)
Which? analysis: “Leaked Meta documents predicted 10% of its revenue came from scam ads in 2024” (November 2025)
BBC investigation: “Meta accused of letting AI sellers ‘run rampant’” (November 2025)
European Parliament and Council: Payment Services Regulation agreement on platform liability for online fraud (November 2025)
US Senators Richard Blumenthal and Josh Hawley: Letter to FTC and SEC regarding Meta scam ad investigation (November 2025)