
Google, LinkedIn and TikTok Removed 28M+ Nigerian Accounts in 2024 — What That Means for Online Safety

In a major takedown driven by regulator-platform collaboration, tech giants deactivated tens of millions of accounts and removed nearly 59 million items of content in 2024 — a move Nigeria says is aimed at choking off fraud, impersonation and harmful posts. Here’s a clear breakdown of the numbers, why it matters, and what comes next for users and policymakers.

A quick look at what happened

Nigeria’s National Information Technology Development Agency (NITDA) revealed that Google, LinkedIn and TikTok removed more than 28 million accounts tied to scams, impersonation and other abuses in 2024. The platforms also took down roughly 58.9 million pieces of policy-violating content, although some of it (about 420,000 items) was later reinstated on appeal.

The scale of the cleanup in real terms

  • 28+ million accounts removed across Google, LinkedIn and TikTok in 2024.
  • Google: ~9.68 million accounts deactivated.
  • LinkedIn: nearly 16 million accounts removed — a figure NITDA flagged as worrying given LinkedIn’s professional focus.
  • Content: 58,909,122 pieces removed; 420,439 of those later reinstated after review or appeal.
  • User reports: Nigerians filed 754,629 complaints to platforms in the same period.

What pushed regulators and platforms to act

NITDA framed the removal campaign as part of a broader effort to reduce online fraud and improve crisis response. False information and impersonation pose direct economic and safety risks — from social-engineering scams to misinformation that sparks panic or financial loss. The agency said platform cooperation has improved communication channels and supports Nigeria’s data protection and digital safety frameworks.

The surprise hidden in LinkedIn’s data

LinkedIn’s removal of nearly 16 million accounts alarmed NITDA leadership because LinkedIn is typically a professional network. The high figure suggests scammers are increasingly abusing even professional platforms for impersonation and social-engineering attacks — a reminder that no single app is immune.

What’s really going on beneath the surface

A huge takedown doesn’t automatically mean safer spaces

Removing millions of accounts is a blunt but visible metric. It can reduce scammer capacity quickly, but it’s not a silver bullet. Bad actors reappear on new accounts, use encrypted apps, or move to peer-to-peer marketplaces. Effective long-term reduction in fraud requires better detection, cross-platform data sharing, and partnership with financial institutions to cut the payout rails.

Why the number of reinstated posts actually matters

About 420,000 removed items were reinstated after review — roughly 0.7% of the 58.9 million takedowns, a modest but meaningful share. This underscores two realities: automated moderation systems produce false positives, and governments and platforms must balance rapid takedowns with fair, transparent appeals processes to protect legitimate speech and avoid mistaken censorship.

How this affects everyday users, companies, and policymakers

  • Individuals: Stay vigilant about impersonation and double-check profiles before sharing sensitive info. Use platform safety tools and report suspicious accounts.
  • Businesses: Strengthen social engineering awareness training for staff and verify recruitment or vendor outreach through independent channels.
  • Policymakers: Prioritise cross-border cooperation, support transparent moderation appeals, and invest in digital literacy campaigns so citizens can better spot scams.

What regulators and platforms should learn from this episode

NITDA pointed to closer ties with platforms and Nigeria’s Data Protection Regulation as key to progress. Still, the episode spotlights governance challenges:

  1. Speed vs. accuracy: Platforms must act quickly during crises, but automated takedowns require robust appeal channels to correct mistakes.
  2. Cross-platform intelligence: Fraud networks operate across services — information-sharing frameworks (with privacy safeguards) could improve detection.
  3. Measurement of success: Beyond takedown counts, regulators should track fraud reduction, user harm prevented, and re-offense rates to measure real impact.

What the next few months could reveal

  • Whether platforms publish more granular transparency reports (detailing enforcement categories, detection methods, and reinstatement reasons).
  • Policy updates from NITDA or the Nigerian Data Protection Commission around moderation standards and cross-border data cooperation.
  • How private-sector partners (banks, telcos) and platforms coordinate to disrupt the financial incentives behind fraud.

The big picture

The mass removals are a sign that platforms and regulators are taking online harm seriously — but the scale of the problem means takedowns are only one part of the solution. Sustainable progress will require better detection, faster and fairer appeals, stronger cross-sector collaboration, and a big push in digital literacy.

Question for readers: Do you think mass account removals are an effective way to fight online fraud, or should governments and platforms invest more in prevention and user education? Share your thoughts in the comments.


Copyright © 2022 Inventrium Magazine