Meta deletes 10 million fake accounts

Wednesday, July 16, 2025 - Meta, the parent company of Facebook, has intensified its crackdown on fake accounts and spam, announcing it removed about 10 million profiles impersonating large content producers and took action on roughly 500,000 accounts engaged in spam or fake engagement in the first half of 2025.

The sweeping purge is part of Meta’s broader effort to combat impersonation, fake engagement, and content duplication, aiming to elevate authentic creators and improve the quality of content across its platforms.

In a blog post, Meta said:

“We’re making progress. In the first half of 2025, we took action on around 500,000 accounts engaged in spammy behaviour or fake engagement. We also removed about 10 million profiles impersonating large content producers.”

Meta stressed that accounts which primarily repost or recycle content without meaningful edits will face penalties such as reduced reach and the loss of monetisation tools. The company also warned that repeatedly sharing unoriginal content, whether videos, photos, or text, undermines the platform's integrity by crowding out genuine voices and making it harder for new creators to grow.

To support authentic creators, Meta is rolling out new tools that automatically trace reposted content back to its original source. The company says this will help ensure rightful credit and give higher visibility to original posts.

“Pages and profiles that post mostly original content tend to enjoy wider distribution across Facebook. Simply stitching clips together or adding a watermark will no longer count as meaningful editing. Content that provides real value and tells an authentic story is likely to perform better,” Meta explained.

Creators are also being cautioned against uploading content that includes watermarks from other platforms. Such posts could see their reach restricted or lose monetisation privileges altogether.

As part of its latest update, Meta introduced post-level insights on the Professional Dashboard, allowing creators to monitor how individual posts perform. They can also check their Support Home screen to see if their content or earnings are facing restrictions.

In a parallel development, Google’s YouTube updated its monetisation guidelines, stating that content deemed mass-produced or excessively repetitive will no longer qualify for ad revenue. The announcement initially sparked concern among creators, who feared it was a blanket ban on AI-generated content. YouTube later clarified:

“We welcome creators using AI tools to enhance their storytelling, and channels that use AI in their content remain eligible to monetise.”

Both tech giants say these new policies are aimed at raising content standards and safeguarding genuine creators in a crowded and rapidly evolving digital landscape.
