Bumble has rolled out a new feature aimed at combating AI-generated photos and videos on its dating platform. On Tuesday, the company announced a reporting option that lets users flag profiles suspected of using AI-generated content. The addition expands the existing reporting categories: when reporting a suspicious account, users can now select “Fake profile” and then specify “Using AI-generated photos or videos.”
The move responds to the growing use of AI-generated images on dating apps to deceive or scam users. Bumble hopes the new reporting tool will help preserve the integrity of its community by reducing misleading or potentially harmful interactions. Risa Stein, Bumble’s vice president of product, emphasized the importance of a safe environment for building genuine connections, saying the company is committed to improving its technology to keep users safe.
This initiative follows Bumble’s earlier introduction of the “Deception Detector,” an AI-powered tool designed to identify and remove fake profiles, spammers, and scammers. Bumble says that since the tool’s launch, member reports related to spam, scams, and fake profiles have dropped significantly. The company also uses an AI-driven “Private Detector” to automatically blur inappropriate images shared on the platform.
While Bumble remains vigilant against the misuse of AI in its dating ecosystem, company founder Whitney Wolfe Herd has offered a more expansive vision of AI’s potential in online dating. Herd has floated the idea of AI “dating concierges” that could go on numerous dates on behalf of users to find ideal matches, hinting at where the industry may be headed.
As Bumble continues to invest in technology for user experience and safety, the effectiveness of these measures will help determine how online dating platforms balance technological advancement with user trust and security.