Manila – In a recent escalation of global efforts to curb the misuse of generative AI, OpenAI has disclosed that it banned several Philippine-based accounts found orchestrating a pro-Marcos online campaign using ChatGPT-generated content. The accounts, operated on TikTok and Facebook, reportedly mass-produced short, partisan comments to manipulate public perception ahead of the 2025 midterm elections.
The operation, dubbed “Operation High Five” due to its liberal use of emojis and an overtly positive tone, was linked to Comm&Sense Inc., a public relations firm based in Makati. The firm, as of this writing, has not responded to media inquiries regarding the accusations.
According to OpenAI’s June 5 report, titled “Disrupting malicious uses of AI,” the campaign used ChatGPT in three distinct phases: analyzing social media discourse, generating hundreds of brief responses in English and Taglish, and producing PR materials and statistical analyses for internal use or presentation to potential clients.
While the AI-generated comments praised President Ferdinand Marcos Jr. and criticized Vice President Sara Duterte, OpenAI classified the campaign as a low-impact Category 2 activity on its six-point scale for rating influence operations. Despite the organized effort, engagement remained negligible, with most posts receiving few or no interactions.
The report also revealed that the operation created five anonymous TikTok channels in mid-February 2025, timed with the beginning of the election season. These channels posted videos aligned with the Marcos agenda, supported by fabricated comment traffic from accounts with zero followers and minimal profile activity.
“This commenting activity may have been designed to make the TikTok channels look more popular than they actually were,” OpenAI noted.
Facebook saw similar tactics, with suspicious accounts, likely created in December 2024, posting identical messages under news stories by mainstream outlets. Yet even on a high-profile post by ABS-CBN News that drew over 23,000 comments, the AI-generated replies barely registered, accounting for only a tiny fraction of the total.
Despite its limited impact, the campaign spotlights the evolving tactics in digital influence warfare, raising concerns about AI’s role in shaping political narratives at scale. Both Facebook and TikTok maintain policies against such coordinated inauthentic behavior, though enforcement remains a complex challenge.
This revelation marks one of the first publicly documented cases of generative AI being weaponized in a targeted regional campaign, emphasizing the growing responsibility of AI developers, regulators, and platforms to preempt disinformation efforts before they take deeper root.