OpenAI Uncovers AI-Fueled Political Campaign in the Philippines Ahead of 2025 Elections

Manila – OpenAI has reported detecting and disrupting an AI-powered political influence operation originating from the Philippines, part of what appears to be a growing trend of generative AI being misused in electoral contexts. The operation, which involved producing bulk social media comments praising President Ferdinand Marcos Jr. and discrediting Vice President Sara Duterte, was flagged in OpenAI’s newly released threat report, “Disrupting malicious uses of AI.”

The campaign, informally labeled “Operation High Five,” relied heavily on OpenAI’s ChatGPT to generate short, stylized messages in English and Taglish for TikTok and Facebook. According to OpenAI, the operators used AI to monitor political discourse, propose thematic replies, compose promotional messages, and produce analyses intended to support the campaign’s reach.

The operation was linked to Comm&Sense Inc., a public relations firm based in Makati, Philippines. The firm has not issued a response. OpenAI’s investigation found the campaign’s impact to be limited: most AI-generated comments received negligible interaction, and many posts drew no engagement at all.

Despite this, the significance of the operation lies not in its influence but in its methodology. The structured, multi-phase use of generative AI, from analyzing sentiment to producing persuasive language and simulating engagement, demonstrates how political actors around the world could industrialize influence operations using commercial AI tools.

The campaign appears to have launched around February 2025, coinciding with the onset of the Philippine midterm election campaign period. OpenAI noted that five TikTok channels were created to distribute pro-Marcos video content, with engagement artificially inflated by accounts with minimal activity and no follower base.

These patterns of inauthentic behavior violate the policies of both TikTok and Facebook, which prohibit coordinated attempts to manipulate platform algorithms or user perception.

OpenAI categorized the operation as low-impact (Category 2) on the scale it uses to assess influence operations, but emphasized the broader implications for AI safety and governance. As generative models become more accessible, the threshold for executing influence operations continues to drop, raising urgent questions about ethical boundaries and regulatory oversight.

This report arrives amid a global push for clearer frameworks governing the use of AI in political communication, election security, and media integrity. As more governments and private actors explore AI’s communicative power, the incident in the Philippines provides an early, concrete example of how such technologies can be redirected from innovation to information warfare.

The revelation underscores the need for collaborative governance, transparency from platform providers, and proactive measures from AI developers to prevent future misuse. As political stakeholders integrate digital tools into their campaigns, the global community must keep pace with these tactics to safeguard democratic discourse.