The European Union is tightening its supervision of major tech platforms such as Meta, Google, and TikTok, subjecting them to stringent disinformation “stress tests” under its digital regulations. The aim is to determine whether the platforms comply with the Digital Services Act (DSA) and can protect democratic integrity amid growing threats.

1. A New Regulatory Era in the EU
The EU’s Digital Services Act (DSA) entered into force in November 2022. This sweeping regulation requires platforms with more than 45 million monthly active users in the EU—including Meta, Google (via YouTube and Search), and TikTok—to actively address systemic risks such as disinformation, misinformation, and election interference. These platforms must now disclose how they make content moderation decisions, explain how their recommendation algorithms work, and be transparent about their advertising practices.

The EU Code of Practice on Disinformation, strengthened in 2022, complements the DSA as a voluntary industry framework. Signatory platforms commit to curbing misinformation and must publicly report on their efforts across all 27 EU member states. The first round of reports, however, fell short: many NGOs and regulators found the transparency data insufficient or inconsistent, raising doubts about how seriously the platforms take their commitments.

2. Meta Faces Formal Scrutiny Over Election Disinformation
In April 2024, the European Commission opened formal proceedings against Meta’s platforms (Facebook and Instagram) over suspected DSA violations, including inadequate protection against deceptive political advertising, insufficient transparency around how content is demoted on Instagram, and the removal of its CrowdTangle insights tool, which third parties rely on to monitor election-related content. Meta says it is building alternative systems, while regulators continue to assess compliance and could impose fines of up to 6% of global annual turnover or even a temporary suspension of the service.

3. TikTok’s Poor Performance in Mock Disinformation Tests
Ahead of the EU parliamentary elections, the international NGO Global Witness ran a mock disinformation campaign in Ireland, submitting 16 fake ads to TikTok, YouTube (owned by Google), and X. TikTok approved all 16, YouTube flagged 14, and X blocked and suspended the bogus accounts outright. The result was alarming: TikTok’s moderation appeared unreliable, a serious concern given the political stakes and the platform’s young user base.

After the failed test, TikTok attributed the policy breaches to human moderation errors and introduced new review processes. Experts caution, however, that these changes do not guarantee consistent protection across languages or contexts.

4. Romania’s Election Failure and Subsequent Stress Test
In late 2024, Romania annulled a presidential election after evidence emerged of manipulative interference—particularly on TikTok—favoring Călin Georgescu, a fringe nationalist candidate allegedly backed by Russian influence networks. In response, regulators invited Meta, Google, TikTok, and X to take part in an official “stress test” ahead of the rerun vote in May 2025. The exercise is part of a wider push to enforce the Digital Services Act, testing the platforms’ ability to monitor, detect, and counter election-related disinformation in real time.

5. EU Content Inquiries Regarding Conflicts
Brussels has demanded detailed answers from Meta and TikTok about how they are moderating misinformation related to the Israel-Hamas conflict. These legally binding requests for information fall under DSA provisions aimed at limiting the spread of hate speech, terrorist propaganda, and manipulated media during crises. Meta responded by pointing to its crisis response centers and round-the-clock monitoring, but it still faces deadlines to demonstrate compliance.

6. Google’s Reluctance to Incorporate Fact-Checking
Google has pushed back against EU proposals to integrate third-party fact-checking into its Search and YouTube platforms, arguing that such integration would be neither effective nor appropriate for its services. The refusal comes ahead of the DSA’s disinformation obligations taking full effect, signaling a growing rift between the EU and Google over compliance.

7. Ongoing Deficiencies in Transparency and Accountability
Even as companies file regular reports under the Code of Practice, civil society groups and regulators remain skeptical. The reports are often unverifiable, inconsistent across countries, and not subject to independent audit. TikTok has reported removing hundreds of thousands of fake accounts, but critics question whether the platform is detecting systemic disinformation risks or leaning too heavily on automated methods. European Commission officials, including Vice-President Věra Jourová, have stressed the need for external verification and warned that self-regulation is no longer sufficient.

8. The Greater Implications: AI, Deepfakes, and Election Integrity
The EU’s evaluations and investigations come amid mounting concern about AI-generated misinformation. With the next European Parliament elections approaching, experts warn that generative AI could amplify disinformation through deepfakes, fabricated news stories, and election-specific falsehoods, further straining moderation systems. Under the DSA, platforms must promptly identify high-risk content and cooperate with authorities and independent observers. Failure to address these risks adequately could trigger enforcement action, including significant fines or restrictions on features.

9. Summary – What Does This Mean for Meta, Google, and TikTok?
Meta: Faces ongoing investigations into its advertising transparency, moderation systems, and data access tools. It plans to discontinue political advertising in the EU by October 2025 under new legislation (the TTPA), yet regulators continue to seek assurances about broader protections against disinformation.

TikTok: Performed poorly in simulated disinformation tests and is under repeated scrutiny through formal investigations. The Romanian election incident underscored the platform’s elevated risk—and prompted the first formal multi-platform stress test.

Google/YouTube: Despite their substantial presence across Europe, Google is unwilling to commit to fact-checking integration, and its moderation policies face demands for greater transparency. Compliance with DSA obligations remains under observation.

10. Conclusion: Accountability in the Digital Era
The EU’s evaluations and enforcement measures reflect a broader shift: social media companies must now actively prove their ability to safeguard democratic processes across the bloc. The old approach of self-regulation is being replaced by legally binding standards, transparency mandates, and independent oversight.

Through these actions under the DSA, platforms are being compelled to demonstrate whether they have the ability to effectively combat disinformation, especially as elections, crises, and geopolitical flashpoints approach. Meta, TikTok, and Google are no longer just observers—they are now essential responders in Europe’s effort to maintain digital democracy.

As enforcement tools improve, the real test lies not in corporate promises but in the platforms’ capacity to deliver genuine change—reducing misinformation, enhancing transparency, and resisting manipulation by foreign or malicious actors. In this evolving environment, the EU is positioned to set a global standard for tech regulation and democratic resilience.
