Axios reports that Google is resisting the European Union's (EU) proposal to legally require the integration of fact-checking into its search results and YouTube videos.
The tech giant also refuses to incorporate fact-checking into its ranking systems and algorithms, a requirement under the EU's new Code of Practice on Disinformation.

The Code of Practice, first suggested in 2018 and finalized in 2022, offers voluntary measures for tech platforms to prevent misinformation. However, the EU is working to make these commitments legally binding under its Digital Services Act (DSA).
Google has refused to comply, arguing that the measures would be ineffective and unsuited to its services. It has stated that it will withdraw from all fact-checking commitments in the Code before they become enforceable under the DSA.
In a letter to Renate Nicolay, Deputy Director-General of the EU's content and technology division, Google's global affairs chief, Kent Walker, defended the company's existing content moderation practices. Walker underlined that Google's current tools, such as SynthID for watermarking AI-generated content and contextual annotations on YouTube, are effective. He also cited Google's successful handling of last year's global elections, arguing against the need for mandatory fact-checking mechanisms.
This stance is consistent with broader industry developments. Last week, Meta announced it would discontinue its fact-checking program, opting instead for a user-driven Community Notes system similar to that of X (formerly Twitter). Meta CEO Mark Zuckerberg criticized the EU's expanding regulatory landscape, claiming it stifles innovation.

However, the European Commission maintains its position. It argues that stronger content moderation is necessary to combat misinformation, citing the dangers of fake news during the COVID-19 pandemic. As a result, the EU intends to institutionalize fact-checking as a critical element of its approach to online regulation.
While Google claims that its current strategies provide users with adequate information, its refusal to incorporate fact-checking directly into its algorithms raises concerns about how tech giants should balance content moderation, innovation, and regulatory compliance in an era of widespread misinformation.