In a rare coalition, some of the world’s most influential media figures, authors, and artists are intensifying efforts to protect their intellectual property against artificial intelligence (AI) companies. Rupert Murdoch, alongside figures like Radiohead’s Thom Yorke, actors Kevin Bacon and Julianne Moore, and Nobel laureate Kazuo Ishiguro, is leading the charge to force AI giants to compensate creators for content used in AI training. At the center of their claim: AI firms are exploiting copyright-protected works without fair licensing, posing a significant threat to the livelihoods of creative professionals worldwide.
This escalating battle reflects a profound anxiety within the creative industries about AI’s economic impact. News Corp, owned by Murdoch, has filed a lawsuit against Perplexity, an AI-powered search engine, accusing it of “illegally copying” content from its U.S. newspapers, including The Wall Street Journal and the New York Post. In parallel, more than 25,000 authors, musicians, actors, and other artists have publicly condemned AI firms for exploiting intellectual property under “fair use” claims. Their statement marks a watershed moment in the defense of copyright protections as creators resist AI’s march into their industries, challenging the practice of using unlicensed material to train algorithms for profit.
Economic Stakes in Content Licensing: A Complex Global Issue
AI companies rely on vast amounts of data from text, music, images, and other forms of content to build and refine their algorithms, making intellectual property a valuable commodity. In the U.S., record labels are suing AI-powered music companies Suno and Udio for unauthorized use of copyrighted tracks, while best-selling authors like John Grisham and George R.R. Martin are taking OpenAI to court over alleged copyright breaches in its generative text models.
News publishers, too, are confronting AI companies with demands for fair compensation, and some have reached content-sharing agreements. Politico’s parent company Axel Springer, Condé Nast, and the Financial Times have all struck licensing deals with AI companies, and News Corp has secured a notable five-year agreement with OpenAI reportedly valued at $250 million. In contrast, The New York Times has opted for legal action, filing a lawsuit against OpenAI while also sending a “cease and desist” letter to Perplexity.
Diverging Policy Approaches
While the U.S. fight plays out in the courts, AI firms in the UK are lobbying aggressively to ease restrictions on commercial data mining. UK law currently permits text and data mining only for non-commercial research, and Microsoft CEO Satya Nadella has argued for a reinterpretation of “fair use” to facilitate AI development. Nadella asserts that generative AI systems do not merely “regurgitate” original data, which would infringe copyright, but transform it, an argument that may influence how the law treats AI-generated content.
Feryal Clark, the UK’s new minister for AI and digital government, has signalled an intention to mediate the debate and is weighing legislative amendments that could support AI innovation while addressing copyright concerns. Dan Conway, CEO of the Publishers Association, warns that weakening UK copyright law would harm both creators and the broader economy, framing licensing agreements as an essential cost of doing business in the age of AI.
Strained Business Models and Job Security: The Economic Risks of AI Integration
Amid the legal and regulatory skirmishes, AI integration is emerging as a double-edged sword for news media organizations. On one hand, AI tools offer efficiency gains in data processing and content categorization, freeing journalists to focus on investigative reporting and exclusive stories. On the other, many organizations see AI chiefly as a means to cut costs, raising concerns about the future of editorial roles and quality journalism. Newsquest, one of the UK’s largest regional publishers, has rapidly expanded its AI-assisted editorial roles, while BuzzFeed, struggling financially, has leaned heavily on AI-generated content to sustain traffic amid substantial revenue losses.
This shift has led the National Union of Journalists (NUJ) to launch the “Journalism before Algorithms” campaign, which warns that the integration of AI, if misused, could undermine journalistic integrity and further strain an industry already facing stagnant wages and mass layoffs. The NUJ’s position emphasizes the importance of AI as an assistive tool, not a substitute for human-driven reporting.
Niamh Burns, a senior research analyst at Enders Analysis, notes that news organizations are generally proceeding cautiously, especially those with established brands focused on editorial quality. Publishers with significant commercial pressures are more inclined to embrace AI tools extensively, but this approach risks long-term damage to quality and competitiveness. AI-generated content may offer short-term gains for media groups reliant on ad revenues tied to high traffic, yet it is unlikely to replace the value of original journalism for top-tier outlets.
A Global Struggle Over Intellectual Property
The broader implications of this struggle extend beyond media to the core of intellectual property rights in the digital era. For creators, AI’s potential to reproduce copyrighted material poses an existential threat, while AI firms argue that looser copyright constraints are essential for technological progress. This tension between protecting intellectual property and fostering innovation reflects a deeper challenge in adapting copyright laws to the realities of an AI-driven economy.
As AI continues to redefine the media landscape, governments, publishers, and creators face a complex balancing act: finding a way to support innovation without sacrificing the value of human creativity. This ongoing battle between AI companies and the creative industry will likely shape global copyright norms for years to come, underscoring a crucial question—who owns the data that fuels the next generation of technological progress?