“They’re stealing our jobs” is a familiar refrain with every new technology. This time, the culprit is artificial intelligence (AI), which could replace around 300 million full-time jobs. Yet the field of journalism may remain an exception to this transformation.
Journalists have never needed to be labeled “human”; the skills of the trade spoke for themselves. Over the years, the daily newspaper has been enriched by color printing, e-mail, the internet, and state-of-the-art offset presses. These days, however, the whole canvas of journalism is taking a new direction to embrace AI technology.
Unconventional news outlets have understandably sparked concerns within the journalism industry. Platforms like NewsGPT deliver data-driven news generated entirely by AI. In January 2023, CNET, an American tech media website, discreetly admitted to publishing several feature articles authored by AI. When asked whether AI intends to supplant journalists, ChatGPT itself confidently asserted that “the complete replacement of human journalists by AI is unlikely” because AI cannot replicate the full array of skills and experience that human journalists bring to their craft.
Artificial intelligence, or AI, is on the cusp of becoming the next technological revolution. This evolution unfolds in a world grappling with a polarized social media landscape, marked by mounting online hate, and amid calls for comprehensive regulation. In various forms, AI has already become an integral part of many people’s daily lives, from providing editing and grammar suggestions in word processors and emails to implementing facial recognition in mobile phones.
Simultaneously, AI has emerged as a looming threat to journalism. It has insinuated itself into issues such as disinformation and propaganda, further disrupting the media landscape. According to a report, AI tools digest content and repackage it in a manner that compromises the principles of rigor and reliability.
EMERGING TRENDS IN GENERATIVE AI
The most recent trend in the field of artificial intelligence is generative AI, which can create seemingly original pieces of text or images to address intricate queries. For example, it can respond to prompts like “Compose an earnings report in the style of poet Robert Frost” or “Illustrate an iPhone in the artistic manner of Vincent van Gogh.”
Some of these generative AI systems, including OpenAI’s ChatGPT and Google’s Bard, have been trained on extensive datasets of publicly available information from the internet, which encompasses journalism and copyrighted artworks. In certain instances, the output generated by these systems closely resembles content from these sources, sometimes mirroring it nearly word for word.
This phenomenon has raised concerns among publishers who worry that these AI programs could jeopardize their business models. They are concerned that AI-generated content, lacking proper attribution, may be disseminated widely, leading to an influx of inaccurate or misleading material. Such concerns could erode trust in online news sources.
![](https://i0.wp.com/pressxpress.org/wp-content/uploads/2024/05/image-69.png?resize=873%2C438&ssl=1)
USAGE OF AI IN DIFFERENT INDUSTRIES
Today, 77% of businesses are using or exploring AI: 35% of companies already use AI, while another 42% are exploring it for future implementation.
![](https://i0.wp.com/pressxpress.org/wp-content/uploads/2024/05/image-70.png?resize=998%2C363&ssl=1)
![](https://i0.wp.com/pressxpress.org/wp-content/uploads/2024/05/image-71.png?resize=922%2C475&ssl=1)
- By 2030, it’s predicted that up to 30% of jobs will be automated through the use of AI technology.
- According to a survey by MIT Technology Review, 81% of executives believe that AI will be a major competitive advantage for their business.
- By 2024, it’s predicted that the global chatbot market will be worth $9.4 billion.
- The use of AI in customer service can result in cost savings of up to 30%.
- In the finance industry, AI-powered chatbots are projected to save banks $7.3 billion by 2023.
- A study by PwC found that AI could contribute up to $15.7 trillion to the global economy by 2030.
- By 2025, it’s estimated that the number of IoT devices worldwide will reach 75.44 billion, all of which generate data that can be analyzed with AI algorithms.
- As per Gartner, 37% of organizations have implemented AI in some form. The percentage of enterprises employing AI grew 270% over the past four years.
These statistics demonstrate just how significant AI has become in our society and how it’s poised to continue growing in importance in various industries.
CHALLENGES OR OPPORTUNITIES OF CHATGPT FOR JOURNALISM
Since its launch in November, OpenAI’s AI-powered chatbot, ChatGPT, has sparked ongoing discussions among journalists about its potential influence on the news industry. Questions are swirling about the extent to which generative artificial intelligence will replace journalists and the speed at which this transformation will unfold. There’s also curiosity about which journalists might be most susceptible to these disruptions.
Furthermore, the central query remains: should we view ChatGPT as a challenge or an opportunity to address some of the pressing issues that afflict the news industry?
WHAT DOES CHATGPT SAY?
![](https://i0.wp.com/pressxpress.org/wp-content/uploads/2024/05/image-72.png?resize=1024%2C591&ssl=1)
NOT ENTIRELY A NOVEL CONCEPT
The utilization of AI to support and generate journalistic content is not an entirely new concept; it has been the subject of experimentation by various media outlets for a considerable period. Francesco Marconi, a computational journalist and the founder of Applied XL, who authored “Newsmakers: Artificial Intelligence and the Future of Journalism,” has categorized the evolution of AI innovation in journalism over the past decade into three distinct waves: automation, augmentation, and generation.
In the initial phase, the primary focus was on automating data-driven news pieces, such as financial reports, sports results, and economic indicators, through the application of natural language generation techniques. This phase witnessed numerous examples of news organizations automating certain content, ranging from prominent global entities like Reuters, AFP, and AP to smaller, more niche outlets.
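The first-wave automation described above was largely template-driven: structured data slotted into pre-written sentence patterns. The sketch below illustrates the idea; the function, field names, and phrasing are invented for illustration and do not reflect any outlet’s actual system.

```python
# A minimal sketch of template-based natural language generation,
# in the spirit of first-wave newsroom automation of earnings stories.
# All names and wording here are illustrative, not any real pipeline.

def earnings_story(company, quarter, revenue_m, prior_revenue_m, eps):
    """Turn structured earnings data into a short news blurb."""
    change = (revenue_m - prior_revenue_m) / prior_revenue_m * 100
    direction = "rose" if change >= 0 else "fell"
    return (
        f"{company} reported {quarter} revenue of ${revenue_m:.1f} million, "
        f"which {direction} {abs(change):.1f}% from the prior quarter. "
        f"Earnings per share came in at ${eps:.2f}."
    )

print(earnings_story("Acme Corp", "Q2", 125.0, 100.0, 1.37))
```

A pipeline like this requires no machine learning at all, which is why such stories could be generated reliably at scale long before large language models existed.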
According to Marconi, the second wave emerged with an emphasis on enhancing reporting through the integration of machine learning and natural language processing to analyze extensive datasets and unveil underlying trends.
The current and third wave centers on generative AI, characterized by the utilization of large language models with the capacity to generate comprehensive text at scale. This recent development offers journalistic applications that transcend simple automated reporting and data analysis. Now, we have the ability to instruct a chatbot to craft lengthy, well-balanced articles on specific topics or to generate opinion pieces from distinct perspectives. Furthermore, we can request content in the style of renowned authors or publications.
Since November, the scope of potential applications for this technology has expanded significantly, with journalists themselves often experimenting with the creation and refinement of chatbots.
Madhumita Murgia, an AI reporter at the Financial Times, underscores the reason why tools like ChatGPT have generated immense excitement. These tools are exceptionally user-friendly and adept at engaging in natural language conversations, giving the impression of intelligence, despite fundamentally relying on potent predictive technology.
AI: A TWO-FACED TOOL
In the ever-evolving media landscape, AI undeniably offers valuable support to newsrooms and their investigative efforts. Its capacity to process and compute data suggests that it can effectively complement the work of journalists. As early as 2014, the Associated Press began employing AI to generate corporate earnings stories, thereby freeing up journalists from repetitive coverage tasks.
However, as with all technology, there is potential for error. The CNET financial articles mentioned earlier, for instance, stirred controversy due to the inclusion of erroneous information. While the ability of these language models to synthesize information is undeniably impressive, practical limitations prevent them from entirely supplanting journalists. A notable drawback of generative AI models is their tendency toward “hallucinations,” wherein they occasionally fabricate information to fill gaps in their knowledge. Human journalists, by contrast, rely on investigation and fact-checking to ensure the quality of their reporting.
OpenAI openly acknowledges that its chatbot is capable of producing false information, conceding that “sometimes [ChatGPT] writes logically sound but incorrect or nonsensical answers.” The company admits that rectifying this issue is “challenging” for various logistical reasons. This dual nature makes AI a double-edged sword, capable of offering assistance while also posing a risk to the integrity of newsrooms.
![](https://i0.wp.com/pressxpress.org/wp-content/uploads/2024/05/image-73.png?resize=1024%2C450&ssl=1)
JOURNALISTS’ CONCERNS ABOUT AI
Setting aside logistical challenges, it’s evident that AI’s impressive language capabilities are no match for the authenticity and unique style of human writers. It’s safe to say that as readers, we often derive enjoyment from the distinctive voices of journalists and value the insightful perspectives of specialized publications. AI, with its intricate vocabulary and excessive use of technical jargon, falls short of compensating for these human qualities.
AI generates content based on existing information, and as a result, it lacks the originality and analytical insight that human journalists bring to the table. As Madhumita Murgia aptly noted, “I want to maintain a strong optimism about the irreplaceable human voice, as nothing can replicate us.”
“I firmly believe that, based on where language models stand today, they lack the creativity, originality, or capacity to generate anything genuinely new.”
![](https://i0.wp.com/pressxpress.org/wp-content/uploads/2024/05/image-74.png?resize=703%2C413&ssl=1)
HUMANS POSE THE GREATEST THREAT
The most significant impediment faced by journalists often stems from the misapplication of artificial intelligence by other humans. One of the paramount challenges that journalists grapple with is ensuring the absolute credibility of their sources to combat the scourge of “fake news.” As increasingly advanced iterations of artificial intelligence become readily accessible, distinguishing between reality and falsehoods becomes progressively more challenging. This inevitably can sow seeds of “moral panic” among the audience, leading some readers to unjustly mistrust media outlets.
For instance, a tweet from the verified “Bloomberg Feed” account featured an AI-generated image depicting a fabricated explosion at the Pentagon. Russian state media swiftly seized upon this tweet as a glaring illustration of how the misuse of artificial intelligence can exacerbate an already volatile political climate. When combined with the widespread reach of social media, AI emerges as a potent tool for deception.
So, it appears that journalists will continue to draw their paychecks for the foreseeable future. Nonetheless, artificial intelligence may potentially jeopardize the fundamental tenets of journalism that serve as its bedrock: credibility and reliability.
AI as it currently exists in the market, exemplified by platforms like ChatGPT, or deployed by outlets such as CNET, the tech-driven news site recently scrutinized for employing AI-generated articles, aims to replicate human capabilities. In the realm of journalism, AI-powered tools can churn out articles or reports, much like the traditional work of journalists. However, industry experts caution that these tools have limitations, as the data and analyses they produce can often be fraught with inaccuracies and misleading information.
7 GUIDING PRINCIPLES FOR THE ADVANCEMENT AND OVERSIGHT OF GENERATIVE AI
Digital Content Next, a consortium representing over 50 prominent US media organizations, including News Corp. (the parent company of The Wall Street Journal), recently unveiled a set of seven guiding principles aimed at “developing and managing generative AI.” These principles encompass critical aspects related to security, intellectual property compensation, transparency, accountability, and equity.
These principles are intended to serve as a foundation for future discussions and negotiations rather than as binding industry standards. They include provisions such as “Publishers have the right to engage in negotiations and receive equitable compensation for the utilization of their intellectual property” and “Implementers of Generative AI systems should be held accountable for the outcomes of these systems.” Digital Content Next has shared these principles with its board and pertinent committees.
The “Principles for Development and Governance of Generative AI” established by Digital Content Next are as follows:
- Developers and operators of Generative AI must honor the rights of content creators.
- Publishers possess the entitlement to engage in negotiations and receive just compensation for the use of their intellectual property.
- Copyright laws safeguard content creators against unauthorized use of their creations.
- Generative AI systems should exhibit transparency to both publishers and users.
- Operators of Generative AI systems bear responsibility for the outcomes produced by these systems.
- Generative AI systems should not create or pose risks of unfair market or competitive outcomes.
- Generative AI systems should prioritize security and address privacy concerns.
DRAWING LESSONS FROM HISTORY
Generative AI introduces both promising efficiencies and potential challenges to the news industry. This technology has the capacity to generate new content, such as games, travel guides, and recipes, which offer convenience to consumers while aiding in cost reduction. However, the media sector also harbors apprehensions regarding the impact of AI. In recent years, digital media enterprises have witnessed significant disruptions to their business models as social media and search giants, notably Google and Facebook, have reaped the benefits of the digital advertising realm. For instance, Vice declared bankruptcy last month, and the stock of news outlet BuzzFeed has been trading below $1 for over 30 consecutive days, prompting the company to receive a notice for delisting from the Nasdaq stock market.
COMBATING INERTIA
Beyond financial considerations, the foremost concern regarding AI for news organizations revolves around the imperative to distinguish fact from fiction.
“In a general sense, I hold an optimistic view of this technology for our industry, with a significant caveat that the technology carries substantial risks when it comes to authenticating content,” remarked Chris Berend, the Head of Digital at NBC News Group. He expressed his hope that AI will collaborate with human journalists in newsrooms rather than replacing them.
Already, there are indications of AI’s potential to disseminate misleading information. Last month, a verified Twitter account under the name “Bloomberg Feed” posted a fabricated image of an explosion near the Pentagon in Washington, D.C. Although the falseness of this image was rapidly exposed, it briefly impacted stock prices. More advanced manipulations of this kind can engender greater confusion and unwarranted panic, causing damage to reputations. It’s worth noting that the Twitter account “Bloomberg Feed” had no association with the media company Bloomberg LP. “It’s poised to be a turbulent start,” commented VandeHei. “We can anticipate a widespread proliferation of substantial misinformation. Is it real or not? This quandary adds to a society already grappling with the challenge of distinguishing reality from falsehood.”

While the U.S. government possesses the potential to regulate the development of AI by big tech firms, VandeHei noted that the pace of regulation is likely to lag behind the rapid deployment of the technology.
GAZING TOWARD THE FUTURE
Both Murgia and Marconi emphasize the indispensable role of journalists in amalgamating information, offering context, and uncovering stories. However, Marconi envisions this task becoming increasingly challenging.
“The proliferation of data stemming from sources such as the internet, sensors, mobile devices, and satellites has given rise to a world inundated with information. We are presently generating more data than ever before in history, making the filtration of relevant data an intricate endeavor,” he explains.
In Marconi’s view, this is an arena where AI can substantially assist in alleviating the burden on human journalists. “AI should not solely be viewed as a tool for generating additional content but also as a means to aid us in the process of filtration,” he asserts. “Some experts anticipate that by 2026, as much as 90% of online content could be generated by machines. This signifies a pivotal juncture, necessitating our focus on creating machines capable of sieving through the noise, distinguishing truth from falsehood, and spotlighting what truly matters.”
Marconi believes that journalists should actively partake in the development of novel AI tools. This involvement could encompass crafting editorial algorithms and applying journalistic principles to emerging technologies.
![](https://i0.wp.com/pressxpress.org/wp-content/uploads/2024/05/image-75.png?resize=363%2C428&ssl=1)
“The news industry must be an active participant in the AI revolution,” he contends. “In fact, media companies are well-positioned to assume a significant role in this domain, as they possess some of the most valuable resources for AI advancement: textual data for training models and ethical guidelines for erecting dependable and principled systems.”