Dunya Simões
“They’re stealing our jobs,” they say. Except this time, it’s artificial intelligence (AI). AI could eventually replace the equivalent of approximately 300 million full-time jobs, but journalism is unlikely to be among them.
Unconventional news outlets have, understandably, raised concerns within the journalism industry. Platforms such as NewsGPT have been known to deliver fact-based news generated entirely by AI. Elsewhere, in January 2023, CNET confirmed that it had quietly released several feature articles written by AI.
When asked “Do you intend on replacing journalists?”, ChatGPT itself reassuringly declares that “it is unlikely that AI will completely replace human journalists” as “it cannot replicate the full range of skills and experiences that human journalists bring to the field”.
AI: A double-edged sword
“like everything technology-related, there is always room for error.”
In an ever-changing media ecosystem, AI can certainly assist the newsroom and its investigations. Its ability to translate and compute data means that it may be able to responsibly complement the work of journalists. In fact, the Associated Press began using AI as far back as 2014 to produce stories on corporate earnings and divert reporters from using resources on repetitive coverage.
However, like everything technology-related, there is always room for error. In fact, those very same CNET financial articles sparked controversy when they were found to contain false information. While these language models’ ability to synthesise information is impressive, factual errors are undoubtedly a factor that prevents them from entirely replacing journalists.
“AI’s admittedly remarkable use of language is no replacement for the authenticity and individual styles of human journalists”
A significant limitation of generative AI models is their “hallucinations”: they sometimes fabricate information to fill gaps in their knowledge, whereas human journalists are required to always investigate and fact-check to ensure quality reporting. Preslav Nakov, Department Chair of Natural Language Processing at the Mohamed bin Zayed University of Artificial Intelligence in Abu Dhabi, agrees that these models have difficulty discerning their limitations and struggle to know when they don’t know.
OpenAI openly discloses that the chatbot is capable of producing false information, admitting that ChatGPT “sometimes writes plausible-sounding but incorrect or nonsensical answers”. The company goes on to acknowledge that rectifying the issue is “challenging” for several reasons. This unquestionably makes AI a double-edged sword: it can provide some assistance, yet its lack of reliability can simultaneously jeopardise the newsroom.
Should journalists be worried about AI?
Beyond these practical difficulties, AI’s admittedly remarkable use of language is no replacement for the authenticity and individual styles of human writers. I believe it is safe to say that we, as readers, often savour the voice of journalists and commend the commentaries of specialised publications, qualities that AI, with its saturated use of complex vocabulary and jargon, cannot match.
“it may be the misuse of AI by other humans that hinders journalists the most.”
AI currently creates content based on existing information. Hence, it lacks the valued originality and analytic lens of its human rivals. Madhumita Murgia, AI Editor at the Financial Times, comments: “I would like to be really optimistic about the original human voice, that nothing can ever replace us.”
“I definitely believe that, where language models are today, they are not creative or original or generating anything new in any way.”
Humans remain the biggest threat
It is the misuse of artificial intelligence by other humans that hinders journalists the most. One of the biggest challenges reporters must navigate is ensuring sources are entirely credible, avoiding the influence of “fake news”. As new versions of artificial intelligence become widely accessible, distinguishing between real and fake is proving increasingly difficult. This is likely to cause a ‘moral panic’ amongst audiences, with some readers distrusting media outlets without reason.
“AI is a proven weapon in facilitating falsification”
A tweet from the verified account “Bloomberg Feed” showcased an AI-generated image of a fake explosion at the Pentagon. Russian state media picked up the tweet rapidly, a clear example of how the misuse of artificial intelligence can inflame an already unstable political environment. Combined with the reach of social media, AI is a proven weapon in facilitating falsification.
Prime example of the dangers in the pay-to-verify system: This account, which tweeted a (very likely AI-generated) photo of a (fake) story about an explosion at the Pentagon, looks at first glance like a legit Bloomberg news feed. pic.twitter.com/SThErCln0p
— Andy Campbell (@AndyBCampbell) May 22, 2023
So, journalists, it seems like we’ll continue receiving our paychecks for some time to come. But artificial intelligence is perhaps a threat to the core values of journalism, obstructing what upholds the industry: credibility and reliability.
Featured image courtesy of Mojahid Mottakin via Unsplash. No changes were made to this image. Image license found here.