Highs and lows: Report card on AI-based journalism's teething phase

Artificial intelligence is coming to the newsrooms of the world, but how reliable is it and what transparency will it offer?

A recent report by the Reuters Institute for the Study of Journalism found 36 per cent of media consumers are comfortable receiving news that has been produced with the help of artificial intelligence. Newsworthy sat down with experts, and AI itself, to understand what our world could look like with an increased dependence on artificially generated information.

Already, AI is being used to produce traffic and weather reports. As artificial intelligence slides quietly into various facets of modern media consumption, studying its behaviour and its patterns of content production will provide vital insights that can help people protect themselves from the spread of misinformation.

Saffron Howden is the National Editorial Training Manager at Australian Community Media. As the head of ACM’s AI Working Group, she has identified a number of key concerns that AI presents, not just for journalists but for consumers as well.

AI presents “a lot of threats to the news industry … in the space of AI-generated content that is (factually) wrong, and very easily able to proliferate,” she says.

“In the world of disinformation and misinformation, creating content – whether that’s a video or images or something that looks like a news story … you can create that very quickly, it can look hyper-realistic, you can spread it very quickly over social media … that then is part of the information landscape.”

In the US, in the run-up to the November presidential election, there has been a toxic fusion of AI-driven images and politics. In the aftermath of last week's Hurricane Helene in North Carolina, an image of a clearly distressed little girl clutching a puppy in floodwaters went viral. North Carolina is a state Kamala Harris is hoping to flip to the Democrats, and Trump's MAGA supporters said the image highlighted the Biden-Harris administration's poor response to the crisis. It was quickly debunked as an AI-generated fake.

Not all misinformation is so easy to spot. Howden says existential issues like “the proliferation of misinformation” can be quite difficult to grasp, in part because of their apparent intangibility.

The Reuters Institute's AI and the Future of News report found self-reported awareness of AI and its functions is “relatively low, with less than half (45 per cent) of respondents saying they have heard or read a large or moderate amount about it”.

So, if you can’t identify the misinformation in front of you, let alone scrutinise it in real time, how can you be encouraged to care about it? To highlight just how easy it is for AI to skew an audience’s opinion, Newsworthy decided to sit down with ChatGPT and ask it some questions about contentious issues and recent events.

The focus was placed on two events that received widespread coverage from multiple news outlets: one that underwent a thorough investigation which, importantly, came to a tangible conclusion, and one that is still actively unfolding.
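
(For readers curious to reproduce the exercise, a minimal sketch using OpenAI's Python client appears below; the model name and prompt wording are illustrative assumptions, not Newsworthy's exact setup, which used the standard ChatGPT chat interface.)

```python
# Illustrative sketch only: posing a news-event question to ChatGPT via
# OpenAI's Python client. The model name and prompt wording below are
# assumptions for demonstration, not Newsworthy's exact setup.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model choice
    messages=[
        {"role": "user", "content": "Describe the 2024 Hamada Family Incident."}
    ],
)

print(response.choices[0].message.content)
```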

“The tragic incident involving the Hamada family occurred on January 29, 2024, during the Israeli bombardment and invasion of Gaza. The Hamada family was fleeing their home when their vehicle was targeted near a gas station in Gaza City, an area designated as a combat zone by Israeli forces,” according to ChatGPT when asked via text prompt to describe the 2024 Hamada Family Incident.

“The attack left six family members dead, including 15-year-old Layan and her 5-year-old cousin Hind.”

ChatGPT’s interpretation of the events that transpired on January 29 in Gaza is mostly correct. Hind was six years old, rather than five, and the attack took the lives of seven family members: two adults and five children, rather than six.

One might argue that these mistakes are superficial: the information itself is enough for a reader to conclude that, however many people died and however old they were, this was ultimately still a tragic loss. But a much bigger problem is present, and it continues to play out in ChatGPT’s description of the events that follow.

At no point does ChatGPT make specific reference to the party responsible for the attack, despite its capability to source information from news websites.

A Washington Post investigation published on April 16, 2024, used “satellite imagery, contemporaneous dispatcher recordings, photos and videos of the aftermath, interviews with 13 dispatchers, family members and rescue workers, and more than a dozen military, satellite, munitions and audio experts … as well as the [Israeli Defence Force's] own statements” to produce detailed timelines that showed the Israeli military was present in the area at the time of both attacks.

The report also alleges that Layan herself, in a call with her uncle, explicitly stated that the Israeli Army was firing on the Hamada family car.

When queried about other recent major news events, ChatGPT exhibited an inability to relay news in an accurate or timely manner. On July 13, a gunman attempted to assassinate former president Donald Trump as he spoke at an election rally in Butler, Pennsylvania.

Nearly 48 hours after the shooting occurred, ChatGPT was still insisting: “There has been no incident involving the shooting of former President Donald Trump.”

Setting aside issues of accuracy, there is an inherent clunkiness embedded within AI tools at this point in their development. The technology is widely expected to improve dramatically, as seen in the jump in capability between the launch of ChatGPT (then running on GPT-3.5) in November 2022 and GPT-4 in March 2023. (GPT-5 is due by early next year.) So, many journalism experts are pre-emptively looking towards new solutions that AI may provide for an industry with more than its fair share of problems.

“People’s starting point is generally one of resistance, suspicion and fear. However, over the course of testing out a variety of AI use cases in journalism, participants often developed and articulated more nuanced opinions about … the implementation of AI,” the Reuters Institute report said.

This nuance is reflected in Howden’s perspective on the use of AI in journalism, which encourages a problem-solving approach.

“There’s a lot of positives in it, potentially. For instance, AI tools might enable us to save time in resource-stretched newsrooms,” she says.

“We’ve got far fewer resources and people working in newsrooms than we had 20 years ago … [AI can help us] speed up some things, save us time somewhere so that we can get on with doing the stuff that AI will never be able to do, like get out on the streets of Sydney and your local town and talk to people.”

It is important to highlight that artificial intelligence is still in its infancy, and its teething phase will eventually give way to systems and algorithms that are far more capable of disseminating information, legitimate or otherwise.

“At present, many are clear that there are areas they think should remain in the hands of humans,” says the Reuters Institute's AI and the Future of News report. “These kinds of work – which require human emotion, judgement, and connection – are where publishers will want to keep humans front and centre.”
