News readers are open to AI assisting journalists, but not replacing them entirely

Summary

A recent study reveals that news consumers are becoming more open to artificial intelligence in journalism, but with clear boundaries and specific conditions.

The study, conducted by market research firm Craft with the Reuters Institute for the Study of Journalism, found that acceptance of AI in news production varies widely depending on its application. Researchers surveyed people in Mexico, the UK, and the US.

“The audience is most likely to agree with generative AI being used behind the scenes to support journalistic practices that are not visible to the audience but promote news production,” says Konrad Collao, head of the study. Many also see benefits in presenting news in new formats, such as summaries or personalization.

No good journalism without emotion

People are far more skeptical of AI fully automating content creation. “News consumers feel least comfortable when AI is used to create synthetic content,” says Collao.

Respondents worry this could reduce journalistic quality and remove human perspectives and emotion. The exception is purely factual information like sports scores or stock prices.

Most agree AI use should be disclosed, but not in every case. Many think disclosure isn’t needed for background tasks, but is essential for content production.

Image: Craft, Reuters Institute

The study also found that people reject different standards for AI use across topics. “Although some topics are seen as more or less important, news consumers do not accept that news should be more or less truthful or accurate across different topics. The same good journalistic principles should apply to all topics,” notes Collao.

“Check, check and check again”

Based on the results, researchers recommend news organizations use AI mainly behind the scenes and for new presentation formats, while being cautious about AI-generated content. They should also be transparent about AI use without overstating it. The study recommends five principles for the use of AI in news production:

  1. Human review: all content should be reviewed by a human regardless of AI use. This is considered a basic good working practice.
  2. Acceptable AI use: AI is most acceptable in assisting journalists with simple tasks and rephrasing content. It is least acceptable in the synthetic creation of content.
  3. Exception for objective facts: AI use is acceptable for generating content with objectively verifiable facts if this is disclosed. One example is the BBC’s automated reporting of election results.
  4. Acceptability for illustrative images: AI is acceptable for creating supporting, stylized images, but not for realistic depictions, especially for important topics such as war or politics.
  5. Exceptions for less important topics: For less consequential topics, where realism is not required and no people are depicted, the rules for realistic depictions apply less strictly.

Collao advises news organizations: “Check, check and check again – a basic principle of good work and journalistic practice that is brought into sharper focus by the advent of generative AI. Ultimately, audiences believe that almost everything that is published should be checked by a human.”

The study’s authors describe a critical turning point closely tied to AI. In one scenario, people could lose trust in all information. In another, trust in news brands could rise or at least hold steady if their status as responsible actors is strengthened.

This trust must be “earned, re-earned, and maintained,” and news organizations’ own handling of generative AI could influence how this plays out.
