
Journalists need to be trained in how to recognise and deal with algorithmic bias in order to counter the speed and scale at which biased content spreads.
That’s the view of Gemini, the artificial intelligence (AI) conversation tool, which says that the need for such training is “more urgent than ever”.
According to Gemini:
“the rapid advancement of artificial intelligence and machine learning has led to a significant increase in the use of algorithms in news production…
… algorithms can introduce bias into news content, which can have serious consequences for both individuals and society.”
Media Helping Media (MHM) asked Gemini how journalists can recognise and deal with algorithmic bias.
We also asked it to design a one-day training course to equip journalists with the tools they need to tackle the issue.
The following is the text of that conversation, which took place in September 2024. During the conversation Gemini states that while the challenge of bias in news production is not entirely new, the rise of AI has made it more urgent and complex.
Gemini warns that:
“journalists and news organisations must adapt to this new reality by developing strategies to mitigate algorithm bias and ensure that the news they produce is fair, accurate, and representative”.
We then asked Gemini the following questions. The answers were immediate. Gemini’s responses are unedited other than to change them from American English to British English (MHM’s preferred house style).
MHM: What is algorithmic bias in news production?
Gemini AI: Algorithmic bias in news production refers to the unintentional or intentional bias introduced into news content by the algorithms used to curate, personalise, or generate news. This bias can manifest in various ways, including:
- Selection bias: Algorithms may prioritise certain stories or topics over others, leading to a skewed representation of the news.
- Confirmation bias: Algorithms may reinforce existing beliefs or biases by recommending content that aligns with a user’s preferences.
- Representation bias: Algorithms may underrepresent certain groups or perspectives, leading to a lack of diversity in the news.
- Amplification bias: Algorithms may amplify certain narratives or misinformation, contributing to the spread of false or misleading information.
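The amplification effect described above can be illustrated with a minimal, hypothetical sketch: a recommender that ranks stories purely by engagement will push an already-popular story to the top of the feed regardless of its accuracy, giving it yet more exposure. All story data here is invented for illustration.

```python
# Hypothetical sketch: a naive recommender that ranks stories purely by
# clicks, with no check on accuracy or diversity. Story data is invented.

stories = [
    {"title": "Viral rumour", "clicks": 9000, "verified": False},
    {"title": "Local council report", "clicks": 120, "verified": True},
    {"title": "Health study", "clicks": 300, "verified": True},
]

def rank_by_engagement(items):
    """Rank purely on clicks -- engagement is the only signal used."""
    return sorted(items, key=lambda s: s["clicks"], reverse=True)

feed = rank_by_engagement(stories)
# The unverified but highly clicked story tops the feed, so it gains
# still more exposure on the next ranking pass -- amplification bias.
print([s["title"] for s in feed])
```

A real recommendation system is far more complex, but the feedback loop is the same: whatever the ranking metric rewards gets amplified.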
Factors contributing to algorithmic bias:
- Biased data: If the data used to train algorithms is biased, the algorithms will likely produce biased outputs.
- Biased algorithms: The design of algorithms themselves can introduce bias, such as by prioritising certain features or metrics over others.
- Lack of transparency: The lack of transparency in how algorithms work can make it difficult to identify and address bias.
Addressing algorithmic bias in news production requires a multi-faceted approach, including:
- Improving data quality: Ensuring that the data used to train algorithms is diverse, representative, and free from bias.
- Designing unbiased algorithms: Developing algorithms that are transparent, accountable, and resistant to bias.
- Promoting transparency: Increasing transparency in how algorithms are used and the decisions they make.
- Human oversight: Ensuring that human editors and journalists play a role in reviewing and curating news content.
By understanding and addressing algorithmic bias, news organisations can help to ensure that the news they produce is fair, accurate, and representative.
MHM: How can journalists avoid algorithmic bias if they are not in control of the algorithms?
Gemini AI: While journalists may not have direct control over the algorithms used in news production, they can take several steps to mitigate algorithmic bias:
- Be aware of algorithmic bias: Journalists should understand the concept of algorithmic bias and how it can manifest in news content. This knowledge can help them identify potential biases and take steps to address them.
- Critically evaluate news sources: Journalists should be sceptical of news stories generated or curated by algorithms. They should carefully evaluate the sources of information and verify the accuracy of the content.
- Seek diverse sources: Journalists should strive to gather information from a variety of sources, including those that may not be prominently featured in algorithm-driven news feeds. This can help to ensure that the news they report is balanced and representative.
- Use human judgement: Journalists should rely on their own judgement to assess the quality and relevance of news stories. They should be willing to override algorithmic recommendations if they believe that they are biased or misleading.
- Advocate for transparency: Journalists can advocate for greater transparency in how algorithms are used in news production. This can help to identify and address biases and ensure that the news is produced in a fair and ethical manner.
- Collaborate with data scientists: Journalists can collaborate with data scientists to understand how algorithms work and identify potential biases. This can help them to develop strategies for mitigating algorithmic bias.
By taking these steps, journalists can help to ensure that the news they report is accurate, balanced, and free from algorithmic bias.
MHM: So mitigating algorithmic bias is a new task brought about by the advance of AI but should now become a crucial part of a news producer’s job?
Gemini AI: That’s a great observation. While the term ‘algorithmic bias’ may be relatively new, the underlying issue of bias in news production has existed for centuries. However, the advent of AI and machine learning has amplified the potential for bias to be introduced into news content at a larger scale and at a faster pace.
Here’s a breakdown of how traditional news production compares with AI-driven news:
Traditional news production:
- Human curation: News editors and journalists manually select and curate stories based on their judgement and expertise.
- Limited reach: News distribution was primarily through print and broadcast media, limiting the audience and potential for bias.
AI-driven news production:
- Algorithmic curation: Algorithms use data to recommend news stories to users, often based on their past behaviour and preferences.
- Vast reach: News can be disseminated instantly to a global audience through digital platforms, increasing the potential impact of bias.
Key differences:
- Scale: AI can process and analyse vast amounts of data, increasing the potential for bias to be introduced at a larger scale.
- Speed: AI can disseminate news at a much faster pace than traditional methods, making it difficult to identify and correct biases before they have a significant impact.
- Personalisation: AI can personalise news content to individual users, potentially reinforcing existing biases and limiting exposure to diverse perspectives.
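The personalisation point above can be made concrete with a small, hypothetical sketch: if a feed always recommends whichever topic the user has clicked most, a single early click compounds and other perspectives disappear from view. The topics and click history are invented for illustration.

```python
# Hypothetical sketch of personalisation narrowing exposure: the feed
# recommends the user's most-clicked topic, so early clicks compound.
from collections import Counter

def recommend(history, topics):
    """Pick the topic the user has engaged with most so far."""
    counts = Counter(history)
    return max(topics, key=lambda t: counts[t])

topics = ["politics", "science", "sport"]
history = ["politics"]          # one early click...

for _ in range(5):              # ...dominates every later recommendation
    history.append(recommend(history, topics))

print(history)
```

After a few iterations the history contains nothing but the original topic: the feedback loop has locked the user into a single perspective, which is exactly the filter-bubble risk the interview describes.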
Therefore, while the challenge of bias in news production is not entirely new, the rise of AI has made it more urgent and complex. Journalists and news organisations must adapt to this new reality by developing strategies to mitigate algorithmic bias and ensure that the news they produce is fair, accurate, and representative.
MHM: So would you say that dealing with algorithmic bias should now be part of every journalist’s basic training?
Gemini AI: Absolutely. Dealing with algorithmic bias should be a fundamental part of every journalist’s training in the digital age. Here’s why:
- Understanding the problem: Journalists need to understand how algorithms work, how they can introduce bias, and the potential consequences of this bias.
- Identifying and mitigating bias: Journalists should be able to identify signs of algorithmic bias in news content and take steps to mitigate its effects.
- Evaluating sources: Journalists should be able to critically evaluate the sources of information they rely on, especially those that are generated or curated by algorithms.
- Promoting diversity and inclusion: Journalists should strive to promote diversity and inclusion in the news they report, by seeking out a variety of sources and avoiding algorithmic biases that may underrepresent certain groups or perspectives.
- Advocating for transparency: Journalists can advocate for greater transparency in how algorithms are used in news production, which can help to identify and address biases.
By incorporating training on algorithmic bias into journalism programmes, we can equip journalists with the skills and knowledge they need to produce fair, accurate, and representative news in the digital age.
MHM: Would you say that the need for journalists to be trained in dealing with algorithmic bias is urgent?
Gemini AI: Absolutely. The need for journalists to be trained in dealing with algorithmic bias is more urgent than ever. The rapid advancement of artificial intelligence and machine learning has led to a significant increase in the use of algorithms in news production. These algorithms can introduce bias into news content, which can have serious consequences for both individuals and society.
Here are some reasons why this training is urgent:
- Increasing reliance on algorithms: News organisations are increasingly relying on algorithms to curate, personalise, and generate news content. This means that the potential for algorithmic bias to impact news coverage is growing.
- The potential for harm: Algorithmic bias can lead to the spread of misinformation, the amplification of harmful stereotypes, and the underrepresentation of certain groups or perspectives. This can have serious consequences for individuals and society.
- The need for accountability: Journalists have a responsibility to ensure that the news they report is accurate, fair, and unbiased. This includes understanding and addressing the potential for algorithmic bias.
By training journalists in how to identify and mitigate algorithmic bias, we can help to ensure that the news we consume is reliable and trustworthy.
MHM’s conclusion
The fact that AI is warning that journalists need to be trained in recognising and dealing with algorithmic bias suggests that human judgement, journalistic ethics, and editorial values are crucial in a world where artificial intelligence and machine learning are crafting so much content.
As Gemini says in the interview above, “the underlying issue of bias in news production has existed for centuries. However, the advent of AI and machine learning has amplified the potential for bias to be introduced into news content at a larger scale and at a faster pace.”
Questions
- What is algorithmic bias in news production, and how can it manifest in news content?
- Identify and describe two factors that contribute to algorithmic bias in news production.
- Explain why the rise of AI has made the challenge of bias in news production more urgent and complex.
- What are some steps journalists can take to mitigate algorithmic bias, even if they do not control the algorithms?
- Discuss the role of human oversight in addressing algorithmic bias in news production.
- How does algorithmic curation differ from traditional human curation in news production?
- Why is it important for journalists to seek diverse sources when reporting news?
- Evaluate the potential consequences of algorithmic bias on society and individuals.
- How can journalists advocate for greater transparency in the use of algorithms in news production?
- In what ways can collaboration between journalists and data scientists help address algorithmic bias?
Answers
- Algorithmic bias in news production refers to the unintentional or intentional bias introduced into news content by algorithms. It can manifest as selection bias, confirmation bias, representation bias, or amplification bias.
- Two factors contributing to algorithmic bias are biased data, where the data used to train algorithms is biased, and biased algorithms, where the design of algorithms prioritises certain features or metrics over others.
- The rise of AI has made bias in news production more urgent and complex due to the increased scale and speed at which AI can process and disseminate news, as well as the potential for personalised content to reinforce existing biases.
- Journalists can mitigate algorithmic bias by being aware of it, critically evaluating news sources, seeking diverse sources, using human judgement, advocating for transparency, and collaborating with data scientists.
- Human oversight involves journalists and editors reviewing and curating news content to ensure it is fair, accurate, and representative, thus helping to identify and correct algorithmic biases.
- Algorithmic curation uses data to recommend news stories based on user behaviour and preferences, while traditional human curation relies on the judgement and expertise of news editors and journalists.
- Seeking diverse sources is important to ensure balanced and representative news coverage, avoiding the reinforcement of biases that may underrepresent certain groups or perspectives.
- Algorithmic bias can lead to the spread of misinformation, amplification of harmful stereotypes, and underrepresentation of certain groups, which can have serious consequences for individuals and society.
- Journalists can advocate for transparency by pushing for clear explanations of how algorithms are used in news production and the criteria they use for decision-making.
- Collaboration with data scientists helps journalists understand how algorithms work and identify potential biases, allowing them to develop strategies to mitigate algorithmic bias.
Lesson plan for trainers
If you are a trainer of journalists we have a free lesson plan, ‘Algorithmic bias’, which you are welcome to download and adapt for your own purposes.