Lesson: Dealing with algorithmic bias in news

This lesson plan is designed to help journalists recognise and deal with algorithmic bias in the news production process.

It is based on the article ‘Dealing with algorithmic bias in news’, which we recommend trainers read before adapting this outline for their own purposes.

Introduction

This training day is designed to help journalists and media managers understand the mechanics of algorithmic bias, its impact on news production and consumption, and the ethical responsibilities of newsrooms in a data-driven environment. By the end of the day, participants will be equipped to identify and mitigate bias in their editorial workflows.

Sessions timetable

09:00–10:00 – Session 1: Understanding algorithmic bias.

  • Aims: To define algorithmic bias and identify how it manifests in the news cycle.
  • Presentation: An overview of how algorithms function in news curation, social media distribution, and automated content generation, and an explanation of how bias is often inherited from the data used to train these systems (illustrated in the sketch after this session outline).
  • Activity: Participants review a list of recent news headlines curated by an AI and identify potential skew in perspective or demographics.
  • Discussion: Why is it a mistake to assume that technology is inherently objective or neutral?
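
Trainers who want a concrete demonstration for this session could use something like the following Python sketch. It is not drawn from the MHM article: the click figures are invented, and it simply shows how the framing of a training signal (raw click counts versus click-through rate) changes which topics a ranking system favours.

```python
# A minimal, hypothetical sketch of how a ranking system inherits bias
# from its training data. The "historical_clicks" figures are invented
# for illustration; they stand in for real engagement logs.

historical_clicks = {
    # topic: (times shown, times clicked)
    "politics":      (10_000, 1_200),
    "business":      (8_000, 900),
    "rural affairs": (500, 90),   # rarely shown, so few total clicks
}

def naive_score(shown, clicked):
    # Ranking by raw click volume rewards whatever was promoted before.
    return clicked

def rate_score(shown, clicked):
    # Ranking by click-through *rate* partly corrects for exposure.
    return clicked / shown

for name, fn in [("raw clicks", naive_score), ("click rate", rate_score)]:
    ranking = sorted(historical_clicks,
                     key=lambda t: fn(*historical_clicks[t]), reverse=True)
    print(f"{name}: {ranking}")

# raw clicks: ['politics', 'business', 'rural affairs']
# click rate: ['rural affairs', 'politics', 'business']
```

The discussion point it supports: neither ranking is neutral; each encodes a choice about what to measure.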

10:00–11:00 – Session 2: Sources of bias in data.

  • Aims: To explore where bias originates, from training data to developer assumptions.
  • Presentation: A look at historical data bias, the lack of diversity in tech development teams, and the feedback loops created by user engagement metrics (demonstrated in the simulation after this session outline).
  • Activity: Small groups analyse a hypothetical target audience grouping for a news organisation and then list three potential groups that might be under-represented or misrepresented.
  • Discussion: How does our own newsroom data (e.g., click-through rates) influence what stories we prioritise?
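
For trainers who want to show a feedback loop live, the following is a minimal, hypothetical simulation. The click-through rate, impression counts, and exposure split are all invented; the point is that a system which allocates exposure according to past clicks turns a lucky start into a lasting lead.

```python
# A hypothetical simulation of an engagement feedback loop. The ranking
# system gives most of the exposure to whichever story currently leads
# on clicks, so an early lead compounds. All numbers are invented.
import random

random.seed(7)
CTR = 0.10                  # assume every story is equally appealing
stories = {"A": 0, "B": 0, "C": 0}

for _ in range(10):         # ten ranking cycles
    leader = max(stories, key=stories.get)
    for story in stories:
        # The leader gets 800 of 1,000 impressions; the rest is split.
        impressions = 800 if story == leader else 100
        stories[story] += sum(random.random() < CTR
                              for _ in range(impressions))

print(stories)
# The story that happened to lead early accumulates most of the clicks,
# even though all three convert at exactly the same rate.
```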

11:00–11:15 – Break

11:15–12:45 – Session 3: Impact on diversity and inclusion.

  • Aims: To assess how algorithms can marginalise specific communities and narrow the public’s worldview.
  • Presentation: Examination of the echo chamber effect and how automated systems may suppress stories from minority or under-served voices in favour of viral, often polarising, content (contrasted in the sketch after this session outline).
  • Activity: A simulation where participants must outsmart a social media algorithm to ensure a high-quality but niche public interest story reaches a wide audience.
  • Discussion: Is it the journalist’s role to counteract the algorithm, or the platform’s responsibility to change it?
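
A simple way to frame the activity is the sketch below. The stories, scores, and public-interest flags are invented; it contrasts a feed ranked purely on predicted engagement with one that applies an editorial rule reserving a slot for public interest journalism.

```python
# A minimal sketch of why engagement-only ranking buries niche public
# interest stories, and one simple editorial counterweight: reserving
# a slot in the feed. Stories and scores are invented for illustration.

stories = [
    # (headline, predicted engagement, public-interest flag)
    ("Celebrity feud escalates",            0.92, False),
    ("Viral pet video roundup",             0.88, False),
    ("Transfer rumour latest",              0.85, False),
    ("Water contamination in rural county", 0.30, True),
]

def engagement_feed(stories, size=3):
    # Rank purely on predicted engagement.
    return sorted(stories, key=lambda s: s[1], reverse=True)[:size]

def feed_with_reserved_slot(stories, size=3):
    # Editorial rule: the feed must carry at least one public-interest
    # story, even if it scores lower on engagement.
    feed = engagement_feed(stories, size)
    if not any(s[2] for s in feed):
        feed[-1] = max((s for s in stories if s[2]), key=lambda s: s[1])
    return feed

print([s[0] for s in engagement_feed(stories)])
print([s[0] for s in feed_with_reserved_slot(stories)])
# The first feed is all viral content; the second surfaces the
# contamination story by policy rather than by engagement score.
```

The reserved slot is an editorial policy, not a technical fix, which feeds directly into the discussion question.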

12:45–13:45 – Lunch

13:45–15:00 – Session 4: Ethics and editorial oversight.

  • Aims: To establish the importance of human-in-the-loop systems.
  • Presentation: Best practice for editorial intervention, covering the MHM principles of transparency and accountability and the need for regular algorithmic audits (a sample sign-off rule appears after this session outline).
  • Activity: Draft a set of five golden rules for your newsroom regarding the use of AI-generated summaries or automated news feeds.
  • Discussion: At what point does an automated process require a human editor’s sign-off?
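
One way to make ‘human-in-the-loop’ concrete is a routing rule such as the hypothetical sketch below. The topic list, confidence field, and threshold are invented for illustration; the point is that sign-off criteria should be explicit and auditable rather than left to the model.

```python
# A hypothetical human-in-the-loop gate for automated output. The
# thresholds and topic list are invented; what matters is that the
# sign-off rules are written down and can themselves be audited.

SENSITIVE_TOPICS = {"elections", "crime", "health"}
CONFIDENCE_FLOOR = 0.90

def route(item: dict) -> str:
    """Decide whether an AI-generated summary can publish unreviewed."""
    if item["topic"] in SENSITIVE_TOPICS:
        return "editor review"     # sensitive topics always get a human
    if item["confidence"] < CONFIDENCE_FLOOR:
        return "editor review"     # low model confidence gets a human
    return "auto-publish"

print(route({"topic": "sport",     "confidence": 0.95}))  # auto-publish
print(route({"topic": "elections", "confidence": 0.99}))  # editor review
print(route({"topic": "weather",   "confidence": 0.60}))  # editor review
```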

15:00–15:15 – Break

15:15–16:15 – Session 5: Practical mitigation strategies.

  • Aims: To provide tools and techniques for identifying bias in real-time.
  • Presentation: An introduction to algorithmic auditing tools, and a bias checklist for journalists using AI tools for research or distribution (a toy audit appears after this session outline).
  • Activity: Using a provided case study of a biased algorithm, groups must propose a technical or editorial fix to level the playing field.
  • Discussion: How can we communicate algorithmic transparency to our audience to build trust?
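
As a starting point for the checklist, trainers could show a toy representation audit such as the one below. The group labels, feed sample, and baseline shares are invented; a real audit would compare a feed sample against audience data or the full story pool.

```python
# A minimal sketch of a representation audit over a feed sample. The
# labels and baseline shares are invented; in practice the baseline
# might be audience demographics or the full pool of available stories.
from collections import Counter

feed_sample = ["urban", "urban", "urban", "urban", "urban",
               "urban", "urban", "rural", "urban", "urban"]

baseline = {"urban": 0.7, "rural": 0.3}  # assumed target distribution
TOLERANCE = 0.10                         # flag gaps over 10 points

counts = Counter(feed_sample)
total = len(feed_sample)
for group, expected in baseline.items():
    observed = counts[group] / total
    flag = "UNDER-REPRESENTED" if expected - observed > TOLERANCE else "ok"
    print(f"{group}: observed {observed:.0%}, "
          f"expected {expected:.0%} -> {flag}")
# rural: observed 10%, expected 30% -> UNDER-REPRESENTED
```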

16:15–17:00 – Session 6: The future of AI in the newsroom.

  • Aims: To look forward at emerging trends and long-term newsroom strategy.
  • Presentation: The evolution of generative AI and the potential for personalisation without polarisation.
  • Activity: Participants brainstorm one innovative way their newsroom could use algorithms to increase, rather than decrease, content diversity.
  • Discussion: Final Q&A and reflection on the day’s learnings.

Assignment

Participants are required to conduct an audit of one automated or data-driven process currently used in their newsroom (e.g., social media scheduling, newsletter personalisation, or the ‘most read’ sidebar). They must produce a 500-word report identifying potential bias risks and proposing three actionable steps to mitigate those risks.

Materials needed

  • Projector and presentation slides.
  • Handouts of the MHM article ‘Dealing with algorithmic bias in news’.
  • Flip charts and markers.
  • Laptops or tablets with internet access for research activities.

Assessment

  • Participation: Engagement in group activities and contributions to plenary discussions.
  • Performance: Ability to identify bias in the provided case studies.
  • Assignment: Quality and practicality of the post-session audit report.

Summary

This lesson plan provides a comprehensive framework for media trainers to educate journalists on the complexities of algorithmic bias. By moving from theoretical understanding to practical mitigation, it helps ensure that newsrooms remain bastions of accuracy and fairness in the digital age.


Related article

Dealing with algorithmic bias in news