To maintain journalistic integrity and trust, media firms using AI must set clear staff guidelines for newsgathering, production, and output.
At Media Helping Media (MHM) we have set out our policy on AI in our ‘About’ page. It’s summed up as follows:
- We will use AI only to expand upon and support our own published content – not to generate original material.
- We will check all AI-generated material carefully and edit it where necessary for accuracy, coherence and relevance.
- We will always state where content has been produced with the help of AI.
The following is a collection of the best practices followed by leading broadcasters, publishers and news agencies for using AI in news production.
Guidelines
Human control: AI is a tool to augment and assist human journalists, not to replace them. Human creativity, critical thinking, ethical judgment, and accountability will always be paramount.
Accuracy and verification: All AI-generated or AI-assisted content must undergo rigorous human review and fact-checking to ensure accuracy. Journalists should treat all AI outputs as unvetted source material requiring full journalistic diligence.
Transparency: Be open with audiences about the use of AI. Clearly disclose when AI has been used in content creation, presentation, or distribution, explaining how and why, in an accessible and understandable manner.
Accountability: Clear roles and responsibilities must be assigned for the development, deployment, and oversight of all AI systems. A senior editorial figure must be ultimately responsible and accountable for any AI-assisted content.
Ethics: Always consider the potential for bias, misinformation, manipulation, and privacy concerns in AI tools. Proactively work to mitigate these risks and ensure AI use aligns with the highest journalistic ethics.
Copyright: Respect intellectual property rights. Do not input third-party copyrighted materials (especially confidential information) into external AI tools without explicit permission. Actively advocate for fair compensation and permission when AI models are trained on the media house’s content.
Permitted uses
- Transcription: Transcribing interviews, speeches, or audio/video content.
- Translation: Translating articles or content into different languages, always with human review for accuracy and nuance.
- Summaries: Generating internal summaries of long documents, research papers, or articles for internal use, or creating brief article summaries for audience-facing content (with disclosure).
- Headlines: Suggesting headline options, keywords, or meta descriptions to improve search engine optimisation (SEO).
- Data analysis: Processing and identifying patterns/anomalies in large datasets for investigative reporting, financial reports, or public record analysis.
- Content management: Assisting with content categorisation, tagging, and organisation within content management systems (CMS).
- Brainstorming: Generating ideas for stories, angles, or interview questions.
- Drafting: Providing initial drafts or bullet points for routine reports (e.g., weather, market data, sports scores), which are then extensively edited and fact-checked by journalists.
- Visuals: Generating images, illustrations, or basic video elements where original photography/videography is not feasible, provided they are clearly labelled as AI-generated and do not misrepresent reality.
- Personalisation: Powering internal recommendation systems to personalise content for audiences, ensuring algorithmic diversity and avoiding filter bubbles.
- Fact-checking tools: Using AI to assist in cross-referencing facts, identifying potential discrepancies, or flagging suspicious content.
- Deepfake detection: Employing AI tools to detect manipulated audio, video, or images, with human analysts making the final determination.
Prohibited uses
- Content generation: AI must not directly generate entire news articles, current affairs reports, or factual journalism content that is published without substantial human editing, restructuring, and direct human authorship.
- Factual research: AI must not be relied upon as the sole or primary source for factual information that will be published. All facts derived or suggested by AI must be independently verified through human journalistic methods.
- Misleading content: AI must not be used to generate or manipulate content (text, image, audio, video) in a way that could materially mislead audiences, distort the meaning of events, or alter the impact of genuine material. This includes “deepfakes” of real individuals in news reporting.
- Sensitive information: Journalists must not input unpublished drafts, confidential source identities, private data, or any other sensitive or proprietary information into external AI tools unless the tool has been explicitly vetted and approved for such use by the media house’s IT and legal departments.
- Circumventing paywalls: AI tools must not be used to bypass content access restrictions (e.g., paywalls) or infringe on the copyrights of other publishers.
- Plagiarism: AI tools must not be used to plagiarise content or mimic the style or voice of other journalists or sources without clear attribution and editorial intent.
Implementation
- AI working group: Establish a cross-functional committee including editorial leaders, journalists, data scientists, IT, and legal representatives to develop, implement, and continually revise AI policies.
- Training: Invest continuously in training journalists and staff on AI literacy, the responsible use of approved AI tools, prompt engineering, and the ethical considerations of AI. Foster a culture of learning and experimentation.
- Tools: Carefully evaluate and vet all AI tools (both internal and third-party) for their accuracy, potential biases, data security, and compliance with privacy regulations. Prioritise tools that offer transparency into their models and data sources.
- Workflows: Integrate AI tools into existing newsroom workflows with clear protocols on how, when, and by whom they should be used.
- Monitoring: Regularly review the performance of AI tools, assess their impact on efficiency and journalistic quality, and gather feedback from journalists. Policies should adapt to rapid technological advancement.
- Data: Establish clear guidelines for data collection, usage, storage, and privacy when AI tools are involved. Ensure compliance with data protection regulations.
- Legal: Stay abreast of evolving AI regulations and copyright laws. Ensure all AI practices comply with national and international legal frameworks.
Transparency with audiences
- Clear labels: Where AI has contributed significantly to content creation (e.g., for generating images, heavy summarisation, or automated data reports), use clear, easily understandable labels or disclaimers.
- Policy: Publish a clear and accessible version of the media house’s AI policy on its website, explaining its principles, permitted uses, and commitment to responsible AI.
- Feedback: Provide avenues for audience feedback or questions regarding the use of AI, demonstrating a commitment to accountability and trust.
- Explainers: Consider publishing articles or explainers on how the media house uses AI, demystifying the technology and building public understanding and trust.
By adhering to these guidelines, a media house can strategically leverage AI to enhance its news production capabilities while upholding the core values of journalism and maintaining the essential trust of its audience.
MHM used AI in the compilation of this report.