A University of Sydney analysis highlights how Microsoft Copilot’s AI-generated news summaries largely overlook Australian outlets, raising concerns over local media visibility and the potential erosion of journalism revenue amid regulatory and accuracy challenges.
A University of Sydney analysis has found that Microsoft Copilot’s AI-generated news summaries largely omit Australian journalism, instead surfacing US and European outlets. According to the study, only about one in five Copilot responses pointed users to Australian media, a pattern the lead researcher says risks amplifying existing weaknesses in the local news ecosystem. (Sources: 2,1)
Dr Timothy Koskie, of the university’s Centre for AI, Trust and Governance, led the review of 434 Copilot summaries and told Guardian Australia that the tool “basically sidelined Australian news” and that “Australians are invisible in this.” He reported that major international sites such as CNN and the BBC frequently appeared even when the user was located in Australia, and that smaller independent and regional outlets were rarely represented. (Sources: 1,2)
Researchers warn that when readers accept AI summaries without visiting original articles, publishers lose referral traffic and advertising revenue, intensifying financial pressures on an industry already contending with concentrated ownership and regional news deserts. According to the Reuters Institute, the shift from search engines to AI-driven answer interfaces could further erode referral flows and undermine publishers’ business models. (Sources: 1,2,7)
The study’s findings sit uneasily alongside closer ties between Microsoft and the University of Sydney. In March 2024 the institutions signed a memorandum to collaborate on AI research and education, and the university has built campus AI tools on Microsoft’s Azure OpenAI Service. Those partnerships illustrate the complexity of engaging with the companies whose systems are reshaping how news is discovered and consumed. (Sources: 3,6)
Regulatory pressure adds another dimension. The Australian Competition & Consumer Commission has sued Microsoft alleging misleading conduct over Microsoft 365 subscriptions and the marketing of Copilot, a lawsuit that highlights consumer and competition concerns tied to the company’s rollout of AI features in Australia. (Source: 4)
Broader audits of AI news summarisation raise additional red flags. A BBC report found substantial inaccuracies across multiple AI engines, including ChatGPT, Copilot and Google’s Gemini, with many outputs blurring fact and opinion or introducing errors when citing source material. Those accuracy problems compound the risk that AI summaries will amplify misinformation while failing to reflect local perspectives. (Source: 7)
Academic commentators say policy responses are needed. Koskie and others propose extending media bargaining frameworks to cover AI intermediaries and urge AI developers to embed geographic sensitivity in system design so that local reporting, and the journalists behind it, is not erased. The University of Sydney’s own initiatives to help researchers use generative AI responsibly underline the need for ethics, accountability and technical fixes if AI tools are to support, rather than hollow out, a plural news environment. (Sources: 1,2,5,3)
Source Reference Map
Inspired by headline at: [1]
Sources by paragraph:
Source: Noah Wire Services
Noah Fact Check Pro
The draft above was created using the information available at the time the story first emerged. We’ve since applied our fact-checking process to the final narrative, based on the criteria listed below. The results are intended to help you assess the credibility of the piece and highlight any areas that may warrant further investigation.
Freshness check
Score: 8
Notes: The article was published on 24 January 2026, making it current. However, the study referenced was conducted in 2023, which may affect the relevance of the findings. ([theguardian.com](https://www.theguardian.com/media/2026/jan/25/ai-generated-news-summaries-microsoft-copilot-australian-journalism?utm_source=openai))
Quotes check
Score: 7
Notes: Direct quotes from Dr Timothy Koskie are used. While the article provides a link to the original study, the quotes cannot be independently verified without accessing the full study. ([theguardian.com](https://www.theguardian.com/media/2026/jan/25/ai-generated-news-summaries-microsoft-copilot-australian-journalism?utm_source=openai))
Source reliability
Score: 9
Notes: The Guardian is a reputable news organisation. However, the article relies on a single source for the study’s findings, which may limit the breadth of information. ([theguardian.com](https://www.theguardian.com/media/2026/jan/25/ai-generated-news-summaries-microsoft-copilot-australian-journalism?utm_source=openai))
Plausibility check
Score: 8
Notes: The claims that AI-generated news summaries favour US and European media over Australian sources are plausible, given the global dominance of major news outlets. However, the study’s methodology and sample size are not detailed, which raises questions about the generalisability of the findings. ([theguardian.com](https://www.theguardian.com/media/2026/jan/25/ai-generated-news-summaries-microsoft-copilot-australian-journalism?utm_source=openai))
Overall assessment
Verdict (FAIL, OPEN, PASS): OPEN
Confidence (LOW, MEDIUM, HIGH): MEDIUM
Summary: The article presents current information but relies heavily on a single source without independent verification. The study’s methodology and sample size are not detailed, raising questions about the generalisability of the findings. ([theguardian.com](https://www.theguardian.com/media/2026/jan/25/ai-generated-news-summaries-microsoft-copilot-australian-journalism?utm_source=openai))
