{"id":23021,"date":"2026-04-29T16:41:00","date_gmt":"2026-04-29T16:41:00","guid":{"rendered":"https:\/\/sawahsolutions.com\/alpha\/ai-generated-peer-review-reports-raise-concerns-over-transparency-and-accountability-in-scholarly-publishing\/"},"modified":"2026-04-29T16:45:48","modified_gmt":"2026-04-29T16:45:48","slug":"ai-generated-peer-review-reports-raise-concerns-over-transparency-and-accountability-in-scholarly-publishing","status":"publish","type":"post","link":"https:\/\/sawahsolutions.com\/alpha\/ai-generated-peer-review-reports-raise-concerns-over-transparency-and-accountability-in-scholarly-publishing\/","title":{"rendered":"AI-generated peer review reports raise concerns over transparency and accountability in scholarly publishing"},"content":{"rendered":"<p><\/p>\n<div>\n<p>A PhD student in the US identified AI-written referee reports, highlighting growing challenges in enforcing policies against generative AI use and maintaining review integrity in scholarly journals.<\/p>\n<\/div>\n<div>\n<p>A philosophy PhD student in the United States has described receiving a journal referee report that, on later inspection, appeared to have been written by an AI system rather than by a human reviewer. The case echoes an earlier warning from Daily Nous, which in 2024 highlighted a study suggesting that between 7% and 17% of sentences in computer science peer reviews may have been generated by large language models, pointing to a problem that appears to be spreading beyond one field.<\/p>\n<p>The student said the first referee\u2019s comments initially seemed thoughtful and constructive, but that another reader later flagged them as sounding machine-generated. After checking with AI detectors, the student said the text appeared highly likely to be fully AI-written. The concern is not just about tone or style. 
As the Daily Nous post argues, reviewers who upload manuscripts into chatbots may be breaching journal policies, author confidentiality and, in some cases, copyright and data-security obligations. Oxford Academic, for instance, says reviewers must not upload manuscripts or proposals into a generative AI tool for any purpose.<\/p>\n<p>There is also a sharper issue of responsibility. A journal editor asks a named scholar to assess a paper, not to delegate the task to software. On that logic, using AI to draft a report without disclosure can amount to passing off another entity\u2019s work as one\u2019s own. Elsevier says reviewers should not use generative AI or AI-assisted tools to assist with scientific review, and it further warns that reviewers should not upload their own reports into AI systems for polishing, because the report may itself contain confidential material. Taylor &amp; Francis takes a somewhat more permissive view, saying AI may be used to improve review language so long as the reviewer remains accountable for accuracy and integrity. Academic Medicine has likewise argued that any AI use in scholarly publishing needs transparency, editorial oversight and a firm commitment to confidentiality.<\/p>\n<p>Even so, the practical problem of enforcement remains unresolved. A study in Research Evaluation found that some publisher policies allowing limited AI-assisted polishing of referee reports are difficult to police, because current detectors can misclassify mixed human-AI text as wholly machine-generated. A separate arXiv preprint reached a similar conclusion, warning that existing detection tools are not reliable enough to identify AI use with confidence and that public estimates of AI-written peer review should therefore be treated cautiously. 
For now, the simplest rule may be the most defensible one: if you agree to referee a paper, you should do the work yourself.<\/p>\n<h3>Source Reference Map<\/h3>\n<p><strong>Inspired by headline at:<\/strong> <sup><a target=\"_blank\" rel=\"nofollow noopener noreferrer\" href=\"https:\/\/dailynous.com\/2026\/04\/29\/illicit-use-ai-philosophers-refereeing-journals\/\">[1]<\/a><\/sup><\/p>\n<p><strong>Sources by paragraph:<\/strong><\/p>\n<p>Source: <a target=\"_blank\" rel=\"nofollow noopener noreferrer\" href=\"https:\/\/www.noahwire.com\">Noah Wire Services<\/a><\/p>\n<\/div>\n<div>\n<h3 class=\"mt-0\">Noah Fact Check Pro<\/h3>\n<p class=\"text-sm sans\">The draft above was created using the information available at the time the story first<br \/>\n        emerged. We\u2019ve since applied our fact-checking process to the final narrative, based on the criteria listed<br \/>\n        below. The results are intended to help you assess the credibility of the piece and highlight any areas that may<br \/>\n        warrant further investigation.<\/p>\n<h3 class=\"mt-3 mb-1 font-semibold text-base\">Freshness check<\/h3>\n<p class=\"text-sm pt-0 sans\"><span class=\"font-bold\">Score:<br \/>\n        <\/span>10<\/p>\n<p class=\"text-sm pt-0 sans\"><span class=\"font-bold\">Notes:<br \/>\n        <\/span>The article was published today, April 29, 2026, and presents a recent case of AI-generated referee reports in philosophy journals, indicating high freshness.<\/p>\n<h3 class=\"mt-3 mb-1 font-semibold text-base\">Quotes check<\/h3>\n<p class=\"text-sm pt-0 sans\"><span class=\"font-bold\">Score:<br \/>\n        <\/span>8<\/p>\n<p class=\"text-sm pt-0 sans\"><span class=\"font-bold\">Notes:<br \/>\n        <\/span>The article includes direct quotes from a philosophy PhD student and references to AI detection tools. 
However, the specific AI detection tools used are not named, and the student&#8217;s identity is not disclosed, making independent verification challenging.<\/p>\n<h3 class=\"mt-3 mb-1 font-semibold text-base\">Source reliability<\/h3>\n<p class=\"text-sm pt-0 sans\"><span class=\"font-bold\">Score:<br \/>\n        <\/span>7<\/p>\n<p class=\"text-sm pt-0 sans\"><span class=\"font-bold\">Notes:<br \/>\n        <\/span>The article is published on Daily Nous, a reputable platform within the philosophy community. However, it is a niche publication, and the content is authored by Justin Weinberg, whose individual credibility is not independently verified.<\/p>\n<h3 class=\"mt-3 mb-1 font-semibold text-base\">Plausibility check<\/h3>\n<p class=\"text-sm pt-0 sans\"><span class=\"font-bold\">Score:<br \/>\n        <\/span>9<\/p>\n<p class=\"text-sm pt-0 sans\"><span class=\"font-bold\">Notes:<br \/>\n    <\/span>The scenario described aligns with known concerns about AI&#8217;s role in academic peer review. Similar issues have been reported in other fields, such as computer science, where AI-generated content in reviews has been documented. However, the specific case in philosophy lacks independent verification.<\/p>\n<h3 class=\"mt-3 mb-1 font-semibold text-base\">Overall assessment<\/h3>\n<p class=\"text-sm pt-0 sans\"><span class=\"font-bold\">Verdict<\/span> (FAIL, OPEN, PASS): <span class=\"font-bold\">OPEN<\/span><\/p>\n<p class=\"text-sm pt-0 sans\"><span class=\"font-bold\">Confidence<\/span> (LOW, MEDIUM, HIGH): <span class=\"font-bold\">MEDIUM<\/span><\/p>\n<p class=\"text-sm mb-3 pt-0 sans\"><span class=\"font-bold\">Summary:<br \/>\n        <\/span>The article presents a recent case of alleged AI-generated referee reports in philosophy journals, citing policies from Elsevier and Oxford Academic on AI use in peer review. 
However, the primary source is an unverified account from a philosophy PhD student, and the specific AI detection tools used are not named, making independent verification challenging. Given these factors, the overall assessment is OPEN with medium confidence.<\/p>\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>A PhD student in the US identified AI-written referee reports, highlighting growing challenges in enforcing policies against generative AI use and maintaining review integrity in scholarly journals. A philosophy PhD student in the United States has described receiving a journal referee report that, on later inspection, appeared to have been written by an AI system<\/p>\n","protected":false},"author":1,"featured_media":23022,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[40],"tags":[],"class_list":{"0":"post-23021","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-london-news"},"amp_enabled":true,"_links":{"self":[{"href":"https:\/\/sawahsolutions.com\/alpha\/wp-json\/wp\/v2\/posts\/23021","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/sawahsolutions.com\/alpha\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/sawahsolutions.com\/alpha\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/sawahsolutions.com\/alpha\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/sawahsolutions.com\/alpha\/wp-json\/wp\/v2\/comments?post=23021"}],"version-history":[{"count":1,"href":"https:\/\/sawahsolutions.com\/alpha\/wp-json\/wp\/v2\/posts\/23021\/revisions"}],"predecessor-version":[{"id":23023,"href":"https:\/\/sawahsolutions.com\/alpha\/wp-json\/wp\/v2\/posts\/23021\/revisions\/23023"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/sawahsolutions.com\/alpha\/wp-json\/wp\/v2\/media\/23022"}],"wp:attachment":[{"href":"https:\/\/sawahsolutions.com\/alpha\/wp-json\/wp\/v2\/media?parent=23021"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/sawahsolutions.com\/alpha\/wp-json\/wp\/v2\/categories?post=23021"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/sawahsolutions.com\/alpha\/wp-json\/wp\/v2\/tags?post=23021"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}