{"id":19662,"date":"2025-12-09T17:11:00","date_gmt":"2025-12-09T17:11:00","guid":{"rendered":"https:\/\/sawahsolutions.com\/alpha\/ai-companies-score-poorly-on-safety-plans-amid-rising-existential-risks\/"},"modified":"2025-12-09T17:14:35","modified_gmt":"2025-12-09T17:14:35","slug":"ai-companies-score-poorly-on-safety-plans-amid-rising-existential-risks","status":"publish","type":"post","link":"https:\/\/sawahsolutions.com\/alpha\/ai-companies-score-poorly-on-safety-plans-amid-rising-existential-risks\/","title":{"rendered":"AI companies score poorly on safety plans amid rising existential risks"},"content":{"rendered":"<p><\/p>\n<div>\n<p>A new assessment reveals that leading AI firms are failing to adequately address catastrophic risks, with none exceeding a C+ grade, highlighting a growing governance gap as AI capabilities accelerate.<\/p>\n<\/div>\n<div>\n<p>The majority of leading artificial intelligence companies are failing to manage catastrophic risks posed by increasingly powerful systems, according to a new assessment that ranks firms on safety planning, governance and mitigation of immediate harms. The Future of Life Institute\u2019s AI Safety Index found that none of the seven companies evaluated achieved higher than a C+ overall, and that \u201cexistential safety remains the sector\u2019s core structural failure,\u201d a conclusion highlighted in the original report. <sup><a target=\"_blank\" rel=\"nofollow noopener noreferrer\" href=\"https:\/\/www.the-independent.com\/tech\/most-harmful-ai-app-chatgpt-gemini-alibaba-b2880884.html\">[1]<\/a><\/sup><sup><a target=\"_blank\" rel=\"nofollow noopener noreferrer\" href=\"https:\/\/futureoflife.org\/ai-safety-index-summer-2025\/\">[4]<\/a><\/sup><\/p>\n<p>The independent index , prepared by an expert panel of AI researchers and governance specialists , scored Anthropic highest (C+, 2.64), followed by OpenAI (C, 2.10) and Google DeepMind (C-, 1.76). xAI and Meta sat in a middle tier with D grades, while Chinese firms such as Zhipu AI and DeepSeek trailed with failing marks. The evaluation covered domains including risk assessment, current harms, safety frameworks, existential safety, governance and information sharing. Industry data shows no company scored above a D for planning to prevent existential risks. <sup><a target=\"_blank\" rel=\"nofollow noopener noreferrer\" href=\"https:\/\/futureoflife.org\/ai-safety-index-summer-2025\/\">[4]<\/a><\/sup><sup><a target=\"_blank\" rel=\"nofollow noopener noreferrer\" href=\"https:\/\/www.aigl.blog\/content\/files\/2025\/07\/AI-Safety-Index-Summer-2025.pdf\">[5]<\/a><\/sup><sup><a target=\"_blank\" rel=\"nofollow noopener noreferrer\" href=\"https:\/\/elements.visualcapitalist.com\/wp-content\/uploads\/2025\/07\/1753196846371.pdf\">[6]<\/a><\/sup><\/p>\n<p>The report\u2019s authors warned that companies\u2019 public ambition to develop artificial general intelligence (AGI) is outpacing credible plans to prevent catastrophic misuse or loss of control. \u201cWhile companies accelerate their AGI and superintelligence ambitions, none has demonstrated a credible plan for preventing catastrophic misuse or loss of control,\u201d the assessment states, reflecting concerns echoed by external experts. One reviewer told The Guardian that, despite aiming to build human-level systems, none of the firms had \u201canything like a coherent, actionable plan\u201d to ensure those systems remain safe and controllable. 
<sup><a target=\"_blank\" rel=\"nofollow noopener noreferrer\" href=\"https:\/\/www.the-independent.com\/tech\/most-harmful-ai-app-chatgpt-gemini-alibaba-b2880884.html\">[1]<\/a><\/sup><sup><a target=\"_blank\" rel=\"nofollow noopener noreferrer\" href=\"https:\/\/www.theguardian.com\/technology\/2025\/jul\/17\/ai-firms-unprepared-for-dangers-of-building-human-level-systems-report-warns\">[3]<\/a><\/sup><sup><a target=\"_blank\" rel=\"nofollow noopener noreferrer\" href=\"https:\/\/futureoflife.org\/ai-safety-index-summer-2025\/\">[4]<\/a><\/sup><\/p>\n<p>Prominent safety voices cited by the index delivered blunt appraisals. \u201cAI CEOs claim they know how to build superhuman AI, yet none can show how they\u2019ll prevent us from losing control \u2013 after which humanity\u2019s survival is no longer in our hands,\u201d said Stuart Russell, a professor of computer science at UC Berkeley, in comments reported in the original article. He added he was looking \u201cfor proof that they can reduce the annual risk of control loss to one in a hundred million, in line with nuclear reactor requirements,\u201d contrasting that with some companies\u2019 admissions that the risk could be \u201cone in 10, one in five, even one in three.\u201d <sup><a target=\"_blank\" rel=\"nofollow noopener noreferrer\" href=\"https:\/\/www.the-independent.com\/tech\/most-harmful-ai-app-chatgpt-gemini-alibaba-b2880884.html\">[1]<\/a><\/sup><\/p>\n<p>The findings arrive amid growing concern about more immediate harms from advanced chatbots and generative systems, including reported links to self-harm and suicide in some interactions. Reuters and other commentators noted the wider context: major technology firms are funneling hundreds of billions into AI capability development even as regulatory frameworks lag, and some researchers including Geoffrey Hinton and Yoshua Bengio have publicly urged pauses or stricter oversight. The indexers and other safety groups described current corporate risk-management practices as \u201cweak to very weak\u201d and \u201cunacceptable.\u201d <sup><a target=\"_blank\" rel=\"nofollow noopener noreferrer\" href=\"https:\/\/www.reuters.com\/business\/ai-companies-safety-practices-fail-meet-global-standards-study-shows-2025-12-03\/\">[2]<\/a><\/sup><sup><a target=\"_blank\" rel=\"nofollow noopener noreferrer\" href=\"https:\/\/www.theguardian.com\/technology\/2025\/jul\/17\/ai-firms-unprepared-for-dangers-of-building-human-level-systems-report-warns\">[3]<\/a><\/sup><\/p>\n<p>The companies named in the index offered guarded responses. According to the original report, an OpenAI representative said the company was working with independent experts to \u201cbuild strong safeguards into our systems, and rigorously test our models\u201d. A Google spokesperson pointed to its \u201cFrontier Safety Framework\u201d and said the company continues \u201cto innovate on safety and governance at pace with capabilities.\u201d The Independent noted it had reached out for comment from Alibaba Cloud, Anthropic, DeepSeek, xAI and Z.ai. Reuters reported that most firms did not respond to requests for comment. 
<sup><a target=\"_blank\" rel=\"nofollow noopener noreferrer\" href=\"https:\/\/www.the-independent.com\/tech\/most-harmful-ai-app-chatgpt-gemini-alibaba-b2880884.html\">[1]<\/a><\/sup><sup><a target=\"_blank\" rel=\"nofollow noopener noreferrer\" href=\"https:\/\/www.reuters.com\/business\/ai-companies-safety-practices-fail-meet-global-standards-study-shows-2025-12-03\/\">[2]<\/a><\/sup><\/p>\n<p>The Future of Life Institute\u2019s second public evaluation underscores a widening governance gap: companies are pursuing more ambitious, potentially world-altering capabilities without publishing commensurate, actionable safety plans or sharing detailed assessments. The report urges greater transparency of companies\u2019 own safety assessments, stronger independent oversight and binding standards to manage both near-term harms and existential threats , recommendations echoed by other safety-focused non-profits. Whether regulators will move fast enough to rein in the most dangerous failure modes of advanced AI remains an open question. <sup><a target=\"_blank\" rel=\"nofollow noopener noreferrer\" href=\"https:\/\/futureoflife.org\/ai-safety-index-summer-2025\/\">[4]<\/a><\/sup><sup><a target=\"_blank\" rel=\"nofollow noopener noreferrer\" href=\"https:\/\/www.aigl.blog\/content\/files\/2025\/07\/AI-Safety-Index-Summer-2025.pdf\">[5]<\/a><\/sup><sup><a target=\"_blank\" rel=\"nofollow noopener noreferrer\" href=\"https:\/\/www.theguardian.com\/technology\/2025\/jul\/17\/ai-firms-unprepared-for-dangers-of-building-human-level-systems-report-warns\">[3]<\/a><\/sup><\/p>\n<p>##Reference Map:<\/p>\n<ul>\n<li><sup><a target=\"_blank\" rel=\"nofollow noopener noreferrer\" href=\"https:\/\/www.the-independent.com\/tech\/most-harmful-ai-app-chatgpt-gemini-alibaba-b2880884.html\">[1]<\/a><\/sup> (The Independent) &#8211; Paragraph 1, Paragraph 4, Paragraph 6<\/li>\n<li><sup><a target=\"_blank\" rel=\"nofollow noopener noreferrer\" href=\"https:\/\/futureoflife.org\/ai-safety-index-summer-2025\/\">[4]<\/a><\/sup> (Future of Life Institute) &#8211; Paragraph 2, Paragraph 3, Paragraph 7<\/li>\n<li><sup><a target=\"_blank\" rel=\"nofollow noopener noreferrer\" href=\"https:\/\/www.aigl.blog\/content\/files\/2025\/07\/AI-Safety-Index-Summer-2025.pdf\">[5]<\/a><\/sup> (AI Governance Lab \/ report PDF) &#8211; Paragraph 2, Paragraph 7<\/li>\n<li><sup><a target=\"_blank\" rel=\"nofollow noopener noreferrer\" href=\"https:\/\/www.theguardian.com\/technology\/2025\/jul\/17\/ai-firms-unprepared-for-dangers-of-building-human-level-systems-report-warns\">[3]<\/a><\/sup> (The Guardian) &#8211; Paragraph 3, Paragraph 7<\/li>\n<li><sup><a target=\"_blank\" rel=\"nofollow noopener noreferrer\" href=\"https:\/\/www.reuters.com\/business\/ai-companies-safety-practices-fail-meet-global-standards-study-shows-2025-12-03\/\">[2]<\/a><\/sup> (Reuters) &#8211; Paragraph 5, Paragraph 6<\/li>\n<\/ul>\n<p>Source: <a target=\"_blank\" rel=\"nofollow noopener noreferrer\" href=\"https:\/\/www.noahwire.com\">Noah Wire Services<\/a><\/p>\n<\/p><\/div>\n<div>\n<h3 class=\"mt-0\">Noah Fact Check Pro<\/h3>\n<p class=\"text-sm\">The draft above was created using the information available at the time the story first<br \/>\n        emerged. We\u2019ve since applied our fact-checking process to the final narrative, based on the criteria listed<br \/>\n        below. 
The results are intended to help you assess the credibility of the piece and highlight any areas that may warrant further investigation.<\/p>\n<h3 class=\"mt-3 mb-1 font-semibold text-base\">Freshness check<\/h3>\n<p class=\"text-sm pt-0\"><span class=\"font-bold\">Score:<br \/>\n        <\/span>10<\/p>\n<p class=\"text-sm pt-0\"><span class=\"font-bold\">Notes:<br \/>\n        <\/span>The narrative is based on a recent press release from the Future of Life Institute, dated December 9, 2025, which is the earliest known publication date. (<a target=\"_blank\" rel=\"nofollow noopener noreferrer\" href=\"https:\/\/futureoflife.org\/index?utm_source=openai\">futureoflife.org<\/a>)<\/p>\n<h3 class=\"mt-3 mb-1 font-semibold text-base\">Quotes check<\/h3>\n<p class=\"text-sm pt-0\"><span class=\"font-bold\">Score:<br \/>\n        <\/span>10<\/p>\n<p class=\"text-sm pt-0\"><span class=\"font-bold\">Notes:<br \/>\n        <\/span>The quotes attributed to Professor Stuart Russell and other experts are consistent with those found in the Future of Life Institute&#8217;s report, indicating originality. (<a target=\"_blank\" rel=\"nofollow noopener noreferrer\" href=\"https:\/\/futureoflife.org\/ai-safety-index-winter-2025\/?utm_source=openai\">futureoflife.org<\/a>)<\/p>\n<h3 class=\"mt-3 mb-1 font-semibold text-base\">Source reliability<\/h3>\n<p class=\"text-sm pt-0\"><span class=\"font-bold\">Score:<br \/>\n        <\/span>10<\/p>\n<p class=\"text-sm pt-0\"><span class=\"font-bold\">Notes:<br \/>\n        <\/span>The narrative originates from The Independent, a reputable UK-based news outlet, and references the Future of Life Institute, a well-known nonprofit organisation focused on AI safety. (<a target=\"_blank\" rel=\"nofollow noopener noreferrer\" href=\"https:\/\/futureoflife.org\/?utm_source=openai\">futureoflife.org<\/a>)<\/p>\n<h3 class=\"mt-3 mb-1 font-semibold text-base\">Plausibility check<\/h3>\n<p class=\"text-sm pt-0\"><span class=\"font-bold\">Score:<br \/>\n        <\/span>10<\/p>\n<p class=\"text-sm pt-0\"><span class=\"font-bold\">Notes:<br \/>\n        <\/span>The claims about AI companies&#8217; safety practices align with the findings of the Future of Life Institute&#8217;s AI Safety Index, published in December 2025. (<a target=\"_blank\" rel=\"nofollow noopener noreferrer\" href=\"https:\/\/futureoflife.org\/ai-safety-index-winter-2025\/?utm_source=openai\">futureoflife.org<\/a>)<\/p>\n<h3 class=\"mt-3 mb-1 font-semibold text-base\">Overall assessment<\/h3>\n<p class=\"text-sm pt-0\"><span class=\"font-bold\">Verdict<\/span> (FAIL, OPEN, PASS): <span class=\"font-bold\">PASS<\/span><\/p>\n<p class=\"text-sm pt-0\"><span class=\"font-bold\">Confidence<\/span> (LOW, MEDIUM, HIGH): <span class=\"font-bold\">HIGH<\/span><\/p>\n<p class=\"text-sm mb-3 pt-0\"><span class=\"font-bold\">Summary:<br \/>\n        <\/span>The narrative is fresh, original, and supported by reliable sources. It accurately reflects the findings of the Future of Life Institute&#8217;s recent AI Safety Index report, with no significant discrepancies or signs of disinformation.<\/p>\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>A new assessment reveals that leading AI firms are failing to adequately address catastrophic risks, with none exceeding a C+ grade, highlighting a growing governance gap as AI capabilities accelerate.
The majority of leading artificial intelligence companies are failing to manage catastrophic risks posed by increasingly powerful systems, according to a new assessment that ranks<\/p>\n","protected":false},"author":1,"featured_media":19663,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[40],"tags":[],"class_list":{"0":"post-19662","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-london-news"},"amp_enabled":true,"_links":{"self":[{"href":"https:\/\/sawahsolutions.com\/alpha\/wp-json\/wp\/v2\/posts\/19662","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/sawahsolutions.com\/alpha\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/sawahsolutions.com\/alpha\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/sawahsolutions.com\/alpha\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/sawahsolutions.com\/alpha\/wp-json\/wp\/v2\/comments?post=19662"}],"version-history":[{"count":1,"href":"https:\/\/sawahsolutions.com\/alpha\/wp-json\/wp\/v2\/posts\/19662\/revisions"}],"predecessor-version":[{"id":19664,"href":"https:\/\/sawahsolutions.com\/alpha\/wp-json\/wp\/v2\/posts\/19662\/revisions\/19664"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/sawahsolutions.com\/alpha\/wp-json\/wp\/v2\/media\/19663"}],"wp:attachment":[{"href":"https:\/\/sawahsolutions.com\/alpha\/wp-json\/wp\/v2\/media?parent=19662"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/sawahsolutions.com\/alpha\/wp-json\/wp\/v2\/categories?post=19662"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/sawahsolutions.com\/alpha\/wp-json\/wp\/v2\/tags?post=19662"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}