{"id":20888,"date":"2026-01-16T15:03:00","date_gmt":"2026-01-16T15:03:00","guid":{"rendered":"https:\/\/sawahsolutions.com\/lap\/jaron-lanier-warns-unaccountable-ai-risks-societal-collapse-and-calls-for-human-centric-accountability\/"},"modified":"2026-01-16T15:32:58","modified_gmt":"2026-01-16T15:32:58","slug":"jaron-lanier-warns-unaccountable-ai-risks-societal-collapse-and-calls-for-human-centric-accountability","status":"publish","type":"post","link":"https:\/\/sawahsolutions.com\/lap\/jaron-lanier-warns-unaccountable-ai-risks-societal-collapse-and-calls-for-human-centric-accountability\/","title":{"rendered":"Jaron Lanier warns unaccountable AI risks societal collapse and calls for human-centric accountability"},"content":{"rendered":"<p><\/p>\n<div>\n<p>The pioneering technologist Jaron Lanier issues a stark warning on the dangers of opaque AI systems, emphasising the need for human accountability, transparency, and equitable data practices to safeguard societal stability.<\/p>\n<\/div>\n<div>\n<p>Jaron Lanier, the technologist widely credited with founding the field of virtual reality, has renewed a stark caution about artificial intelligence: without human accountability, he says, \u201cSociety cannot function if no one is accountable for AI.\u201d Speaking on the podcast \u201cThe Ten Reckonings\u201d in conversation with Dr. Ben Goertzel, Lanier argued that opaque systems and diffuse responsibility risk eroding public trust and destabilising democratic institutions. <sup><a target=\"_blank\" rel=\"nofollow noopener noreferrer\" href=\"https:\/\/www.webpronews.com\/jaron-lanier-warns-unaccountable-ai-risks-societal-collapse\/\">[1]<\/a><\/sup><\/p>\n<p>Lanier\u2019s warning builds on a long-running critique of the industry\u2019s treatment of AI as if it were an autonomous actor rather than an assemblage of human contributions. 
According to his essay in The New Yorker, \u201cThere Is No A.I.,\u201d and subsequent public talks, he frames large models as a form of social collaboration and stresses the need for \u201cdata dignity\u201d: a system in which individuals who supply the data that trains AI are acknowledged and compensated. <sup><a target=\"_blank\" rel=\"nofollow noopener noreferrer\" href=\"https:\/\/www.newyorker.com\/science\/annals-of-artificial-intelligence\/there-is-no-ai\">[2]<\/a><\/sup><sup><a target=\"_blank\" rel=\"nofollow noopener noreferrer\" href=\"https:\/\/cdss.berkeley.edu\/video\/data-dignity-and-inversion-ai-jaron-lanier\">[6]<\/a><\/sup><\/p>\n<p>That historical perspective informs his present concern that anthropomorphising machines can distract from the real problem: the humans who design, deploy and profit from them. In a 2023 interview with The Guardian he warned, \u201cThe danger isn\u2019t that AI destroys us. It\u2019s that it drives us insane,\u201d arguing that systems optimised for engagement can warp information ecosystems and human behaviour. <sup><a target=\"_blank\" rel=\"nofollow noopener noreferrer\" href=\"https:\/\/www.theguardian.com\/technology\/2023\/mar\/23\/tech-guru-jaron-lanier-the-danger-isnt-that-ai-destroys-us-its-that-it-drives-us-insane\">[3]<\/a><\/sup><sup><a target=\"_blank\" rel=\"nofollow noopener noreferrer\" href=\"https:\/\/www.webpronews.com\/jaron-lanier-warns-unaccountable-ai-risks-societal-collapse\/\">[1]<\/a><\/sup><\/p>\n<p>Lanier\u2019s current interventions emphasise practical remedies. In Berkeley talks and at industry events he has advocated \u201cinversion\u201d models that place people at the centre of AI systems, accompanied by provenance calculations that make outputs traceable to specific inputs. According to the UC Berkeley Centre for Data Science, such provenance could help address safety, fairness and alignment by revealing which human contributions shaped a result. 
<sup><a target=\"_blank\" rel=\"nofollow noopener noreferrer\" href=\"https:\/\/cdss.berkeley.edu\/video\/data-dignity-and-inversion-ai-jaron-lanier\">[6]<\/a><\/sup><sup><a target=\"_blank\" rel=\"nofollow noopener noreferrer\" href=\"https:\/\/www.webpronews.com\/jaron-lanier-warns-unaccountable-ai-risks-societal-collapse\/\">[1]<\/a><\/sup><\/p>\n<p>The accountability gap is visible across sectors, Lanier told listeners: from finance to healthcare, algorithmic errors or biases can produce tangible harms without clear legal recourse. Industry commentators and online discussants echo this point, arguing that concentrated platform power and undisclosed training practices enable \u201csupra-legal\u201d outcomes unless governance, transparency and authorship rules are imposed. <sup><a target=\"_blank\" rel=\"nofollow noopener noreferrer\" href=\"https:\/\/www.webpronews.com\/jaron-lanier-warns-unaccountable-ai-risks-societal-collapse\/\">[1]<\/a><\/sup><sup><a target=\"_blank\" rel=\"nofollow noopener noreferrer\" href=\"https:\/\/www.theguardian.com\/technology\/2022\/nov\/27\/jaron-lanier-tech-threat-humanity-twitter-social-media\">[5]<\/a><\/sup><sup><a target=\"_blank\" rel=\"nofollow noopener noreferrer\" href=\"https:\/\/www.theguardian.com\/technology\/2023\/mar\/23\/tech-guru-jaron-lanier-the-danger-isnt-that-ai-destroys-us-its-that-it-drives-us-insane\">[3]<\/a><\/sup><\/p>\n<p>Empathy, Lanier insists, should be reoriented. Rather than extending moral standing to non-sentient systems, he argues for empathy directed at the people affected by AI\u2019s failures: the data contributors, users and communities who bear the social costs. Recent academic work supports the idea that social skills and human-centred design improve human\u2013AI collaboration, reinforcing Lanier\u2019s focus on oversight and humane system design. 
<sup><a target=\"_blank\" rel=\"nofollow noopener noreferrer\" href=\"https:\/\/www.webpronews.com\/jaron-lanier-warns-unaccountable-ai-risks-societal-collapse\/\">[1]<\/a><\/sup><sup><a target=\"_blank\" rel=\"nofollow noopener noreferrer\" href=\"https:\/\/cdss.berkeley.edu\/video\/data-dignity-and-inversion-ai-jaron-lanier\">[6]<\/a><\/sup><\/p>\n<p>Not all industry leaders share his apocalyptic framing. As debates continue, some executives stress the practical gains from AI and caution against alarmism. According to a recent TechRadar report, Nvidia\u2019s chief executive has downplayed the notion of \u201cgod-like\u201d AI, urging attention to accountable, efficiency-enhancing applications. Lanier\u2019s reply is not to deny progress but to insist that innovation must be tethered to traceability and equitable distribution of benefits. <sup><a target=\"_blank\" rel=\"nofollow noopener noreferrer\" href=\"https:\/\/www.webpronews.com\/jaron-lanier-warns-unaccountable-ai-risks-societal-collapse\/\">[1]<\/a><\/sup><sup><a target=\"_blank\" rel=\"nofollow noopener noreferrer\" href=\"https:\/\/www.jaronlanier.com\/index.html\">[7]<\/a><\/sup><\/p>\n<p>Lanier\u2019s prescription combines regulatory pressure, technical mechanisms and economic redesign: enforceable accountability for deployers, provenance and auditability of model outputs, and compensation frameworks for data contributors. He has repeatedly framed these reforms as necessary to avert the social disintegration he fears if AI\u2019s human authors remain unaccountable. Advocates on platforms such as X and technical forums have amplified the call for auditable systems and clear authorship, while ethicists underscore the epistemic risks posed by opaque models. 
<sup><a target=\"_blank\" rel=\"nofollow noopener noreferrer\" href=\"https:\/\/www.webpronews.com\/jaron-lanier-warns-unaccountable-ai-risks-societal-collapse\/\">[1]<\/a><\/sup><sup><a target=\"_blank\" rel=\"nofollow noopener noreferrer\" href=\"https:\/\/www.newyorker.com\/science\/annals-of-artificial-intelligence\/there-is-no-ai\">[2]<\/a><\/sup><sup><a target=\"_blank\" rel=\"nofollow noopener noreferrer\" href=\"https:\/\/www.theguardian.com\/technology\/2023\/mar\/23\/tech-guru-jaron-lanier-the-danger-isnt-that-ai-destroys-us-its-that-it-drives-us-insane\">[3]<\/a><\/sup><\/p>\n<p>If Lanier\u2019s plea finds policy purchase, it would reshape conversations about who benefits from AI and who is held responsible when it harms. According to his writings and recent public remarks, the goal is not to halt technological progress but to ensure it is organised around human dignity and transparent accountability so that AI enriches rather than undermines society. <sup><a target=\"_blank\" rel=\"nofollow noopener noreferrer\" href=\"https:\/\/www.newyorker.com\/science\/annals-of-artificial-intelligence\/there-is-no-ai\">[2]<\/a><\/sup><sup><a target=\"_blank\" rel=\"nofollow noopener noreferrer\" href=\"https:\/\/cdss.berkeley.edu\/video\/data-dignity-and-inversion-ai-jaron-lanier\">[6]<\/a><\/sup><sup><a target=\"_blank\" rel=\"nofollow noopener noreferrer\" href=\"https:\/\/www.webpronews.com\/jaron-lanier-warns-unaccountable-ai-risks-societal-collapse\/\">[1]<\/a><\/sup><\/p>\n<h3>\ud83d\udccc Reference Map:<\/h3>\n<ul>\n<li><sup><a target=\"_blank\" rel=\"nofollow noopener noreferrer\" href=\"https:\/\/www.webpronews.com\/jaron-lanier-warns-unaccountable-ai-risks-societal-collapse\/\">[1]<\/a><\/sup> (WebProNews) &#8211; Paragraph 1, Paragraph 3, Paragraph 5, Paragraph 8, Paragraph 9<\/li>\n<li><sup><a target=\"_blank\" rel=\"nofollow noopener noreferrer\" 
href=\"https:\/\/www.newyorker.com\/science\/annals-of-artificial-intelligence\/there-is-no-ai\">[2]<\/a><\/sup> (The New Yorker) &#8211; Paragraph 2, Paragraph 9<\/li>\n<li><sup><a target=\"_blank\" rel=\"nofollow noopener noreferrer\" href=\"https:\/\/www.theguardian.com\/technology\/2023\/mar\/23\/tech-guru-jaron-lanier-the-danger-isnt-that-ai-destroys-us-its-that-it-drives-us-insane\">[3]<\/a><\/sup> (The Guardian) &#8211; Paragraph 3, Paragraph 5, Paragraph 9<\/li>\n<li><sup><a target=\"_blank\" rel=\"nofollow noopener noreferrer\" href=\"https:\/\/cdss.berkeley.edu\/video\/data-dignity-and-inversion-ai-jaron-lanier\">[6]<\/a><\/sup> (UC Berkeley Centre for Data Science) &#8211; Paragraph 4, Paragraph 6, Paragraph 9<\/li>\n<li><sup><a target=\"_blank\" rel=\"nofollow noopener noreferrer\" href=\"https:\/\/www.theguardian.com\/technology\/2022\/nov\/27\/jaron-lanier-tech-threat-humanity-twitter-social-media\">[5]<\/a><\/sup> (The Guardian 2022) &#8211; Paragraph 5<\/li>\n<li><sup><a target=\"_blank\" rel=\"nofollow noopener noreferrer\" href=\"https:\/\/www.jaronlanier.com\/index.html\">[7]<\/a><\/sup> (JaronLanier.com\/TechRadar reference) &#8211; Paragraph 8<\/li>\n<\/ul>\n<p>Source: <a target=\"_blank\" rel=\"nofollow noopener noreferrer\" href=\"https:\/\/www.noahwire.com\">Noah Wire Services<\/a><\/p>\n<\/p><\/div>\n<div>\n<h3 class=\"mt-0\">Noah Fact Check Pro<\/h3>\n<p class=\"text-sm\">The draft above was created using the information available at the time the story first<br \/>\n        emerged. We\u2019ve since applied our fact-checking process to the final narrative, based on the criteria listed<br \/>\n        below. 
The results are intended to help you assess the credibility of the piece and highlight any areas that may<br \/>\n        warrant further investigation.<\/p>\n<h3 class=\"mt-3 mb-1 font-semibold text-base\">Freshness check<\/h3>\n<p class=\"text-sm pt-0\"><span class=\"font-bold\">Score:<br \/>\n        <\/span>5<\/p>\n<p class=\"text-sm pt-0\"><span class=\"font-bold\">Notes:<br \/>\n        <\/span>The article was published on January 15, 2026, referencing a podcast episode from January 14, 2026. However, similar discussions by Jaron Lanier on AI accountability have appeared in earlier publications, such as a March 2023 interview with The Guardian ([theguardian.com](https:\/\/www.theguardian.com\/technology\/2023\/mar\/23\/tech-guru-jaron-lanier-the-danger-isnt-that-ai-destroys-us-its-that-it-drives-us-insane?utm_source=openai)). This suggests that the content may not be entirely original, potentially reducing its freshness score.<\/p>\n<h3 class=\"mt-3 mb-1 font-semibold text-base\">Quotes check<\/h3>\n<p class=\"text-sm pt-0\"><span class=\"font-bold\">Score:<br \/>\n        <\/span>6<\/p>\n<p class=\"text-sm pt-0\"><span class=\"font-bold\">Notes:<br \/>\n        <\/span>The article includes direct quotes from Jaron Lanier, such as &#8220;Society cannot function if no one is accountable for AI.&#8221; While these quotes are attributed, they have appeared in previous sources, including the March 2023 interview with The Guardian ([theguardian.com](https:\/\/www.theguardian.com\/technology\/2023\/mar\/23\/tech-guru-jaron-lanier-the-danger-isnt-that-ai-destroys-us-its-that-it-drives-us-insane?utm_source=openai)). 
This raises concerns about the originality of the content.<\/p>\n<h3 class=\"mt-3 mb-1 font-semibold text-base\">Source reliability<\/h3>\n<p class=\"text-sm pt-0\"><span class=\"font-bold\">Score:<br \/>\n        <\/span>4<\/p>\n<p class=\"text-sm pt-0\"><span class=\"font-bold\">Notes:<br \/>\n        <\/span>The primary source, WebProNews, is a lesser-known publication. The article also references reputable sources like The Guardian and UC Berkeley Centre for Data Science. However, the reliance on a niche source and the potential recycling of content from more established outlets diminishes the overall reliability score.<\/p>\n<h3 class=\"mt-3 mb-1 font-semibold text-base\">Plausibility check<\/h3>\n<p class=\"text-sm pt-0\"><span class=\"font-bold\">Score:<br \/>\n        <\/span>7<\/p>\n<p class=\"text-sm pt-0\"><span class=\"font-bold\">Notes:<br \/>\n        <\/span>The claims made in the article align with Jaron Lanier&#8217;s known views on AI and accountability. However, the repetition of previously published quotes and ideas without new supporting details or developments raises questions about the novelty and depth of the reporting.<\/p>\n<h3 class=\"mt-3 mb-1 font-semibold text-base\">Overall assessment<\/h3>\n<p class=\"text-sm pt-0\"><span class=\"font-bold\">Verdict<\/span> (FAIL, OPEN, PASS): <span class=\"font-bold\">FAIL<\/span><\/p>\n<p class=\"text-sm pt-0\"><span class=\"font-bold\">Confidence<\/span> (LOW, MEDIUM, HIGH): <span class=\"font-bold\">MEDIUM<\/span><\/p>\n<p class=\"text-sm mb-3 pt-0\"><span class=\"font-bold\">Summary:<br \/>\n        <\/span>The article presents content that appears to be recycled from earlier publications, including direct quotes from Jaron Lanier that have appeared in previous sources. The reliance on a lesser-known publication and the lack of new supporting details or developments diminish the overall credibility of the piece. 
Given these concerns, the content does not meet the necessary standards for publication under our editorial indemnity.<\/p>\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>The pioneering technologist Jaron Lanier issues a stark warning on the dangers of opaque AI systems, emphasising the need for human accountability, transparency, and equitable data practices to safeguard societal stability. Jaron Lanier, the technologist widely credited with founding the field of virtual reality, has renewed a stark caution about artificial intelligence: without human accountability,<\/p>\n","protected":false},"author":1,"featured_media":20889,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[40],"tags":[],"class_list":{"0":"post-20888","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-london-news"},"amp_enabled":true,"_links":{"self":[{"href":"https:\/\/sawahsolutions.com\/lap\/wp-json\/wp\/v2\/posts\/20888","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/sawahsolutions.com\/lap\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/sawahsolutions.com\/lap\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/sawahsolutions.com\/lap\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/sawahsolutions.com\/lap\/wp-json\/wp\/v2\/comments?post=20888"}],"version-history":[{"count":1,"href":"https:\/\/sawahsolutions.com\/lap\/wp-json\/wp\/v2\/posts\/20888\/revisions"}],"predecessor-version":[{"id":20890,"href":"https:\/\/sawahsolutions.com\/lap\/wp-json\/wp\/v2\/posts\/20888\/revisions\/20890"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/sawahsolutions.com\/lap\/wp-json\/wp\/v2\/media\/20889"}],"wp:attachment":[{"href":"https:\/\/sawahsolutions.com\/lap\/wp-json\/wp\/v2\/media?parent=20888"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"http
s:\/\/sawahsolutions.com\/lap\/wp-json\/wp\/v2\/categories?post=20888"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/sawahsolutions.com\/lap\/wp-json\/wp\/v2\/tags?post=20888"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}