{"id":13285,"date":"2025-10-13T04:05:00","date_gmt":"2025-10-13T04:05:00","guid":{"rendered":"https:\/\/sawahsolutions.com\/lap\/reframing-ai-human-designed-systems-embedded-in-societal-power-structures\/"},"modified":"2025-10-13T09:59:27","modified_gmt":"2025-10-13T09:59:27","slug":"reframing-ai-human-designed-systems-embedded-in-societal-power-structures","status":"publish","type":"post","link":"https:\/\/sawahsolutions.com\/lap\/reframing-ai-human-designed-systems-embedded-in-societal-power-structures\/","title":{"rendered":"Reframing AI: human-designed systems embedded in societal power structures"},"content":{"rendered":"<p><\/p>\n<div>\n<p>A nuanced exploration reveals that AI is a human-centric technology, shaped by data, bias, and regulation, challenging notions of autonomous machines and emphasising responsible oversight.<\/p>\n<\/div>\n<div>\n<p>The term \u2018artificial intelligence\u2019 often conjures images of detached, autonomous machines operating independently from human society. Yet, a closer examination reveals that AI is far from an alien or independent entity; it is deeply entwined with human reality. From popular voice assistants like Siri to advanced educational tools powered by GPT, AI systems operate as extensions of human design, logic, and culture. For instance, Google Translate, which often appears to \u2018know\u2019 numerous languages, actually relies entirely on millions of human translations inputted into its system. Therefore, the \u2018artificial\u2019 label refers more to the method of creation than to the essence of AI itself. This distinction is crucial as it challenges the misconception that AI operates autonomously, instead underlining its role as a human-anchored amplification of our own power.<\/p>\n<p>This understanding has significant implications for how societies regulate and manage AI technologies. 
The European Union\u2019s AI Act, enacted in 2024, exemplifies this perspective by defining AI as software developed through human-designed techniques and emphasising human accountability. Such policy choices counter the myth of AI\u2019s autonomy and prioritise responsibility. Similarly, the United Nations Educational, Scientific and Cultural Organisation\u2019s (UNESCO) Recommendation on the Ethics of Artificial Intelligence, adopted in 2021, reiterates that AI is always \u201chuman-made and human-directed,\u201d framing it as a socio-technical system rather than an independent intelligence. The language we use to describe AI shapes the frameworks for governance, ethical oversight, and accountability, which are essential to mitigate AI\u2019s potential harms.<\/p>\n<p>Underlying all AI systems is human-generated data, sometimes described as AI\u2019s \u2018DNA.\u2019 Models like OpenAI\u2019s GPT or Anthropic\u2019s Claude do not \u2018think\u2019 but generate responses based on the vast datasets of human writing and behaviour they have ingested. Streaming services such as Spotify leverage users\u2019 listening habits to power recommendation algorithms. Consequently, AI is essentially a repository and mathematical model of human action, reflecting our behaviours at scale. However, this dependency introduces vulnerabilities. The Cambridge Analytica scandal, involving the misuse of Facebook data in 2016 to influence elections, exposed how AI could amplify political biases and manipulation. Closer to home, Sri Lanka witnessed the exacerbation of social media misinformation in the 2018 anti-Muslim riots, where unregulated algorithmic systems intensified hate speech. These examples demonstrate that AI is not detached; it is profoundly embedded in societal dynamics and human consequences.<\/p>\n<p>The often-assumed \u2018intelligence\u2019 of AI is, in fact, a sophisticated imitation. 
AI systems generate plausible outputs by recognising patterns and predicting likely sequences, rather than possessing true understanding or reasoning. IBM\u2019s Watson, which triumphed on the game show Jeopardy! in 2011, did so by matching clues to linguistic patterns, not through human-like reasoning. Similarly, large language models can draft coherent essays but lack comprehension of the ethical, legal, or emotional contexts involved. This distinction is vital as it warns against overreliance on AI outputs. For example, in 2023 a US lawyer was sanctioned for submitting a legal brief containing fabricated case references generated by ChatGPT, which statistically mimicked legal citations without discerning truth. Earlier, in 2016, Microsoft\u2019s Tay chatbot demonstrated how AI can replicate harmful human biases, quickly producing racist content after exposure to toxic user interactions. Recognising that AI simulates rather than understands human expression is crucial to avoiding misplaced trust and misdirected accountability.<\/p>\n<p>Bias remains a persistent problem with AI, often replicating and amplifying societal prejudices. The COMPAS algorithm in the United States, intended to predict criminal recidivism, disproportionately labelled Black defendants as high risk while underestimating risks for white defendants due to biased historical data. Hiring algorithms, such as Amazon\u2019s failed experiment in 2018, have similarly discriminated against women applicants. These issues are not confined to Western contexts. In India, Aadhaar-linked biometric systems have excluded rural and poor populations from vital public services. In Sri Lanka, the use of facial recognition technologies raises concerns about underrepresentation of darker-skinned individuals. These instances underscore that AI systems reflect structural biases embedded in human data rather than acting as neutral arbiters. 
Addressing bias ethically demands recognising its deep roots in social structures.<\/p>\n<p>The question of AI\u2019s creativity has provoked debate as systems like DALL\u00b7E and Midjourney generate images imitating famous artists such as Picasso, while music-generation systems mimic composers such as Mozart. While these creations appear original, they are statistical recombinations of pre-existing works without intentionality or emotional input, offering imitation at scale rather than genuine creativity. The rise of AI-generated art has sparked controversies regarding fairness and authorship; notably, in 2022 the Colorado State Fair\u2019s digital art competition awarded first place to an AI-generated image, igniting discussions on human versus machine creativity. Copyright debates have also emerged, with the US Copyright Office ruling that AI creations lacking human input do not qualify for copyright protection, underscoring the primacy of human agency in authorship. AI thus challenges conventional notions of creativity, compelling society to reassess how we value human and machine outputs.<\/p>\n<p>Despite narratives of autonomy, AI systems rely heavily on human oversight and intervention. Self-driving vehicles tested by Tesla and Waymo still require human supervision, updating, and retraining to function safely. Investigations into fatal accidents involving Tesla\u2019s Autopilot highlight how human monitoring and improved safety measures are critical. AI models also suffer from \u2018model drift\u2019 as they become less accurate over time without fresh data and human recalibration. Regulatory frameworks like the EU AI Act ban harmful uses such as social scoring systems, recognising the dangers of unchecked AI surveillance. Conversely, countries like Sri Lanka lack comprehensive AI governance, leaving their populations vulnerable to misuse, especially in sensitive areas like elections and public security. 
These realities attest that AI\u2019s future hinges on human decisions, ethical guidelines, and regulation rather than on autonomous machine evolution.<\/p>\n<p>In essence, AI is far less \u2018artificial\u2019 than commonly perceived. It is a mirror reflecting human data, ethics, and societal structures, extending human capabilities computationally rather than replacing them. Its flaws\u2014bias, imitation, deterioration\u2014are magnifications of human limitations, while its capabilities are human achievements realised at scale. The critical challenge lies in shaping AI responsibly to prevent entrenching existing inequalities and to ensure it benefits all of humanity. While bodies like the EU and UNESCO have pioneered frameworks emphasising human rights, dignity, and accountability, many developing countries, including Sri Lanka, remain without comprehensive policies. Reframing AI as a socio-technical system with human accountability at its core is vital. Ultimately, AI\u2019s trajectory will be shaped not by machines themselves but by the governance, ethics, and laws humanity enacts.<\/p>\n<h3>\ud83d\udccc Reference Map:<\/h3>\n<p>Source: <a href=\"https:\/\/www.noahwire.com\" rel=\"nofollow noopener\" target=\"_blank\">Noah Wire Services<\/a><\/p>\n<\/div>\n<div>\n<h3 class=\"mt-0\">Noah Fact Check Pro<\/h3>\n<p class=\"text-sm\">The draft above was created using the information available at the time the story first<br \/>\n        emerged. We\u2019ve since applied our fact-checking process to the final narrative, based on the criteria listed<br \/>\n        below. 
The results are intended to help you assess the credibility of the piece and highlight any areas that may<br \/>\n        warrant further investigation.<\/p>\n<h3 class=\"mt-3 mb-1 font-semibold text-base\">Freshness check<\/h3>\n<p class=\"text-sm pt-0\"><span class=\"font-bold\">Score:<br \/>\n        <\/span>8<\/p>\n<p class=\"text-sm pt-0\"><span class=\"font-bold\">Notes:<br \/>\n        <\/span>The narrative presents recent developments, including the 2024 European Union AI Act and the 2021 UNESCO Recommendation on the Ethics of Artificial Intelligence. The earliest known publication date of similar content is 2021, with the most recent being 2024. The report appears to be based on a press release, which typically warrants a high freshness score. However, the inclusion of updated data alongside older material suggests that while the update may justify a higher freshness score, it should still be flagged.<\/p>\n<h3 class=\"mt-3 mb-1 font-semibold text-base\">Quotes check<\/h3>\n<p class=\"text-sm pt-0\"><span class=\"font-bold\">Score:<br \/>\n        <\/span>7<\/p>\n<p class=\"text-sm pt-0\"><span class=\"font-bold\">Notes:<br \/>\n        <\/span>The report includes direct quotes from the 2021 UNESCO Recommendation on the Ethics of Artificial Intelligence. The earliest known usage of these quotes is from 2021. The identical wording in earlier material indicates potential reuse of content. Variations in quote wording are not present.<\/p>\n<h3 class=\"mt-3 mb-1 font-semibold text-base\">Source reliability<\/h3>\n<p class=\"text-sm pt-0\"><span class=\"font-bold\">Score:<br \/>\n        <\/span>6<\/p>\n<p class=\"text-sm pt-0\"><span class=\"font-bold\">Notes:<br \/>\n        <\/span>The narrative originates from The Morning, a news outlet based in Sri Lanka. While it is a reputable organisation, its focus on local news may limit its international reach and recognition. 
The report mentions the 2024 European Union AI Act and the 2021 UNESCO Recommendation on the Ethics of Artificial Intelligence, both of which are verifiable and reputable sources.<\/p>\n<h3 class=\"mt-3 mb-1 font-semibold text-base\">Plausibility check<\/h3>\n<p class=\"text-sm pt-0\"><span class=\"font-bold\">Score:<br \/>\n        <\/span>8<\/p>\n<p class=\"text-sm pt-0\"><span class=\"font-bold\">Notes:<br \/>\n    <\/span>The claims made in the report are plausible and align with known developments in AI governance, such as the 2024 European Union AI Act and the 2021 UNESCO Recommendation on the Ethics of Artificial Intelligence. The report also references the 2016 Cambridge Analytica scandal and the 2018 anti-Muslim riots in Sri Lanka, both of which are well-documented events. The language and tone are consistent with the region and topic. There is no excessive or off-topic detail unrelated to the claim. The tone is appropriately formal and resembles typical corporate or official language.<\/p>\n<h3 class=\"mt-3 mb-1 font-semibold text-base\">Overall assessment<\/h3>\n<p class=\"text-sm pt-0\"><span class=\"font-bold\">Verdict<\/span> (FAIL, OPEN, PASS): <span class=\"font-bold\">PASS<\/span><\/p>\n<p class=\"text-sm pt-0\"><span class=\"font-bold\">Confidence<\/span> (LOW, MEDIUM, HIGH): <span class=\"font-bold\">MEDIUM<\/span><\/p>\n<p class=\"text-sm mb-3 pt-0\"><span class=\"font-bold\">Summary:<br \/>\n        <\/span>The report presents a coherent and plausible narrative on AI&#8217;s human-centric perspective, referencing recent developments and verifiable events. While the freshness score is slightly reduced due to the inclusion of older material, the overall assessment is positive. 
The source&#8217;s reliability is moderate, and the plausibility of the claims is high.<\/p>\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>A nuanced exploration reveals that AI is a human-centric technology, shaped by data, bias, and regulation, challenging notions of autonomous machines and emphasising responsible oversight. The term \u2018artificial intelligence\u2019 often conjures images of detached, autonomous machines operating independently from human society. Yet, a closer examination reveals that AI is far from an alien or independent<\/p>\n","protected":false},"author":1,"featured_media":13286,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[40],"tags":[],"class_list":{"0":"post-13285","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-london-news"},"amp_enabled":true,"_links":{"self":[{"href":"https:\/\/sawahsolutions.com\/lap\/wp-json\/wp\/v2\/posts\/13285","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/sawahsolutions.com\/lap\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/sawahsolutions.com\/lap\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/sawahsolutions.com\/lap\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/sawahsolutions.com\/lap\/wp-json\/wp\/v2\/comments?post=13285"}],"version-history":[{"count":1,"href":"https:\/\/sawahsolutions.com\/lap\/wp-json\/wp\/v2\/posts\/13285\/revisions"}],"predecessor-version":[{"id":13287,"href":"https:\/\/sawahsolutions.com\/lap\/wp-json\/wp\/v2\/posts\/13285\/revisions\/13287"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/sawahsolutions.com\/lap\/wp-json\/wp\/v2\/media\/13286"}],"wp:attachment":[{"href":"https:\/\/sawahsolutions.com\/lap\/wp-json\/wp\/v2\/media?parent=13285"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/sawahsolutions.com
\/lap\/wp-json\/wp\/v2\/categories?post=13285"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/sawahsolutions.com\/lap\/wp-json\/wp\/v2\/tags?post=13285"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}