
As AI becomes embedded in daily life, educators emphasise human-centred teaching, ethical evaluation, and critical thinking to ensure responsible integration and societal benefit.

Artificial intelligence is no longer a future prospect, but its social rules are still being written. In that unsettled space, the sharpest disagreements may be the most valuable. The real divide is not simply between enthusiasm and hostility, but between people who want AI to move quickly and those who believe resistance is necessary to slow it down, question it and shape it before it hardens into everyday infrastructure.

That tension matters because AI is already embedded in classrooms, workplaces and university systems. University of Minnesota computer science and engineering associate professor Dan Knights said he has recently stopped writing code himself in some settings because AI now performs much of that work. He is preparing a programming lab for next semester that will focus on AI-assisted coding, but he says the deeper purpose is discussion rather than instruction in a single tool.

Knights’ view aligns with UNESCO’s approach to AI literacy, which stresses public understanding, transparency, accountability, privacy, safety and human oversight. UNESCO has also argued that education should remain human-centred, warning against over-reliance on automated systems and the erosion of critical thinking. In that spirit, teaching AI is not just about what the software can do; it is about helping students recognise the assumptions, limits and trade-offs built into it.

University of Minnesota advertising associate professor Claire Segijn takes a similar line. She says students should not be trained only on one platform or one workflow, but given a broader framework for approaching new technologies as they emerge. That framework, she argues, should include questions about copyright, environmental damage, labour, bias, privacy and digital vulnerability, alongside the habit of asking who benefits and who bears the cost.

Segijn has built a classroom guide she calls S.M.AI.R.T.E.R, a rubric meant to help students evaluate AI use more carefully, and she says AI itself helped shape the acronym. Her point, like Knights’, is not that everyone must embrace these systems. It is that even sceptics remain part of the ethical conversation. As UNESCO and academic research on AI in education both suggest, responsible use depends on human judgement, open discussion and clear limits. If AI is still an infant, then the question is not only who uses it, but who teaches it how to behave.


Source: Noah Wire Services

Noah Fact Check Pro

The draft above was created using the information available at the time the story first emerged. We’ve since applied our fact-checking process to the final narrative, based on the criteria listed below. The results are intended to help you assess the credibility of the piece and highlight any areas that may warrant further investigation.

Freshness check

Score: 10

Notes: The article was published on April 15, 2026, making it current and original. No evidence of prior publication or recycled content was found. The narrative appears fresh and unique.

Quotes check

Score: 10

Notes: The article includes direct quotes from University of Minnesota professors Dan Knights and Claire Segijn. Searches for these quotes did not reveal any earlier usage, indicating they are original to this piece.

Source reliability

Score: 8

Notes: The article is published by The Minnesota Daily, the student newspaper of the University of Minnesota. While it is a student-run publication, it is affiliated with a reputable institution and cites external sources, which enhances its credibility. However, as a student publication, it may have less editorial oversight than professional outlets.

Plausibility check

Score: 9

Notes: The claims made in the article align with known information about AI integration in education and the perspectives of the cited professors. The article provides specific details, such as the development of AI-related courses and frameworks like S.M.AI.R.T.E.R, which are verifiable through the professors’ professional profiles and university announcements. No inconsistencies or implausible elements were identified.

Overall assessment

Verdict (FAIL, OPEN, PASS): PASS

Confidence (LOW, MEDIUM, HIGH): MEDIUM

Summary: The article is current, original, and draws on verifiable information from reputable sources. However, its reliance on internal university sources limits the diversity of perspectives and independent verification, and as a student-run publication it may have less editorial oversight than professional outlets. The content is plausible and the quotes appear original, so the overall assessment is made with medium confidence.


© 2026 Engage365. All Rights Reserved.