Three YouTube content creators have filed a proposed class-action lawsuit against Apple, alleging the tech giant scraped their videos without consent to train its artificial intelligence systems, raising legal questions over data use and copyright protections in AI development. According to legal filings reported by 9to5Mac and other outlets, the complaint claims Apple downloaded and used creators’ clips for research and model development without permission, payment or attribution. [2],[3] The plaintiffs include well-known…
Client Brief
The client wanted a mix of London news aimed at commuter readers. The goal was to deliver timely, engaging, and location-relevant stories that resonate with busy professionals on the go. Content had to maintain journalistic quality while being optimised for mobile consumption and AI-driven syndication via NoahWire’s advanced article generation platform.
London News
Rapid adoption of artificial intelligence across Nigerian companies is raising concerns over governance gaps, with regulators and industry experts calling for urgent oversight measures to mitigate operational, legal, and reputational risks amid accelerating technological integration. In boardrooms across Nigeria, talk of “transformation” has become routine while the concrete implications of…
As agentic AI systems gain autonomy across business and personal tasks, industry experts highlight practical deployment challenges, security risks, and the evolving regulatory landscape at a Shanghai forum. The rise of agentic AI is shifting the tech landscape from laboratory experiments to systems that can act autonomously across business and…
The United States v. Heppner decision highlights how courts are applying traditional confidentiality doctrines to generative AI interactions, prompting legal practitioners to reassess privacy and discovery protocols amid technological advances. Courts are beginning to confront how generative artificial intelligence intersects with long‑standing confidentiality doctrines, a dynamic brought into sharp relief…
The Upper Grand District School Board is set to permit selected generative artificial intelligence tools for student use, emphasizing AI literacy, responsible integration, and safeguarding human rights amidst ongoing concerns about bias and privacy. Certain generative artificial intelligence tools will be permitted for student use across the Upper Grand District…
The House of Lords Communications and Digital Committee calls for a licensing-first approach to protect UK creators from uncredited use of their works in AI training, positioning the UK as a leader in responsible AI development amid mounting industry concerns. The House of Lords Communications and Digital Committee published a…
Apple encounters twin lawsuits from developers and YouTube creators over AI app takedowns and unauthorised data scraping, spotlighting industry tensions around content rights and AI training practices. Apple has been drawn into a legal crossfire over its use and moderation of artificial intelligence, facing two separate lawsuits that highlight competing…
Supercharge Your Content Strategy
Test this content on your social media channels to see how it performs with your community. Discover how AI-powered content can elevate your brand across social media and digital platforms. Try it risk-free and see the impact on your audience engagement.
The clash between Anthropic and the Pentagon highlights the emerging battle over AI’s role in military and domestic surveillance, raising urgent legal and ethical questions about privacy and government power in the age of advanced artificial intelligence. The Pentagon’s recent clash with Anthropic, the maker of the Claude chatbot, has…
Anthropic’s refusal to permit its AI models for autonomous weapons and mass surveillance has led the US Department of Defense to designate it a ‘supply chain risk’, sparking legal battles, industry fallout, and a broader debate over ethical AI use in national security. Anthropic, once a relatively low-profile contender in…
Labour groups representing 700,000 US technology workers have called on Amazon, Google, and Microsoft to resist military and surveillance demands to weaken AI safety measures, amid growing internal dissent over defence-related AI projects. Technology-sector labour groups and worker organisations representing roughly 700,000 employees across the United States have urged senior…
A high-stakes confrontation between AI firms and US defence officials highlights the growing tension over ethical boundaries and military access, risking broader implications for AI regulation and innovation. The sudden clash between the Pentagon and one of the fastest‑rising AI labs has laid bare a widening rift over how far…
A tragic incident in India has ignited renewed scrutiny of social media’s impact on adolescent mental health, amid concerns over addictive platform features and regulatory gaps. Experts warn that layered interventions involving technology firms, families, and policymakers are essential to address growing risks of self-harm and depression among young people.…
A diverse coalition has introduced a detailed framework advocating for stricter oversight, safety measures, and human control in AI development, signalling a significant shift in the global governance approach amid mounting concerns over AI risks. A broad coalition of former officials, technical experts and public figures has published a detailed…
Get in Touch
Looking for tailored content like this?
Whether you’re targeting a local audience or scaling content production with AI, our team can deliver high-quality, automated news and articles designed to match your goals. Get in touch to explore how we can help.
