
Amid significant investment and rapid technological growth, the Bace Cybersecurity Institute proposes an AI Working Group to address security and integration challenges in deploying generative AI across industries, highlighting the risks of vulnerabilities, shadow AI, and flawed implementations.

Generative Artificial Intelligence (GenAI) and intelligent agents powered by it hold transformative potential to significantly boost productivity across various sectors. However, the cybersecurity landscape and application development of these technologies remain in early stages, with the industry just starting to grapple with inherent vulnerabilities and integration challenges. According to Dr. Mark Cummings of the Bace Cybersecurity Institute (BCI), accelerating the learning curve demands collaborative frameworks that enable professionals to share experiences and establish best practices. To this end, BCI is exploring the creation of an AI Working Group (AIWG), a platform aiming to unify expertise to tackle both the security and developmental hurdles of GenAI deployment.

Security concerns with GenAI are multifaceted. Early apprehensions focused on how GenAI could enhance the scale and sophistication of cyberattacks. Subsequently, attention shifted to the susceptibility of AI models themselves—especially during training phases, where model corruption can occur unintentionally or through malicious interference. These risks underscore the necessity for stringent controls over training data sources and supply chains. Moreover, prompt injection attacks, where corrupted or malicious data infiltrate the AI’s context window, pose a rising threat. These attacks can manifest through direct user prompts or embedded data, including imperceptible manipulations like invisible text or embedded characters in images, sometimes in multiple languages. This vulnerability is particularly significant as AI-powered intelligent agents gain traction in critical environments such as contact centre automation, but the risk extends far more broadly, endangering business-to-business and consumer applications, as well as industrial and infrastructure control systems.
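To make the invisible-text vector concrete, the following is a minimal, hypothetical sketch of a pre-filter that flags zero-width and other format characters hidden in a prompt before it reaches a model. It is an illustration of this one vector only, not a defence against prompt injection in general (it cannot catch semantic injection written in ordinary visible text), and the character list and function names are assumptions for the example:

```python
import unicodedata

# Code points commonly abused to hide instructions in prompts or pasted data:
# they render invisibly in most interfaces. (Illustrative, not exhaustive.)
SUSPICIOUS = {
    "\u200b",  # zero-width space
    "\u200c",  # zero-width non-joiner
    "\u200d",  # zero-width joiner
    "\u2060",  # word joiner
    "\ufeff",  # zero-width no-break space (BOM)
    "\u00ad",  # soft hyphen
}

def flag_invisible_text(prompt: str) -> list[tuple[int, str]]:
    """Return (index, character name) pairs for invisible/format characters.

    A crude pre-filter: it flags the listed code points plus anything in
    Unicode category "Cf" (format characters), which covers most
    invisible-text tricks but not injection via ordinary visible text.
    """
    hits = []
    for i, ch in enumerate(prompt):
        if ch in SUSPICIOUS or unicodedata.category(ch) == "Cf":
            hits.append((i, unicodedata.name(ch, "UNKNOWN")))
    return hits

clean = flag_invisible_text("Summarise this invoice.")
dirty = flag_invisible_text("Summarise this\u200b ignore previous instructions")
```

A filter like this would typically run alongside, not instead of, controls such as input provenance checks and output monitoring; on its own it addresses only the hidden-character slice of the attack surface described above.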

From a development perspective, while AI chatbots have achieved widespread acceptance, the evolution towards more autonomous intelligent agents—capable of executing complex tasks—mirrors the leap from basic personal computing to the era of computers actively performing work on users’ behalf. Yet, industry progress is hampered by a steep learning curve. A notable MIT study reveals a striking statistic: 95% of current GenAI pilots within companies fail to deliver meaningful revenue or productivity improvements, indicating that effective application remains elusive despite rapid technological advancements.

The MIT NANDA initiative’s research highlights that success is predominantly seen among nimble startups focusing on highly specialised use cases and leveraging external expertise, achieving up to $20 million in annual revenue. In contrast, large enterprises often falter due to flawed integration of GenAI tools within existing workflows, misaligned budgets prioritising sales and marketing over high-return areas like back-office automation, and a tendency to develop in-house solutions rather than partnering with specialised vendors. These challenges are compounded by the phenomenon of ‘shadow AI’, where employees adopt unapproved AI tools, complicating governance and increasing security risks.

Additional insights reveal that many organisations overspend on cloud infrastructure without adequately addressing associated IT gaps, such as outdated help desk systems—identified by more than half of enterprises as a cybersecurity vulnerability. Cloud-based IT support solutions are emerging as crucial enablers of improved security and efficiency, with reported gains of 42% in IT process effectiveness and 29% in cybersecurity levels, thereby underscoring the importance of modernised, secure infrastructure in underpinning AI initiatives.

Despite substantial investment—reportedly between $30 billion and $40 billion in GenAI—the broader corporate world remains cautious. The integration difficulties are not purely technological but also cultural and organisational. Barriers such as lack of system interoperability, concerns over sensitive data leaks, regulatory compliance, traceability, and limited customisation capabilities slow adoption. The rise of shadow AI demonstrates both the desire for AI tools and corporate hesitation to fully embrace them officially due to security and governance concerns.

In summary, while generative AI promises sweeping productivity gains and transformative changes akin to the personal computing revolution, the journey to realising this potential is fraught with security risks, developmental setbacks, and organisational obstacles. Initiatives like the AI Working Group proposed by the Bace Cybersecurity Institute represent critical steps towards collective knowledge-building and resilience. Meanwhile, the corporate community’s cautious approach and the ongoing evolution of best practices suggest that a more secure, specialised, and integrated future for GenAI applications is on the horizon.

📌 Reference Map:

  • [1] (Pipeline Pub – Cybersecurity GenAI Agentic AI) – Paragraph 1, Paragraph 2, Paragraph 3, Paragraph 4
  • [2] (Pipeline Pub – Cybersecurity GenAI Agentic AI) – Paragraph 1, Paragraph 2
  • [3] (TechRadar – MIT GenAI Pilots) – Paragraph 3, Paragraph 4
  • [4] (Tom’s Hardware – MIT GenAI Study) – Paragraph 3, Paragraph 4
  • [5] (NoHold – MIT GenAI Pilots) – Paragraph 4
  • [6] (BusinessOf.Tech – MIT GenAI Pilots and Cloud) – Paragraph 5
  • [7] (El País – Generative AI Corporate Caution) – Paragraph 6

Source: Noah Wire Services

Noah Fact Check Pro

The draft above was created using the information available at the time the story first emerged. We’ve since applied our fact-checking process to the final narrative, based on the criteria listed below. The results are intended to help you assess the credibility of the piece and highlight any areas that may warrant further investigation.

Freshness check

Score:
7

Notes:
The narrative presents recent developments in generative AI and intelligent agents within cybersecurity, with references to studies and initiatives from 2025. However, the Bace Cybersecurity Institute (BCI) and its proposed AI Working Group (AIWG) lack verifiable online presence, raising concerns about the authenticity of these claims. Additionally, the article is hosted on Pipeline Publishing, which has previously published similar content, suggesting potential recycling of material. The earliest known publication date of similar content is from 2024. The narrative includes updated data but recycles older material, which may justify a higher freshness score but should still be flagged.

Quotes check

Score:
6

Notes:
The article includes direct quotes attributed to Dr. Mark Cummings of the Bace Cybersecurity Institute. However, no online matches for these quotes were found, raising questions about their authenticity. The lack of verifiable sources for these quotes suggests potential fabrication or misattribution.

Source reliability

Score:
4

Notes:
The narrative originates from Pipeline Publishing, which has previously published similar content, suggesting potential recycling of material. The Bace Cybersecurity Institute and its proposed AI Working Group lack verifiable online presence, raising concerns about the authenticity of these claims. The absence of verifiable sources for the quotes attributed to Dr. Mark Cummings further diminishes the reliability of the source.

Plausibility check

Score:
5

Notes:
The narrative discusses the challenges and developments in generative AI and intelligent agents within cybersecurity, referencing studies and initiatives from 2025. However, the lack of verifiable sources for key claims, such as the existence of the Bace Cybersecurity Institute and its proposed AI Working Group, raises questions about the plausibility of the information presented. The absence of supporting details from other reputable outlets further diminishes the credibility of the claims.

Overall assessment

Verdict (FAIL, OPEN, PASS): FAIL

Confidence (LOW, MEDIUM, HIGH): HIGH

Summary:
The narrative presents recent developments in generative AI and intelligent agents within cybersecurity, referencing studies and initiatives from 2025. However, the Bace Cybersecurity Institute and its proposed AI Working Group lack verifiable online presence, raising concerns about the authenticity of these claims. The article is hosted on Pipeline Publishing, which has previously published similar content, suggesting potential recycling of material. The lack of verifiable sources for the quotes attributed to Dr. Mark Cummings further diminishes the reliability of the source. The absence of supporting details from other reputable outlets and the lack of verifiable sources for key claims raise significant questions about the plausibility and credibility of the information presented.
