OpenAI introduces Lockdown Mode for ChatGPT, a security setting designed to limit data exfiltration and mitigate prompt injection risks, aimed at organisations facing heightened threats.
OpenAI has rolled out an optional security setting for ChatGPT called Lockdown Mode, a configuration intended to reduce the risk that highly exposed users will have sensitive information exfiltrated via malicious prompts or network-connected workflows. According to OpenAI, the mode is aimed at people and teams who face heightened threats, such as senior executives and security personnel, and is offered across enterprise and sector-specific deployments. (Sources: OpenAI announcement, help centre).
When activated by workspace administrators through role-based controls, Lockdown Mode curtails ChatGPT’s ability to make outbound network requests and access live internet content. OpenAI says web browsing is limited to cached material, and features that normally rely on external connectivity (including image generation outputs, Deep Research, Agent Mode, Canvas networking and automated file downloads) are turned off, although users may still open files they upload manually. The company cautions that Lockdown Mode does not stop malicious instructions from appearing inside content but does prevent those instructions from triggering network actions that could leak data. (Sources: OpenAI announcement, product help article).
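The gating described above can be illustrated conceptually. The sketch below is entirely hypothetical and does not reflect OpenAI’s actual implementation; all names (`LockdownPolicy`, `is_action_allowed`, the action strings) are illustrative. It shows the general pattern reported: when the lockdown flag is on, actions that reach the network are denied while local actions, such as reading a manually uploaded file, remain available.

```python
# Hypothetical sketch of a lockdown-style policy gate for agent actions.
# None of these names correspond to OpenAI's real implementation.
from dataclasses import dataclass

# Actions that make outbound network requests and could exfiltrate data.
NETWORK_ACTIONS = {"live_browse", "image_generation", "deep_research",
                   "agent_mode", "canvas_networking", "auto_file_download"}

# Actions that stay local to the session.
LOCAL_ACTIONS = {"read_uploaded_file", "cached_browse", "chat_reply"}

@dataclass
class LockdownPolicy:
    enabled: bool = False

    def is_action_allowed(self, action: str) -> bool:
        """When lockdown is on, deny any network-reaching action;
        local actions remain available."""
        if not self.enabled:
            return True
        return action in LOCAL_ACTIONS

policy = LockdownPolicy(enabled=True)
print(policy.is_action_allowed("read_uploaded_file"))  # True
print(policy.is_action_allowed("deep_research"))       # False
```

The key design point mirrored here is that the policy blocks the *action*, not the malicious text itself: a prompt-injected instruction can still appear in content, but it cannot trigger an outbound request.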
Lockdown Mode does not automatically sever all third-party app integrations; instead, administrators retain control over connected apps. OpenAI has described a risk-based approach to app actions and warns that operations which produce visible, write-style outcomes typically carry greater exposure. As part of the broader rollout, the company is adding “Elevated Risk” labels inside ChatGPT to flag features or workflows that require extra scrutiny (for example, capabilities that grant network access to developer tooling) and says those labels will be updated as mitigations evolve. (Sources: OpenAI announcement, product announcement on elevated risk labels).
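The risk-based framing can be sketched as a simple classifier. This is a hypothetical illustration of the reported principle only, not OpenAI’s taxonomy: the function name, inputs and labels are all assumptions made for the example.

```python
# Hypothetical sketch of risk-based labelling for connected-app actions.
# Per the article's framing, write-style outcomes and network access
# typically carry greater exposure; the label names are illustrative.

def risk_label(action: str, has_network_access: bool, is_write: bool) -> str:
    """Assign a coarse risk label to an app action."""
    if is_write and has_network_access:
        return "elevated"   # e.g. sending data to an external service
    if is_write or has_network_access:
        return "moderate"   # one risk factor present
    return "low"            # read-only, local

print(risk_label("send_email", has_network_access=True, is_write=True))      # elevated
print(risk_label("read_calendar", has_network_access=True, is_write=False))  # moderate
print(risk_label("format_text", has_network_access=False, is_write=False))   # low
```

In practice such a label would drive the extra scrutiny the article describes, for instance surfacing a warning in the interface before an elevated-risk action runs.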
The introduction of Lockdown Mode sits within a broader OpenAI programme to defend against prompt injection attacks. The firm describes ongoing red-teaming, bug-bounty work and engineered controls such as sandboxing, confirmations before consequential actions and agent modes designed to limit unintended behaviour. OpenAI has emphasised user education and operational best practice as complementary measures for organisations that must balance productivity with risk reduction. (Sources: OpenAI safety pages, hardening Atlas write-up).
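One of the engineered controls named above, confirmations before consequential actions, follows a common pattern that can be sketched generically. Again this is a hypothetical illustration, not OpenAI’s code; the function and callback names are assumptions.

```python
# Hypothetical sketch: require explicit confirmation before a
# consequential (write-style) agent action is allowed to proceed.
from typing import Callable

def confirm_gate(action_name: str, consequential: bool,
                 confirm: Callable[[str], bool]) -> bool:
    """Return True if the action may run. Non-consequential actions
    pass through; consequential ones run only if the confirm
    callback (e.g. a user prompt) approves them."""
    if not consequential:
        return True
    return confirm(action_name)

# A conservative default: deny everything that needs confirmation.
def deny_all(name: str) -> bool:
    return False

print(confirm_gate("summarise_page", consequential=False, confirm=deny_all))  # True
print(confirm_gate("delete_files", consequential=True, confirm=deny_all))     # False
```

The value of the pattern is that a prompt-injected instruction cannot silently perform a consequential action; a human decision sits between the model’s intent and the effect.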
Not everyone believes the measures remove all danger. Independent researchers have already demonstrated severe vulnerabilities in agentic browsing tools that process external web content, with reports that certain implementations can allow malicious payloads to persist in memory or enable phishing-style exploits. IT Pro summarised research showing that weaknesses in an agentic browser variant can lead to remote code execution and cross-session persistence, underscoring why some organisations will prefer conservative configurations such as Lockdown Mode while mitigations mature. (Sources: IT Pro coverage of security research, OpenAI hardening efforts).
OpenAI says the feature will reach consumer-facing versions of ChatGPT in coming months while the company continues to refine risk labels and protective controls. For organisations evaluating the setting, OpenAI’s guidance recommends combining Lockdown Mode with administrative app restrictions, careful review of agent actions and established incident-response processes to reduce the chance that prompt-based attacks produce harmful outcomes. (Sources: OpenAI announcement, help centre, safety guidance).
Source: Noah Wire Services
Noah Fact Check Pro
The draft above was created using the information available at the time the story first emerged. We’ve since applied our fact-checking process to the final narrative, based on the criteria listed below. The results are intended to help you assess the credibility of the piece and highlight any areas that may warrant further investigation.
Freshness check
Score: 10
Notes: The article was published on 16 February 2026, which is within the past week, indicating high freshness. The content appears original, with no evidence of being recycled or republished from other sources. The narrative is based on OpenAI’s recent announcement, which is a primary source, enhancing its originality. No discrepancies in figures, dates, or quotes were found.
Quotes check
Score: 10
Notes: The article includes direct quotes from OpenAI’s official announcement and help centre, which are verifiable and directly sourced from OpenAI’s publications. No variations or inconsistencies in the quotes were found, confirming their authenticity.
Source reliability
Score: 8
Notes: The primary sources are OpenAI’s official website and help centre, which are reputable and authoritative. However, the article also references IT Pro, a third-party publication, which may introduce potential biases or inaccuracies. The reliance on a single third-party source slightly reduces the overall reliability.
Plausibility check
Score: 9
Notes: The claims about OpenAI introducing ‘Lockdown Mode’ in ChatGPT align with known industry trends towards enhancing AI security. The article provides specific details about the features and intended user base, which are consistent with OpenAI’s known initiatives. No implausible claims were identified.
Overall assessment
Verdict (FAIL, OPEN, PASS): PASS
Confidence (LOW, MEDIUM, HIGH): HIGH
Summary: The article is recent, original, and based on verifiable quotes from reputable sources. The claims are plausible and supported by accessible sources without paywalls. The content type is appropriate, and the verification sources are mostly independent, with a minor reliance on a third-party publication. Overall, the article meets the verification standards with high confidence.
