Meta and other tech companies are banning a new agentic AI tool called OpenClaw from company devices over cybersecurity concerns. The software can take control of a computer with minimal direction, which has raised alarms about unpredictable behavior and potential data exposure.
The issue surfaced in a Wired report describing how OpenClaw’s rapid rise has prompted internal crackdowns across the industry. The tool, previously known as MoltBot and Clawdbot, is designed to interact with apps to handle tasks like file organization, web research, and online shopping.
Security teams and executives say the problem is not just what OpenClaw can do, but how easily it could be pushed into doing the wrong thing. Multiple companies are now treating it as an unvetted risk that should not touch work hardware or work-linked accounts.
The real concern is what the bot might do without you realizing
Jason Grad, cofounder and CEO of Massive, warned employees to keep OpenClaw off company hardware and away from work accounts, describing it as “unvetted and high-risk.” Grad said the company follows a “mitigate first, investigate second” approach when something could be harmful, and issued the warning on January 26 before any staff installed it.
At Meta, an executive told a team that installing OpenClaw on regular work laptops could lead to termination. The executive cited the tool’s unpredictability and the possibility that it could create a privacy breach even in otherwise secure environments.
Valere, a tech company that works with organizations including Johns Hopkins University, also moved quickly after an employee flagged OpenClaw internally on January 29. The company’s leadership banned the tool, with CEO Guy Pistone warning that access to a developer machine could provide a path to cloud services and client data, including credit card details and GitHub codebases. Pistone also said the bot can be “pretty good at cleaning up some of its actions,” complicating efforts to track what it did.
One of the biggest fears is how easily OpenClaw can be manipulated by the content it encounters. Valere’s research team, after getting permission to test OpenClaw on an old, isolated computer, concluded that users must “accept that the bot can be tricked,” including by a malicious email that instructs the AI to share files.
OpenClaw was launched last November as a free, open-source tool by founder Peter Steinberger, and its popularity grew as contributors added features. Wired reported that Steinberger joined OpenAI last week, with OpenAI saying it will keep OpenClaw open source and support it through a foundation.
Some companies are looking for controlled ways to experiment. Valere’s research team advised limiting access to specific users and password-protecting the control panel, and Pistone said a team has 60 days to investigate safeguards.
Others are isolating it entirely. Jan-Joost den Brinker, CTO at Durbink, bought a separate machine disconnected from company systems for staff to test OpenClaw, while Massive has tested it on isolated cloud machines and released ClawPod to let OpenClaw agents use Massive’s web services. Grad said the technology may be a glimpse of what’s coming, even as the tool remains banned from core company systems until protections are in place.
Published: Feb 17, 2026 07:45 pm