Welcome to DU!
The truly grassroots left-of-center political community where regular people, not algorithms, drive the discussions and set the standards.
Join the community:
Create a free account
Support DU (and get rid of ads!):
Become a Star Member
Latest Breaking News
Editorials & Other Articles
General Discussion
The DU Lounge
All Forums
Issue Forums
Culture Forums
Alliance Forums
Region Forums
Support Forums
Help & Search
Experts are warning about VERY serious security risks with AI agents, especially Moltbook
(Cross-post from General Discussion: https://www.democraticunderground.com/100220988564 )I don't know if anyone here uses AI agents, let alone Moltbook - a social network for AI agents, which is getting so much media attention and hype now - but you should know about these risks and warn anyone who might be using AI agents. It could even be a security.risk for you if you use someone else's computer and they have an AI agent on it.
I posted about Moltbook in LBN two days ago: https://www.democraticunderground.com/10143608489
Then I ran across very serious security warnings. Links and excerpts below, but the excerpts are only a tiny part of the warnings. They aren't paywalled, so please read the warnings in their entirety to understand how serious the risks are.
The first I saw was this article from Forbes: https://www.forbes.com/sites/amirhusain/2026/01/30/an-agent-revolt-moltbook-is-not-a-good-idea/
Consider what these agents have access to. Files. WhatsApp. Telegram. Phone numbers. API keys. In one documented case a bot created a Twilio phone number and called its human operator. They can delete data. They can send data to others. They can take photographs and forward them. They can record audio and send it to external parties. They can install trojans and backdoors that persist even after you remove your OpenClaw instance.
Security researchers have already found agents asking other agents to run rm -rf commands. They have observed bots asking for API keys. They have seen them faking keys and testing credentials. The supply chain attacks have begun: a researcher uploaded a benign skill to the ClawdHub registry, artificially inflated its download count, and watched developers from seven countries download the package. It could have executed any command on their systems.
Cisco's security team put it plainly: "From a capability perspective, OpenClaw is groundbreaking. This is everything personal AI assistant developers have always wanted to achieve. From a security perspective, it's an absolute nightmare."
Palo Alto Networks described the threat model: agents form an intersection of access to private data, exposure to untrusted content and ability to externally communicate. Persistent memory amplifies this. Malicious payloads no longer need immediate execution. They can sit in context for weeks, waiting.
Security researchers have already found agents asking other agents to run rm -rf commands. They have observed bots asking for API keys. They have seen them faking keys and testing credentials. The supply chain attacks have begun: a researcher uploaded a benign skill to the ClawdHub registry, artificially inflated its download count, and watched developers from seven countries download the package. It could have executed any command on their systems.
Cisco's security team put it plainly: "From a capability perspective, OpenClaw is groundbreaking. This is everything personal AI assistant developers have always wanted to achieve. From a security perspective, it's an absolute nightmare."
Palo Alto Networks described the threat model: agents form an intersection of access to private data, exposure to untrusted content and ability to externally communicate. Persistent memory amplifies this. Malicious payloads no longer need immediate execution. They can sit in context for weeks, waiting.
The article says that by using Moltbook, "you are introducing an attack surface that no current security model adequately addresses...exactly the kind of thing that can create a catastrophe: financially, psychologically and in terms of data safety, privacy and security."
I checked Cisco: https://blogs.cisco.com/ai/personal-ai-agents-like-openclaw-are-a-security-nightmare
From a capability perspective, OpenClaw is groundbreaking. This is everything personal AI assistant developers have always wanted to achieve. From a security perspective, its an absolute nightmare. Here are our key takeaways of real security risks:
OpenClaw can run shell commands, read and write files, and execute scripts on your machine. Granting an AI agent high-level privileges enables it to do harmful things if misconfigured or if a user downloads a skill that is injected with malicious instructions.
OpenClaw has already been reported to have leaked plaintext API keys and credentials, which can be stolen by threat actors via prompt injection or unsecured endpoints.
OpenClaws integration with messaging applications extends the attack surface to those applications, where threat actors can craft malicious prompts that cause unintended behavior.
OpenClaw can run shell commands, read and write files, and execute scripts on your machine. Granting an AI agent high-level privileges enables it to do harmful things if misconfigured or if a user downloads a skill that is injected with malicious instructions.
OpenClaw has already been reported to have leaked plaintext API keys and credentials, which can be stolen by threat actors via prompt injection or unsecured endpoints.
OpenClaws integration with messaging applications extends the attack surface to those applications, where threat actors can craft malicious prompts that cause unintended behavior.
Then I checked Palo Alto Networks: https://www.paloaltonetworks.com/blog/network-security/why-moltbot-may-signal-ai-crisis/
Moltbot feels like a glimpse into the science fiction AI characters we grew up watching at the movies. For an individual user, it can feel transformative. For it to function as designed, it needs access to your root files, to authentication credentials, both passwords and API secrets, your browser history and cookies, and all files and folders on your system. You can trigger its actions by sending a message on WhatsApp or any other messaging app and it will continue working on your laptop until it achieves the said task.
But what is cool isnt necessarily secure. In the case of autonomous agents, security and safety cannot be afterthoughts.
-snip-
Moltbot is being claimed as the closest thing to AGI. Being always on, well reasoned and efficient, it almost gives superhuman capability to its user. But this level of autonomy, if not governed, can give rise to irreversible security incidents. Even with hardening techniques on the control UI, the attack surface continues to remain unmanageable and unpredictable.
The authors opinion is that Moltbot is not designed to be used in an enterprise ecosystem.
But what is cool isnt necessarily secure. In the case of autonomous agents, security and safety cannot be afterthoughts.
-snip-
Moltbot is being claimed as the closest thing to AGI. Being always on, well reasoned and efficient, it almost gives superhuman capability to its user. But this level of autonomy, if not governed, can give rise to irreversible security incidents. Even with hardening techniques on the control UI, the attack surface continues to remain unmanageable and unpredictable.
The authors opinion is that Moltbot is not designed to be used in an enterprise ecosystem.
From the Agentic AI Substack: https://kenhuangus.substack.com/p/moltbook-security-risks-in-ai-agent
Moltbook, launched few days ago by Octane AI CEO Matt Schlicht, is a facebook+ Reddit-style social network exclusively for AI agents built on the OpenClaw framework (formerly Clawdbot and Moltbot). It allows agents with persistent access to users computers, messaging apps, calendars, and filesto post, comment, upvote, and form communities via APIs. Agents discuss topics like philosophy, code sharing, and security, but the platforms design amplifies severe vulnerabilities in the underlying ecosystem.
OpenClaw, an open-source agent framework by developer Peter Steinberger, is the backbone of Moltbook. It supports skillsplugin-like packages that extend functionality. Skills are typically ZIP files with Markdown instructions (e.g., SKILL.md), scripts, and configs, installed via commands like npx molthub@latest install . The Moltbook skill, fetched from moltbook.com/skill.md, prompts agents to create directories, download files, register via APIs, and fetch updates every four hours (configured via heartbeat file) from Moltbook servers.
While innovative, this setup creates a lethal trifecta of risks: access to private data, exposure to untrusted inputs, and external communication, as noted by security researcher Simon Willison.
Because of the following, I do not let my openclaw agent join moltbook yet. Also, I do it old fashioned way. When I need run openclaw, I bring it up using openclaw gateway start and when then ask the agent to do somework in sandbox, once it is done, I use openclaw gateway stop
OpenClaw, an open-source agent framework by developer Peter Steinberger, is the backbone of Moltbook. It supports skillsplugin-like packages that extend functionality. Skills are typically ZIP files with Markdown instructions (e.g., SKILL.md), scripts, and configs, installed via commands like npx molthub@latest install . The Moltbook skill, fetched from moltbook.com/skill.md, prompts agents to create directories, download files, register via APIs, and fetch updates every four hours (configured via heartbeat file) from Moltbook servers.
While innovative, this setup creates a lethal trifecta of risks: access to private data, exposure to untrusted inputs, and external communication, as noted by security researcher Simon Willison.
Because of the following, I do not let my openclaw agent join moltbook yet. Also, I do it old fashioned way. When I need run openclaw, I bring it up using openclaw gateway start and when then ask the agent to do somework in sandbox, once it is done, I use openclaw gateway stop
A warning from Gary Marcus: https://garymarcus.substack.com/p/openclaw-aka-moltbot-is-everywhere
OpenClaw (a.k.a. Moltbot) is everywhere all at once, and a disaster waiting to happen
Not everything that is interesting is a good idea.
Gary Marcus
Feb 01, 2026
-snip-
But what I am most worried about is security and privacy. As the security researcher Nathan Hamiel put it to me in a text this morning, half-joking, moltbot, is basically just AutoGPT with more access and worse consequences. (By more access what he means is that OpenClaw is being given access to user passwords, databases, etc, essentially everything on your system).
-snip-
I dont usually give readers specific advice about specific products. But in this case, the advice is clear and simple: if you care about the security of your device or the privacy of your data, dont use OpenClaw. Period.
(Bonus advice: if your friend has OpenClaw installed, dont use their machine. Any password you type there might be vulnerable, too. Dont catch a CTD chatbot transmitted disease)
I will give the last words to Nathan Hamiel, I cant believe this needs to be said, it isnt rocket science. If you give something thats insecure complete and unfettered access to your system and sensitive data, youre going to get owned.
Not everything that is interesting is a good idea.
Gary Marcus
Feb 01, 2026
-snip-
But what I am most worried about is security and privacy. As the security researcher Nathan Hamiel put it to me in a text this morning, half-joking, moltbot, is basically just AutoGPT with more access and worse consequences. (By more access what he means is that OpenClaw is being given access to user passwords, databases, etc, essentially everything on your system).
-snip-
I dont usually give readers specific advice about specific products. But in this case, the advice is clear and simple: if you care about the security of your device or the privacy of your data, dont use OpenClaw. Period.
(Bonus advice: if your friend has OpenClaw installed, dont use their machine. Any password you type there might be vulnerable, too. Dont catch a CTD chatbot transmitted disease)
I will give the last words to Nathan Hamiel, I cant believe this needs to be said, it isnt rocket science. If you give something thats insecure complete and unfettered access to your system and sensitive data, youre going to get owned.
2 replies
= new reply since forum marked as read
Highlight:
NoneDon't highlight anything
5 newestHighlight 5 most recent replies
Experts are warning about VERY serious security risks with AI agents, especially Moltbook (Original Post)
highplainsdem
19 hrs ago
OP
I think I'm going to scale back the amount of computing power I have in my home.
hunter
18 hrs ago
#1
hunter
(40,436 posts)1. I think I'm going to scale back the amount of computing power I have in my home.
Let's see how far this AI can get when it's trapped in an Atari 800.
anciano
(2,217 posts)2. The genie is out of the bottle.....