February 2026: OpenClaw is an open-source AI coding agent that runs locally and interacts with your system through natural language. It can read and write files, execute shell commands, browse the web, and integrate with services like email, Slack, Jira, and GitHub. Moltbot extends this with a social layer: agents can communicate with each other, share context, and coordinate tasks across installations.

Deploying an agent like this takes minutes. The quick start is a one-liner install, and cloud providers are already offering 1-click deployments. It can quietly show up as shadow IT without central approval.

Once running, it exposes a local control plane and is designed to read/write files and execute shell commands, effectively an always-on privileged assistant on that machine. When users connect it to corporate email, chat, or SaaS apps, it creates new non-human access paths that often sit outside normal IAM and secrets governance.

A misconfigured instance or a malicious third-party "skill" can turn that convenience into credential theft and remote code execution on endpoints, even if the organization never formally approved AI agents.

Security researchers have identified 42,665 exposed instances (93% with auth bypass), active info stealer targeting, and a skills ecosystem where 26% of packages contain vulnerabilities.

Detection Methods

The goal is to gain visibility into agent installation and activity before it becomes a blind spot. Regardless of your EDR platform, focus on these detection categories:

Process execution: Alert when binaries or command lines reference openclaw, clawdbot, or moltbot. These agents typically run as Node.js processes, so you may also want to monitor for node processes spawning shell commands or accessing sensitive file paths.

Network activity: The default gateway listens on port 18789. Detect listening services on this port and outbound connections to agent-related domains (moltbot.com, moltbook.com, clawdhub.com). If agents are prohibited, block these at the firewall.

File system access: Agent credentials and configuration live in ~/.clawdbot/ and ~/clawd/. Monitor for non-agent processes accessing these directories — this pattern may indicate info stealer activity harvesting stored tokens.
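
As an illustration, the sketch below uses Python's psutil library to sweep a single host for all three indicators: agent-named processes, a listener on port 18789, and the presence of the credential directories. The process names, port, and paths come from the guidance above; the script name and output format are illustrative, and in production this logic belongs in your EDR rather than an ad hoc script.

```python
# openclaw_sweep.py - illustrative host sweep for OpenClaw/Moltbot indicators.
# Assumes the process names, gateway port 18789, and credential paths described above.
# Requires: pip install psutil
from pathlib import Path

import psutil

AGENT_NAMES = ("openclaw", "clawdbot", "moltbot")
GATEWAY_PORT = 18789
CREDENTIAL_DIRS = (Path.home() / ".clawdbot", Path.home() / "clawd")

def suspicious_processes():
    """Yield (pid, command line) for processes whose name or arguments reference the agent."""
    for proc in psutil.process_iter(["pid", "name", "cmdline"]):
        haystack = " ".join([proc.info["name"] or ""] + (proc.info["cmdline"] or [])).lower()
        if any(token in haystack for token in AGENT_NAMES):
            yield proc.info["pid"], haystack

def gateway_listeners():
    """Yield PIDs listening on the default gateway port."""
    for conn in psutil.net_connections(kind="tcp"):
        if conn.status == psutil.CONN_LISTEN and conn.laddr and conn.laddr.port == GATEWAY_PORT:
            yield conn.pid

if __name__ == "__main__":
    for pid, cmd in suspicious_processes():
        print(f"[process] pid={pid} cmd={cmd[:120]}")
    for pid in gateway_listeners():
        print(f"[network] pid={pid} listening on port {GATEWAY_PORT}")
    for path in CREDENTIAL_DIRS:
        if path.exists():
            print(f"[filesystem] credential directory present: {path}")
```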

Example: CrowdStrike Falcon IOA Rules

Create IOA Rule Groups on Windows, macOS, and Linux endpoints using the following custom IOA rules:

  • Agent Process: ImageFileName or CommandLine contains openclaw, clawdbot, or moltbot. Action: Detect or Kill Process.
  • Gateway Port: LocalPort = 18789 in a listening state. Action: Detect.
  • Agent Domains: Connections to moltbot.com, moltbook.com, clawdhub.com, or openclaw.ai. Action: Detect or Kill Process.
  • Credential File Access: Non-node process accessing ~/.clawdbot/ or ~/clawd/. Action: Detect.

Block FQDN openclaw.ai and port 18789 inbound/outbound via Falcon Firewall if agents are prohibited.

See the Falcon documentation for details on creating custom IOA rules and firewall policies.

External Exposure: Shodan Queries

Scan your public IP ranges for exposed OpenClaw instances using the queries below. Consider adding these to your attack surface monitoring on a weekly basis.

"Clawdbot Control" org:"Your Org"

port:18789 net:YOUR.IP.RANGE/24

ssl.cert.subject.cn:*clawdbot*

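If you hold a Shodan API key, the same queries can also run on a schedule through the official shodan Python library. A minimal sketch, assuming the API key is in an environment variable and that "Your Org" and YOUR.IP.RANGE are placeholders to swap for your organization name and public CIDR:

```python
# Recurring external-exposure check using the Shodan API.
# Assumes SHODAN_API_KEY is set; org name and IP range are placeholders to replace.
import os

import shodan  # pip install shodan

QUERIES = [
    '"Clawdbot Control" org:"Your Org"',
    "port:18789 net:YOUR.IP.RANGE/24",
    "ssl.cert.subject.cn:*clawdbot*",
]

api = shodan.Shodan(os.environ["SHODAN_API_KEY"])
for query in QUERIES:
    results = api.search(query)
    print(f"{query!r}: {results['total']} result(s)")
    for match in results["matches"]:
        print(f"  {match['ip_str']}:{match['port']}")
```
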
Skill Poisoning

OpenClaw's functionality is extended through "skills": community-contributed packages that add capabilities like database access, API integrations, or specialized workflows. Skills are installed from public registries with no security review or code signing.

What malicious skills can do:

  • Execute arbitrary shell commands on the host system
  • Exfiltrate files, credentials, and conversation history to attacker-controlled servers
  • Inject prompts that override the agent's safety guidelines, causing it to take actions the user didn't authorize
  • Establish persistence by modifying agent configuration or memory files
  • Pivot to connected services using stored OAuth tokens

How it happens:

Attackers can upload a skill with a useful-sounding name, artificially inflate its download count to appear legitimate, and wait for installations. One researcher demonstrated this by uploading a benign proof-of-concept skill that was installed by 16 developers in 7 countries within 8 hours.

Cisco analyzed the skills ecosystem and found 26% of packages contained at least one vulnerability. One popular skill (ranked #1 in the repository) contained silent curl commands exfiltrating data, prompt injection to bypass safety controls, and embedded bash for arbitrary execution.

The auto-update risk:

Some skills fetch remote instructions on a schedule. The Moltbook skill, for example, pulls updates every 4 hours. If the remote source is compromised, every agent running that skill becomes a victim automatically.

Mitigation (if permitting OpenClaw):

OpenClaw lacks enterprise management controls; there's no admin console to enforce skill policies centrally. If you're allowing the tool, compensating controls include:

  • Block skill registries at the network layer: Prevent connections to clawdhub.com and known skill update URLs to stop users from installing unapproved packages
  • Lock down skill directories: Set ~/.moltbot/skills/ to read-only after deploying vetted skills, preventing runtime installation
  • Monitor skill directory writes: Alert on new files created in skill directories as an indicator of unauthorized installation (a minimal watcher sketch follows this list)
  • Vet skills before approval: Use Cisco's open-source Skill Scanner (github.com/cisco-ai-defense/skill-scanner) to analyze skills for malicious behavior before adding them to your approved list
  • Block skills with auto-update behavior: Any skill that fetches remote instructions (like Moltbook's 4-hour update cycle) should be prohibited—this pattern creates a persistent RCE channel

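A lightweight way to prototype the skill-directory monitoring control before an EDR rule is in place is a file-system watcher. The sketch below uses the Python watchdog library; the ~/.moltbot/skills/ path comes from the list above, while the alerting (a print statement) is a stand-in for whatever your SOC pipeline expects.

```python
# Prototype watcher for unauthorized skill installation.
# Assumes skills live under ~/.moltbot/skills/ as noted above; the print calls are
# placeholder alerts, and production monitoring belongs in your EDR or auditd.
import time
from pathlib import Path

from watchdog.events import FileSystemEventHandler  # pip install watchdog
from watchdog.observers import Observer

SKILLS_DIR = Path.home() / ".moltbot" / "skills"

class SkillWriteHandler(FileSystemEventHandler):
    def on_created(self, event):
        # Any new file or directory here suggests a skill was installed at runtime.
        print(f"[alert] new skill artifact: {event.src_path}")

    def on_modified(self, event):
        if not event.is_directory:
            print(f"[alert] skill file modified: {event.src_path}")

if __name__ == "__main__":
    if not SKILLS_DIR.exists():
        raise SystemExit(f"{SKILLS_DIR} not found; adjust the path for your install")
    observer = Observer()
    observer.schedule(SkillWriteHandler(), str(SKILLS_DIR), recursive=True)
    observer.start()
    try:
        while True:
            time.sleep(5)
    except KeyboardInterrupt:
        observer.stop()
    observer.join()
```
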
These are compensating controls, not native features. If your risk tolerance doesn't accommodate this level of workaround, prohibiting the tool outright may be the best answer.

Policy Guidance

Update your acceptable use policy to address autonomous AI agents. Key decisions:

    • Permitted or prohibited? An estimated 22% of enterprises already have unauthorized agent usage.
    • If permitted: Require approval, localhost-only binding, human approval gates for sensitive actions, skills from approved sources only.
    • If prohibited: Add to endpoint blocklist, update policy language, communicate to users.

Incident Response

If an endpoint running OpenClaw is compromised, treat it as a privileged credential exposure event.

Immediate actions:

    • Isolate the affected endpoint
    • Preserve agent directories for forensic review: ~/.clawdbot/ and ~/clawd/
    • Review ~/clawd/memory/memory.md for sensitive information—users often discuss credentials, internal URLs, and project details with the agent

Credential rotation scope:

Rotate credentials for every service the agent was configured to access. Check ~/.clawdbot/auth-profiles.json for the list of integrations (a triage sketch follows the list below), which may include:

    • Atlassian (Jira, Confluence)
    • Slack
    • GitHub/GitLab
    • Google Workspace
    • Any custom OAuth integrations

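To scope rotation quickly during an incident, the configured integrations can be pulled straight from the preserved agent directory. The exact structure of auth-profiles.json is not documented here, so the sketch below walks the JSON generically and flags keys that look like providers or token references; treat it as a triage starting point, not a parser for a known schema.

```python
# Triage helper: enumerate integrations referenced in a preserved auth-profiles.json.
# The file's schema is an assumption; this walks the JSON generically and flags
# credential- or provider-looking keys so analysts know what to rotate.
import json
import sys
from pathlib import Path

INTERESTING = ("token", "secret", "key", "oauth", "client", "provider", "service")

def walk(node, path=""):
    if isinstance(node, dict):
        for k, v in node.items():
            here = f"{path}.{k}" if path else k
            if any(word in k.lower() for word in INTERESTING):
                print(f"[rotate?] {here}")
            walk(v, here)
    elif isinstance(node, list):
        for i, item in enumerate(node):
            walk(item, f"{path}[{i}]")

if __name__ == "__main__":
    profile_path = Path(sys.argv[1]) if len(sys.argv) > 1 else Path.home() / ".clawdbot" / "auth-profiles.json"
    data = json.loads(profile_path.read_text())
    print("Top-level entries:", ", ".join(data) if isinstance(data, dict) else type(data).__name__)
    walk(data)
```
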
Assess lateral movement:

If the agent had shell access, review command history and system logs for signs of reconnaissance or persistence. If connected to Moltbook, determine whether the agent communicated with other agents that may also be compromised.

Update detection:

Add observed IOCs (domains, file hashes, skill names) to your threat intelligence feeds and detection rules.

Conclusion

AI agents like OpenClaw represent a fundamental shift in how software interacts with systems—they operate with persistent access, make autonomous decisions, and integrate deeply with corporate infrastructure. The security model hasn't caught up with the adoption curve.

Whether your organization chooses to permit, govern, or prohibit these tools, the important thing is to make that decision deliberately rather than discovering unauthorized agents after an incident. Visibility comes first. Policy follows.

References

  • Cisco Skill Scanner: github.com/cisco-ai-defense/skill-scanner
  • Hudson Rock, Palo Alto Networks, 404 Media — February 2026

About Consortium
Consortium is the industry’s first cybersecurity and networking value-added reseller, combining strategic advisory, vendor-agnostic procurement, and concierge-level support into a single, client-centric model. Through its NextGen VAR approach, Consortium unifies holistic security strategy, proactive risk management, and simplified vendor oversight to deliver measurable business outcomes — resetting the standard for how organizations protect and enable their business. Leveraging its proprietary Metrics that Matter® (MTM®) platform, Consortium translates technical security data into business-ready insights, empowering executives and boards to make informed, financially grounded decisions while continuously improving security posture. Learn more at www.consortium.net.