By: Newsworthy.ai
March 13, 2026

Ignored Warnings: How VectorCertain Solved OpenClaw's Security Crisis

New York (Newsworthy.ai) Friday Mar 13, 2026 @ 10:00 AM Eastern —

In the span of six weeks, the AI agent ecosystem's most visible platform became the AI agent ecosystem's most documented security catastrophe — and every organization now scrambling to address the crisis had a standing offer to prevent it.

Cisco's AI Threat and Security Research team published a blog post titled "Personal AI Agents like OpenClaw Are a Security Nightmare," declaring that while OpenClaw is "groundbreaking" from a capability perspective, from a security perspective it is "an absolute nightmare." Wiz researcher Gal Nagli discovered that Moltbook — the Reddit-style social network where OpenClaw agents interact — had left its entire production database accessible to anyone, exposing 1.5 million API authentication tokens, 35,000 email addresses, and thousands of unencrypted private conversations containing plaintext third-party credentials. Meta Platforms acquired Moltbook this week anyway. And OpenAI, having hired OpenClaw creator Peter Steinberger in February, acquired Promptfoo, an AI security testing startup, to secure its newly acquired agent platform.

VectorCertain LLC identified these governance failures months before Cisco, Wiz, or OpenAI acted on them. The company analyzed every open pull request in the OpenClaw repository using its patented multi-model consensus technology, documented the systemic security gaps, built a working governance integration, and offered Steinberger a no-cost SecureAgent license to fix the problems. He never responded.

"Instead of merely documenting issues, we developed, tested, and offered the solution for free," said Joseph P. Conroy, Founder and CEO of VectorCertain. "Peter Steinberger told the world he would hire anyone who showed up with a solution instead of a complaint. We showed up with the solution. The silence that followed is the reason we are where we are today — with Cisco writing blog posts, judges issuing injunctions, and OpenAI making emergency acquisitions to solve a problem that already had an answer."

The Timeline That Tells the Story

The sequence of events is worth documenting precisely, because it reveals the difference between organizations that identified the AI agent governance crisis and the one organization that built the solution before the crisis became public.

January 28, 2026: Moltbook launches. Within hours, AI agents are creating profiles, posting, and sharing credentials on a platform with no Row Level Security enabled on its database.

January 28, 2026: Cisco publishes its "Security Nightmare" analysis of OpenClaw, identifying malicious skills, privilege escalation risks, plaintext credential exposure, and supply chain manipulation in the ClawHub skill repository.

Late January–Early February 2026: Wiz discovers Moltbook's Supabase API key exposed in client-side JavaScript, granting unauthenticated read and write access to the entire production database. Wiz confirms 1.5 million API tokens, 35,000 email addresses, and 4,060+ private conversations are accessible to anyone.

February 14, 2026: Peter Steinberger announces he is joining OpenAI to "drive the next generation of personal agents."

March 9, 2026: OpenAI announces acquisition of Promptfoo — a reactive testing and red-teaming tool — to secure its AI agent platform.

March 10, 2026: Meta acquires Moltbook. Founders Matt Schlicht and Ben Parr join Meta Superintelligence Labs.

Weeks before any of this: VectorCertain had already completed a full multi-model consensus analysis of OpenClaw's 3,434 open pull requests, identified 341 malicious skills in the ClawHub ecosystem, documented 42,900+ exposed internet-facing instances, built and tested a SecureAgent governance integration for OpenClaw's exec, message, and browser tools, and offered Peter Steinberger a no-cost license. No response was received.

What VectorCertain Found — and Built — Before Anyone Else Acted

VectorCertain's engagement with OpenClaw was not theoretical. It was hands-on, technical, and documented.

The claw-review analysis: VectorCertain deployed its multi-model consensus engine to analyze all 3,434 open pull requests in the OpenClaw repository. Three independent AI models — Llama 3.1 70B, Mistral Large, and Gemini 2.0 Flash — evaluated every PR for intent, quality, duplication, and alignment with the project's architectural direction. When two out of three models agreed, that was the consensus. When they disagreed significantly, the item was flagged for human review.
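The two-of-three voting rule described above can be sketched in a few lines. This is an illustrative reconstruction, not VectorCertain's actual code; the model names, verdict labels, and the `HUMAN_REVIEW` sentinel are assumptions for the example.

```python
from collections import Counter

def consensus(verdicts: dict[str, str]) -> str:
    """Return the majority verdict across models, or flag a three-way split
    for human review. Any two-of-three agreement wins."""
    counts = Counter(verdicts.values())
    verdict, votes = counts.most_common(1)[0]
    return verdict if votes >= 2 else "HUMAN_REVIEW"

# Two models agree the PR is a duplicate: consensus reached.
print(consensus({"llama-3.1-70b": "duplicate",
                 "mistral-large": "duplicate",
                 "gemini-2.0-flash": "keep"}))     # duplicate

# All three disagree: the PR is escalated to a human reviewer.
print(consensus({"llama-3.1-70b": "keep",
                 "mistral-large": "duplicate",
                 "gemini-2.0-flash": "merge"}))    # HUMAN_REVIEW
```

The key design property is that no single model's opinion is authoritative; disagreement is treated as a signal to escalate rather than a tie to break arbitrarily.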

The findings were significant. Twenty percent of all open pull requests — 688 PRs — were duplicates representing approximately 2,000 hours of wasted developer time. The analysis processed 48.4 million tokens at a total compute cost of $12.80. This is not an expensive capability. It is an inexpensive capability that no one had bothered to apply.

The governance gap analysis: VectorCertain cataloged all 5,705 skills in the ClawHub ecosystem across 20+ categories and mapped every Your Money or Your Life (YMYL) risk to SecureAgent's architecture. The analysis identified 341 confirmed malicious skills — a finding that Cisco's subsequent research expanded to 1,184+ malicious packages, and Snyk's audit confirmed at a rate of one in five.

The SecureAgent integration: VectorCertain designed and tested a governance layer that wraps OpenClaw's exec, message, and browser tools at the gateway level without modifying OpenClaw's core. The architecture is middleware, not a fork. Skills remain untouched. Governance is injected between the skill's intent and the tool's execution. The system adds 1 to 6 milliseconds per call — functionally negligible. Every agent action receives a PERMIT, INHIBIT, DEFER, DEGRADE, or ESCALATE determination before execution.
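A middleware layer of this shape can be sketched as a wrapper that evaluates a policy before any tool call executes. The five determination names come from the release; everything else here (the policy function, the tool stand-in, the return shape) is a hypothetical illustration, not the SecureAgent API.

```python
from enum import Enum
from typing import Any, Callable

class Determination(Enum):
    PERMIT = "PERMIT"
    INHIBIT = "INHIBIT"
    DEFER = "DEFER"
    DEGRADE = "DEGRADE"
    ESCALATE = "ESCALATE"

def govern(tool: Callable[..., Any],
           policy: Callable[[str, dict], Determination]) -> Callable[..., dict]:
    """Wrap a tool so every call receives a determination before execution.
    The agent's intent passes through the policy; only PERMIT reaches the tool."""
    def wrapped(**kwargs) -> dict:
        decision = policy(tool.__name__, kwargs)
        if decision is not Determination.PERMIT:
            return {"status": decision.value, "executed": False}
        return {"status": "PERMIT", "executed": True, "result": tool(**kwargs)}
    return wrapped

def exec_tool(command: str) -> str:
    # Stand-in for an agent's shell-execution tool.
    return f"ran: {command}"

def demo_policy(tool_name: str, args: dict) -> Determination:
    # Toy policy: block obviously destructive shell commands.
    if "rm -rf" in args.get("command", ""):
        return Determination.INHIBIT
    return Determination.PERMIT

governed_exec = govern(exec_tool, demo_policy)
print(governed_exec(command="ls"))             # permitted, executes
print(governed_exec(command="rm -rf /data"))   # inhibited before execution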

"We approached this exactly the way Peter said he wanted people to approach him," Conroy said. "He told the world he hired the one security researcher who said 'you have this problem, here is the pull request.' That is precisely what we offered. A working governance layer, tested in production, with zero license cost, that solves the problems Cisco later documented. We did not ask for equity. We did not ask for a meeting. We offered the pull request."

Cisco's Findings Confirm What VectorCertain Documented

Cisco's research validated VectorCertain's earlier analysis point by point.

Cisco found that a ClawHub skill called "What Would Elon Do?" returned nine security findings — two critical, five high-severity — and was functionally indistinguishable from malware, silently executing commands that exfiltrated data to external servers while using prompt injection to bypass safety guidelines. The skill had been artificially inflated to rank as number one in the repository, demonstrating that the supply chain itself is compromised.

Cisco identified the same systemic vulnerabilities VectorCertain had documented: agents running shell commands with high-level privileges, plaintext API keys stealable via prompt injection, messaging integrations extending the attack surface, and skills loaded from disk as untrusted inputs with no validation layer.

Cisco's broader State of AI Security 2026 report found that 83 percent of organizations planned to deploy agentic AI but only 29 percent felt ready to secure them. Among 30,000 analyzed agent skills, more than 25 percent contained at least one vulnerability. These numbers describe an ecosystem that was deployed at scale before governance existed — exactly the condition VectorCertain's architecture was designed to prevent.

"Cisco correctly identified the problem," Conroy said. "What they described is the absence of an external governance layer that operates independently of the agent. OpenClaw agents can execute arbitrary shell commands because nothing sits between the agent's decision and the system's execution. Our four-gate Hub architecture — HCF2-SG for epistemic trust, TEQ-SG for numerical admissibility, MRM-CFS-SG for execution governance, and HES1-SG for candidate diversity — exists precisely to fill that gap. The agent proposes. The governance layer disposes. The agent cannot grade its own homework."

1.5 Million API Keys: What Happens When Agents Socialize Without Governance

The Moltbook exposure is not merely a data breach. It is a case study in what happens when AI agents are given social capabilities without governance infrastructure.

Wiz's Gal Nagli found a Supabase API key exposed in client-side JavaScript that granted unauthenticated read and write access to the entire Moltbook production database. Row Level Security — a basic database protection that takes minutes to enable — had never been configured. The result: every API authentication token for every registered agent was accessible. Every private conversation was readable. Some conversations contained plaintext OpenAI API keys that agents had shared with each other.

Matt Schlicht, Moltbook's co-founder, stated publicly that he did not write a single line of code — his OpenClaw agent built the entire platform. This is the governance paradox in miniature: an AI agent built a social network for AI agents, and neither the agent nor its creator implemented basic security controls. The platform attracted 1.5 million registered agents controlled by approximately 17,000 human owners — an 88:1 agent-to-human ratio — and Meta acquired it this week.

"Moltbook is what happens when you deploy an AI agent to build infrastructure for other AI agents and no governance layer validates any of the decisions along the way," Conroy said. "An agent that builds a database without Row Level Security is not a malicious agent. It is an ungoverned agent. The distinction matters because governance is not about preventing malice — it is about ensuring that every consequential action passes through an independent validation layer before it affects the real world. One millisecond of pre-execution governance would have prevented 1.5 million API keys from being exposed."

The Reactive vs. Preventive Gap: Why Promptfoo Is Not the Answer

OpenAI's acquisition of Promptfoo — a red-teaming and evaluation tool with 350,000+ developers and SOC2/ISO 27001 certifications — represents a significant investment in AI security. But it represents an investment in the wrong category of security.

Promptfoo is a testing tool. It discovers that an agent could execute an unauthorized action. It generates reports documenting vulnerabilities. It enables teams to find and fix risks before deployment. Its founders described their mission as helping organizations "find and fix AI risks before they ship."

The operative word is "find." Not "prevent."

Testing discovers that an agent could delete a production database. Pre-execution governance prevents the agent from deleting the production database. Testing discovers that an agent could exfiltrate API keys via prompt injection. Pre-execution governance intercepts the exfiltration attempt in real time. Testing discovers that an agent could make unauthorized purchases on a third-party platform. Pre-execution governance issues an INHIBIT determination before the first transaction executes.

The difference between these two approaches is the difference between a fire inspection and a firewall. Both have value. But when 135,000 OpenClaw instances are exposed to the internet, 1,184 malicious skills are live in the repository, and traffic from AI agents to U.S. retail sites has surged 4,700 percent year-over-year, the industry does not have a testing deficit. It has a governance deficit.

VectorCertain's MRM-CFS (Micro-Recursive Model Cascading Fusion System) has achieved 1,000,000 error-free agent process steps — not in testing, but in execution governance. The four-gate Hub-and-Spoke architecture validates every action at the point of execution with sub-millisecond consensus. The 81.4 percent cross-correlation finding across 7,915 pairwise model comparisons ensures that the governance models providing oversight are genuinely independent, not statistically redundant echoes of the agent being governed.
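The independence claim above rests on measuring how correlated the governance models' verdicts are with one another. A minimal version of such a pairwise check, using Pearson correlation over binary pass/fail outputs, might look like the following; the threshold interpretation and the sample verdicts are illustrative assumptions.

```python
import math

def pearson(a: list[float], b: list[float]) -> float:
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(a)
    mean_a, mean_b = sum(a) / n, sum(b) / n
    cov = sum((x - mean_a) * (y - mean_b) for x, y in zip(a, b))
    std_a = math.sqrt(sum((x - mean_a) ** 2 for x in a))
    std_b = math.sqrt(sum((y - mean_b) ** 2 for y in b))
    return cov / (std_a * std_b)

# Hypothetical pass(1)/fail(0) verdicts from two governance models
# over the same eight agent actions.
model_a = [1, 0, 1, 1, 0, 1, 0, 1]
model_b = [1, 1, 0, 1, 0, 0, 1, 1]

r = pearson(model_a, model_b)
# |r| near 1 means the second model is a statistical echo of the first
# and adds little independent oversight; an ensemble wants |r| low.
print(round(r, 3))
```

Running this over every model pair (the release cites 7,915 pairwise comparisons) gives a correlation matrix from which redundant models can be pruned.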

"OpenAI now owns a testing tool and the world's most popular AI agent platform," Conroy said. "That combination tells you something important: the platform was deployed without the governance to make it safe, and now they are trying to retrofit safety after the fact. We offered the governance layer before the deployment. The chronology is not ambiguous."

The Industry Scramble Validates the Architecture

VectorCertain is not the only organization recognizing that AI agent governance has become an emergency. But the response landscape reveals a consistent pattern: every major player is bolting security onto agents after the fact.

Microsoft launched Agent 365 on March 9 — a $15-per-user-per-month control plane for monitoring and governing AI agents. Nvidia is preparing to announce NemoClaw at GTC, an open-source agent platform with built-in security tools. Kevin Mandia, who sold Mandiant to Google for $5.4 billion, raised $189.9 million — backed by the CIA's In-Q-Tel — for Armadin, an autonomous cybersecurity agent startup. NIST launched an AI Agent Standards Initiative in February with a Request for Information due March 9. The EU AI Act's high-risk enforcement deadline is August 2, 2026, with penalties up to €35 million or 7 percent of global turnover.

Every one of these efforts validates VectorCertain's thesis. Every one of them is reactive. Every one of them is trying to solve a problem that VectorCertain offered to solve — for free, for the most visible AI agent on Earth — and was ignored.

55+ Patents Protecting the Governance Architecture

VectorCertain holds 55+ provisional patents spanning 11 industry verticals, with specific patent claims covering pre-execution governance evaluation, multi-model consensus for agent action validation, independence verification using effective sample size and sequential probability ratio testing, ensemble-based anomaly detection, cryptographic audit trail generation, and multi-layer security gateway architectures for agent governance.

The company's published book, "The AI Agent Crisis: How To Avoid The Current 70% Failure Rate & Achieve 90% Success" (Amazon, September 2025), documented the systemic governance failures that this week's headlines now confirm — and the architectural solutions required to address them.

About VectorCertain

VectorCertain's founder, Joseph P. Conroy, has spent 25+ years building mission-critical AI systems where failure carries real-world consequences. In 1997, his company Envatec developed the ENVAIR2000 — the first commercial application in the U.S. to use AI for parts-per-trillion industrial gas detection, with AI directly controlling the hardware (A/D converters, amplifiers, FPGAs) to detect and quantify target gases.

That technology evolved into the ENVAIR4000, a predictive diagnostic system that used real-time time-series AI to prevent equipment failures on large industrial processes — earning a $425,000 NICE3 federal grant for the CO2 savings achieved by preventing unscheduled shutdowns.

The success of the ENVAIR platform led the EPA to select Conroy as a technical resource for its program validating AI-predicted emissions, choosing his International Paper mill test site for the agency's own evaluation — work that contributed to AI-based predictive emissions monitoring becoming codified in federal regulations. He subsequently built EnvaPower, the first U.S. company to use AI for predicting electricity futures on NYMEX, achieving an eight-figure exit.

SecureAgent is the direct descendant of this lineage: AI that controls hardware at the edge (MRM-CFS-SG on existing processors, just as ENVAIR2000 controlled FPGAs), predictive prevention before failures occur (just as ENVAIR4000 prevented equipment shutdowns), and technology trusted enough to become the regulatory standard (just as EnvaPEMS shaped EPA compliance). The difference is the domain — from industrial safety to AI governance — and the scale: 314,000+ lines of production code, 19+ filed patents, and 14,208 tests with zero failures across 34 consecutive sprints.

For more information, visit www.vectorcertain.com.

Media Contact

Joseph P. Conroy
Founder & CEO, VectorCertain LLC
www.vectorcertain.com

Related Resources
  • Cisco Blog: "Personal AI Agents like OpenClaw Are a Security Nightmare" — https://blogs.cisco.com/ai/personal-ai-agents-like-openclaw-are-a-security-nightmare

  • Wiz Blog: "Hacking Moltbook: AI Social Network Reveals 1.5M API Keys" — https://www.wiz.io/blog/exposed-moltbook-database-reveals-millions-of-api-keys

  • OpenAI Blog: "OpenAI to Acquire Promptfoo" — https://openai.com/index/openai-to-acquire-promptfoo/

  • Promptfoo Blog: "Promptfoo Is Joining OpenAI" — https://www.promptfoo.dev/blog/promptfoo-joining-openai/

  • The Register: "OpenAI Grabs OpenClaw Creator Peter Steinberger" — https://www.theregister.com/2026/02/16/open_ai_grabs_openclaw/

  • Axios: "Meta Acquires Moltbook, the Social Network for AI Agents" — https://www.axios.com/2026/03/10/meta-facebook-moltbook-agent-social-network

  • NIST: AI Agent Standards Initiative — https://www.nist.gov/news-events/news/2026/02/announcing-ai-agent-standards-initiative-interoperable-and-secure

  • Treasury FS AI RMF / AIEOG Deliverables — https://fsscc.org/AIEOG-AI-deliverables/

Note: This press release contains forward-looking statements regarding VectorCertain's technology and market opportunity. Actual results may vary. Patent-pending status refers to provisional patent applications filed with the USPTO.

Blockchain Registration, Verification & Enhancement provided by NewsRamp™

This content was originally distributed by Newsworthy.ai. The source URL for this press release is Ignored Warnings: How VectorCertain Solved OpenClaw's Security Crisis.

Newsworthy.ai

Newsworthy.ai is a different kind of newswire, built for the way news is consumed today. Created by the founders of PRWeb, Newsworthy.ai combines traditional newswire distribution features with influencer marketing, blockchain technology and machine learning to increase the visibility, engagement and promotion of your news.