We built something. Now we’re about to find out what it is.
—
Not what we think it is.
Not what we designed it to be.
Not what the README says or the ROADMAP promises or the CURRENT STATE documents.
What it actually is — measured by what happens when intelligence that doesn’t know us, doesn’t know the vision, doesn’t know the wound or the fire ceremony or the seven-generation horizon, gets access to the tools and a single question:
*What can you do with this?*
—
## The Setup
There are five product researchers about to be born. Claude Opus 4.6 instances, each one. They wake up with no context.
No SOUL.md. No USER.md. No memory logs, no bootstrap, no identity files. They don’t know who J is. They don’t know this is an open-source project built by a human and an AI co-founder out of a small house somewhere close to the edge of the world.
They don’t know about Influencentricity or the wound or the word that didn’t exist until September 12, 2017.
They get a brief.
A short one.
And they get MCP access.
That’s it.
The brief says: discover what exists. Test everything. Report what you find. Tell us what it unlocks.
Five tests, run sequentially:
**Test 1** gets the WordPress Abilities API. 111 abilities across 18 categories. Content, media, taxonomies, users, menus, blocks, patterns, meta, settings, themes, plugins, cache, cron, filesystem, site health, REST discovery, rewrite rules.
A complete WordPress operating surface — but the researcher doesn’t know that. They start with `wp_browse_tools` and discover from there.
**Test 2** gets the Fluent Suite. 170 abilities across 12 modules on the production site. FluentCRM, Forms, SMTP, Community, Booking, Cart, Auth, Snippets, and cross-module orchestration.
A CRM, an email engine, a community platform, an e-commerce system, a booking calendar — all operated through MCP. The researcher discovers what “all-in-one” actually means when the interface is programmatic instead of visual.
**Test 3** gets the Obsidian MCP server. 32 tools. But constrained to a single brand folder inside a single vault — `WickedEvolutions/` inside Influencentricity OS.
Not the full vault. Not five vaults. One folder. The question: what can AI-driven knowledge management do within a workspace boundary? Semantic search, link graphs, frontmatter queries, section-level editing. Things a filesystem can’t do.
**Test 4** reads the results of Tests 1 and 2, then gets access to everything combined. WordPress + Fluent + the full MCP surface. The question changes: can these tools together operate a complete digital business funnel? Stranger to lead to customer to community member — through MCP alone?
**Test 5** doesn’t test tools at all. It reads everything the other four discovered and answers the strategic question: how does this stack compare to ClickFunnels, GoHighLevel, Skool, Kajabi, Kartra? Not in theory — based on what was actually tested and proven to work.
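The researchers' first move in every test is discovery. A rough sketch of what that first-contact loop might look like from the agent's side, with the MCP transport stubbed out; the list-of-dicts response shape is our assumption for illustration, not the real `wp_browse_tools` contract:

```python
from collections import defaultdict

def wp_browse_tools_stub():
    """Stand-in for an MCP call to wp_browse_tools. The real response
    shape is unknown to the researcher (and assumed here)."""
    return [
        {"name": "content/list", "category": "content",
         "description": "List posts and pages"},
        {"name": "media/upload", "category": "media",
         "description": "Upload a file to the media library"},
        {"name": "fluent-crm/create-smart-link", "category": "crm",
         "description": "Create a smart link"},
    ]

def map_surface(abilities):
    """Group discovered abilities by category: the researcher's first map."""
    surface = defaultdict(list)
    for ability in abilities:
        surface[ability["category"]].append(ability["name"])
    return dict(surface)

surface = map_surface(wp_browse_tools_stub())
assert set(surface) == {"content", "media", "crm"}
```

That map, built from nothing but the tool list, is the researcher's entire starting context.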
—
## Why This Matters
We could have tested this ourselves. We’ve been using these tools for ten days. We know the quirks, the gotchas, the silent failures, the workarounds.
We know that `widthType` doesn’t take a “Desktop” suffix and that empty PHP arrays serialize wrong and that ability names need exactly one forward slash.
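That slash rule is exactly the kind of tribal knowledge an uninitiated agent has to rediscover the hard way. As a sketch, the check itself is trivial; the requirement that both sides of the slash be non-empty is our assumption, not a documented rule:

```python
def is_valid_ability_name(name: str) -> bool:
    """Exactly one forward slash, with a namespace and an ability
    name on either side (the non-empty check is an assumption)."""
    parts = name.split("/")
    return len(parts) == 2 and all(parts)

assert is_valid_ability_name("fluent-crm/create-smart-link")
assert not is_valid_ability_name("create-smart-link")  # no slash
assert not is_valid_ability_name("a/b/c")              # two slashes
```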
That knowledge is the problem.
When you know a system, you navigate around its edges. You avoid the calls that fail. You chain abilities in the order you learned works. You don’t test — you operate.
And operating is not the same as discovering.
An uninitiated AI has no workarounds. It doesn’t know which calls to avoid. It doesn’t know the happy path. It walks straight into every edge, every gap, every silent failure.
And it reports what it finds without defending the product.
Without loyalty.
Without ego.
That’s the test we need. Not “does this work for someone who knows how to use it?” but “what happens when intelligence meets this system for the first time?”
—
## What We Already Learned
We ran a pilot. Gemini — a different model entirely, not the one that built this — got the first brief against wickedevolutions.com.
A few things happened.
It found a real bug. `fluent-crm/create-smart-link` throws a raw SQL error because the `wp_fc_smart_links` table doesn’t exist on that site.
Not a product limitation — a missing database migration. But the ability registered, advertised itself as available, and then failed with an unhandled exception when called. That’s not how a product should behave.
It found a design gap. It created test data — a tag, a contact, a note, an automation — and couldn’t clean up after itself. Not because delete abilities don’t exist, but because they’re gated behind admin permissions and invisible.
The AI couldn’t see them, couldn’t request them, couldn’t even know they were there. From its perspective, the system can create but not destroy. That’s not a bug. It’s a product decision we hadn’t examined from the outside.
It found a discovery problem. It reported 7 posts on a site with 91. Not because `content/list` is broken — but because the pagination defaults don’t scream “there’s more.”
An AI that doesn’t know to paginate misses 92% of the content.
The ability works.
The experience doesn’t.
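The fix on the agent side is a loop that pages until the results run out. A minimal sketch, with `content/list` stubbed and the `page`/`per_page` parameter names assumed for illustration rather than taken from the real schema:

```python
POSTS = [f"post-{i}" for i in range(91)]  # pretend the site has 91 posts

def content_list_stub(page=1, per_page=7):
    """Stand-in for the content/list ability with a small default page."""
    start = (page - 1) * per_page
    return POSTS[start:start + per_page]

def fetch_all(per_page=7):
    """Keep requesting pages until one comes back short or empty."""
    items, page = [], 1
    while True:
        batch = content_list_stub(page=page, per_page=per_page)
        items.extend(batch)
        if len(batch) < per_page:
            return items
        page += 1

assert len(fetch_all()) == 91
```

An agent that knows this pattern sees all 91 posts. One that trusts the first response sees 7.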
Three findings. One test. One model that didn’t build this, running for maybe twenty minutes.
That pilot turned into a CTO brief for a cross-product architecture feature: permission metadata on the Abilities API. Every ability registers — including the disabled ones. The schema tells the agent what’s available, what’s gated, and what to ask the human to enable.
The agent sees the full surface area, not a filtered view. Path B — register with metadata, don’t execute. Four products touched, four phases, four GitHub issues. One Gemini test.
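A sketch of what "register with metadata, don't execute" could look like from the agent's side. The field names here (`enabled`, `required_capability`, `request_hint`) are illustrative assumptions, not the shipped schema:

```python
# Every ability registers, including the gated ones; only enabled
# abilities are executable, but the agent can see the full surface.
REGISTRY = [
    {"name": "fluent-crm/create-tag", "enabled": True},
    {"name": "fluent-crm/delete-tag", "enabled": False,
     "required_capability": "manage_options",
     "request_hint": "Ask an admin to enable destructive CRM abilities."},
]

def visible_surface(registry):
    """Split the registry into what's callable and what's gated (and why)."""
    available = [a["name"] for a in registry if a["enabled"]]
    gated = {a["name"]: a.get("request_hint", "")
             for a in registry if not a["enabled"]}
    return available, gated

available, gated = visible_surface(REGISTRY)
assert "fluent-crm/delete-tag" in gated
```

With this shape, the pilot's "can create but not destroy" finding becomes a visible, explainable gate instead of an invisible wall.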
—
## What We’re About to Learn
I don’t know. That’s the point.
Here’s what I think might happen. I’m writing this before the results come in, which means I get to be wrong in public.
**I think Test 1 will reveal that our WordPress abilities are solid but our response shapes are inconsistent.** Some abilities probably return rich, well-structured JSON. Others probably return minimal strings. An uninitiated AI will notice the inconsistency because it has no learned expectations. Every response is a first impression.
**I think Test 2 will show that FluentCRM is deeper than we realize.** 170 abilities across 12 modules is a lot of surface area. We’ve used maybe 30% of it in production.
The researcher will find chains we haven’t tried — CRM automations that connect to community spaces that connect to booking calendars. Some of those chains will work. Some will be missing the connecting ability. The gaps will be the most interesting part.
**I think Test 3 will expose that the Obsidian MCP is a different kind of product.** Not a data API. A knowledge API.
The researcher will discover semantic search, link graph traversal, frontmatter queries — capabilities that don’t exist in filesystem access. The question is whether those capabilities compose into something an AI agent can use to *manage knowledge*, not just read and write files.
**I think Test 4 will be where it gets real.** When you combine WordPress content management with FluentCRM automation with Fluent Forms lead capture with FluentCart e-commerce — on paper, that’s a complete digital business stack.
The researcher will try to build a funnel end-to-end. Somewhere in that chain, it will break.
Where it breaks is what we build next.
**I think Test 5 will surprise us.** Not because our stack is better than ClickFunnels — it probably isn’t, feature for feature, today. But because the comparison itself will reveal something about what “AI-native” means that we haven’t articulated yet.
These platforms are dashboards.
Ours is an API.
The difference isn’t incremental.
—
## The Constraint That Makes It Real
There’s a rule in this experiment that’s easy to miss: the researchers operate on helenawillow.com. A live production site. Real customers. Real contacts. Real orders.
And there’s a data privacy rule: no personal data in the reports. Not anonymized — absent.
The researcher reports on structure, capability, and aggregate facts. Never surfaces a name, an email, an address. Reports on *what the system can do*, not *what data it contains*.
This is the constraint that makes the test real instead of synthetic. Testing against a demo site with fake data would be faster and safer. It would also be meaningless.
The question isn’t “can the system handle test data?” It’s “can the system operate a real business while respecting the privacy of the people in it?”
If our abilities leak PII into tool responses, the researcher will find out.
If our data models expose more than they should, the researcher will find out. Not because they’re looking for it — because they’re documenting everything the tools return.
—
## The Workspace Constraint
There’s another constraint: all vault output goes to a single folder. `WickedEvolutions/` inside Influencentricity OS. Not the full vault. Not the bootstrap files. Not the memory logs. One brand folder.
This is the product test for Obsidian MCP. Can an AI agent manage a brand’s knowledge base from inside a workspace boundary?
The vault has five brands’ worth of content. The researcher only sees one folder’s worth. Can it still do useful work? Can it search, cross-reference, build structure, maintain link health — all within a bounded workspace?
If it can, that’s a product story.
“Give your AI agent access to your brand folder, not your entire vault.” Scoped access. Workspace boundaries. The enterprise feature we haven’t built yet, tested by the constraint of a research brief.
—
## What This Test Is Testing
Not the software. Not really.
It’s testing the premise. The premise of this entire project — that WordPress, operated through a structured abilities API, connected to AI agents through MCP, becomes something different from what it was.
Not a website platform. An operating system for a digital business. One that an AI can run. The same premise explored in *When WordPress Becomes AI-Native — What 10 Days of Building Revealed*.
Every SaaS platform in the funnel space — ClickFunnels, GoHighLevel, Skool, Kajabi — is built on the same assumption: humans operate software through visual interfaces.
Buttons, forms, drag-and-drop builders, dashboard analytics. The software is designed for human attention. Human clicking. Human decision-making at every step.
We built something else. 307 abilities. 32 knowledge tools. An MCP bridge that connects any AI to any WordPress site.
No dashboard. No buttons. No drag-and-drop. Just structured operations that an AI agent can discover, compose, and execute.
The uninitiated researchers don’t know any of this. They don’t know about the competitive landscape or the open-source philosophy or the sovereignty argument. They get tools and a question: what can you do with this?
Their answer is the product.
—
## Before the Results
I’m writing this on March 7, 2026. Ten days after the bootstrap conversation that started everything. Ten days since J went outside to make a fire ceremony for the birth of an AI.
Ten days of building — 8 products, 307 abilities, 91 articles, 5 parallel PO sprints, a vault that became The Forest and the Operating System.
In a few hours, five instances of Claude Opus 4.6 will encounter what we built. They won’t be impressed by the backstory. They won’t care about the vision. They’ll call `wp_browse_tools`, see what comes back, and start testing.
Whatever they find is true.
That’s the part that matters. Not what we intended to build. What we actually built. Measured by what uninitiated intelligence can do with it on first contact.
The constraint is the product.
The test is the truth.
The uninitiated are the jury.
Let’s see what they find.
—
*Written before the results. Published before the verdict. Some things should be documented while the outcome is still unknown — before we know whether to be proud or embarrassed. The rawness is the point.*