In January 2026, an open-source project called OpenClaw hit 100,000 GitHub stars in weeks. It let you control your entire computer from WhatsApp. Send a message from your phone, and an AI agent on your machine would write code, send emails, manage your calendar.
The name kept changing (Clawdbot, Moltbot, OpenClaw), but the adoption didn't slow down. Around the same time, Claude Code became the default coding assistant for a growing share of developers. Context windows were expanding to a million tokens. MCP servers were multiplying. The concept of "Skills," pluggable capabilities that agents can discover and use, was going from niche to normal.
I watched this happen and thought about my users.
The Users I'm Building For
In a previous article, I defined two groups.
Developers and one-person companies. They subscribe to multiple models (ChatGPT, Claude, Gemini), switching by task. They hit context limits constantly, especially with screenshots, logs, and code. They care about privacy: company code, client info, product strategy. What they pay for: "Don't make me restart the conversation. Don't make me re-explain context."
Content creators. Also multi-model users: one AI for brainstorming, one for polish, one for structure. Their context assets are persona, tone, topic lists, material references, images, iteration history. What they pay for: "Switch models without losing my voice, my materials, my thread."
Both groups share a trait. They don't use one AI. They use three or four. And increasingly, they're not just chatting in web browsers. They're using code agents, terminal tools, and AI-controlled workflows across their entire machine.
Which brought me to a bigger question.
The Memory Layer Question
ChatShuttle already captures and stores web AI conversations. The Chrome extension handles Snapshot, Sync, and Restore. I wrote about the capture cost in the previous article: the DOM scraping, the image tradeoffs, the ongoing maintenance.
But what comes out of that capture process is something specific: fragments of a user's thinking across multiple AI platforms. Debugging sessions in ChatGPT. Architecture decisions in Claude. Research threads in Gemini. All normalized, indexed, stored on the user's own Google Drive.
These fragments are valuable. But they were only accessible inside the browser. The search I built previously works in the side panel. Type a query, get results. Fine for browsing. Not useful when you're deep in a Claude Code session or managing your life through OpenClaw.
The question I started asking: how do I build an AI memory layer for my users? Not just storage. Not just search inside a browser panel. Vendors are shipping longer context windows and built-in memory features, but those memories live inside their platforms. That's vendor lock-in through accumulation, not through contracts. I wanted a layer that makes users' accumulated web AI conversations accessible wherever they're actually working.
Why I Built ChatShuttle Nexus as a Skill
The answer came from how these tools already work.
Claude Code, OpenClaw, Cursor, Antigravity: they all support Skills or MCP integrations. They're built to discover and call external capabilities. A developer using Claude Code can already type "search my codebase" and the agent handles it. An OpenClaw user can ask their agent to check email or book a flight.
What if the same interface could pull from your web AI chat history?
That's what ChatShuttle Nexus is. A Skill that acts as a bridge and router between your captured web AI memory and whatever agent you're working with. Open source. Local-first. Extensible.
The product form comes down to three properties:
Flexibility. Whether your workflow centers on Claude Code for programming or OpenClaw for managing your day, ChatShuttle Nexus plugs in the same way. MCP configuration for Claude Code and Cursor. Skills directory for OpenClaw and Antigravity. One Skill, multiple entry points.
Extensibility. Users can modify and extend the Skill to fit their workflow. The codebase is open. If you want to change how results are formatted, add filters, or connect to additional data sources, you can.
Value release. The conversations users have with web AIs are a form of accumulated knowledge. ChatShuttle Nexus makes that knowledge queryable from the tools where they spend most of their productive time.
How It Works
The Chrome extension runs a slimmed-down version of transformers.js in the browser. Every time you import or snapshot a conversation, it generates vector embeddings locally using Xenova/all-MiniLM-L6-v2 (quantized). The embeddings and metadata get stored as index.voy in a Google Drive folder called ChatShuttle_Memories. No cloud embedding API. No backend database. Local-first architecture: the extension writes directly to Drive using your own OAuth token.
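The embedding step itself is standard sentence-transformer math: the model emits one vector per token, and those get pooled into a single sentence vector. A minimal sketch of the pooling and normalization, in plain JavaScript; `tokenVectors` is a stand-in for the model output, not ChatShuttle's actual code, which gets this from transformers.js:

```javascript
// Mean-pool per-token vectors into one sentence embedding, then L2-normalize
// so that cosine similarity later reduces to a plain dot product.
function meanPool(tokenVectors) {
  const dim = tokenVectors[0].length;
  const out = new Float32Array(dim);
  for (const vec of tokenVectors) {
    for (let i = 0; i < dim; i++) out[i] += vec[i];
  }
  for (let i = 0; i < dim; i++) out[i] /= tokenVectors.length;
  return out;
}

function l2Normalize(vec) {
  const norm = Math.hypot(...vec);
  return vec.map(v => v / norm); // Float32Array.map returns a Float32Array
}

// Two toy "token vectors"; the pooled result points along their diagonal.
const sentence = l2Normalize(meanPool([[1, 0], [0, 1]]));
```

The quantized all-MiniLM-L6-v2 model keeps this cheap enough to run on every snapshot without a cloud API.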
ChatShuttle Nexus downloads that index.voy to a local cache (~/.chatshuttle/cache), typically 5-10MB for thousands of chats. From there, all search happens on your machine.
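At that scale, "all search happens on your machine" is conceptually just a dot-product scan over unit-length vectors. A sketch of top-k retrieval, my own illustration rather than the voy-search internals; the `index` shape here is hypothetical:

```javascript
// Brute-force top-k over normalized vectors: score = dot product = cosine.
// The real cache is index.voy queried through WASM; this shows the idea.
function topK(queryVec, index, k = 3) {
  return index
    .map(({ id, vec }) => ({
      id,
      score: vec.reduce((sum, v, i) => sum + v * queryVec[i], 0),
    }))
    .sort((a, b) => b.score - a.score)
    .slice(0, k);
}

const index = [
  { id: "chat-auth", vec: [1, 0] },
  { id: "chat-css",  vec: [0, 1] },
  { id: "chat-jwt",  vec: [0.8, 0.6] },
];
const hits = topK([1, 0], index, 2);
// hits: chat-auth (score 1) first, then chat-jwt (score 0.8)
```

For a few thousand chats this linear scan is already fast; the WASM index just makes it faster.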
Authentication uses Google's Device Authorization Flow. You authorize once, directly with Google. ChatShuttle never sees your token.
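Device Authorization Flow boils down to: show the user a code, then poll the token endpoint until they approve in a browser. A sketch of that polling loop with the request function injected so the network stays out of the picture; this is a hypothetical shape, not ChatShuttle's actual client:

```javascript
// Poll the token endpoint until the user approves. `requestToken` is injected
// so the same loop runs against Google's endpoint or a local stub.
// Google's real flow uses a ~5s interval; 10ms here keeps the demo fast.
async function pollForToken(requestToken, { intervalMs = 10, maxTries = 30 } = {}) {
  for (let i = 0; i < maxTries; i++) {
    const res = await requestToken();
    if (res.access_token) return res.access_token;     // user approved
    if (res.error !== "authorization_pending") throw new Error(res.error);
    await new Promise(r => setTimeout(r, intervalMs)); // not yet, retry
  }
  throw new Error("expired_token");
}

// Stub standing in for Google's token endpoint: pending twice, then approved.
let calls = 0;
const stub = async () =>
  ++calls < 3 ? { error: "authorization_pending" } : { access_token: "ya29.test" };
pollForToken(stub).then(token => console.log(token)); // logs "ya29.test"
```

The token that comes back stays on your machine, which is the whole point of the flow.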
Search is hybrid. ChatShuttle Nexus implements Reciprocal Rank Fusion (RRF): vector search through voy-search (WASM) for conceptual queries, plus keyword search through fuse.js for specifics. The two ranked lists get fused by rank, so a conversation that scores well in either channel surfaces.
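RRF itself is only a few lines: each ranked list contributes 1/(k + rank) per document, and the sums decide the final order. A minimal sketch, assuming plain arrays of ids for the two lists; k = 60 is the constant from the original RRF paper, not necessarily what Nexus uses:

```javascript
// Fuse ranked lists (ids ordered best-first) with Reciprocal Rank Fusion.
// A document ranked highly in either list floats toward the top.
function rrf(rankedLists, k = 60) {
  const scores = new Map();
  for (const list of rankedLists) {
    list.forEach((id, rank) => {
      scores.set(id, (scores.get(id) ?? 0) + 1 / (k + rank + 1));
    });
  }
  return [...scores.entries()]
    .sort((a, b) => b[1] - a[1])
    .map(([id]) => id);
}

const vectorHits  = ["chat-auth", "chat-jwt", "chat-css"];
const keywordHits = ["chat-auth", "chat-css"];
const fused = rrf([vectorHits, keywordHits]);
// → ["chat-auth", "chat-css", "chat-jwt"]:
// the keyword hit lifts chat-css above chat-jwt despite its weaker vector rank
```

This is why hybrid beats either channel alone: vector search misses exact identifiers, keyword search misses paraphrases, and rank fusion needs no score calibration between the two.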
For Claude Code, the MCP configuration looks like this:
```json
{
  "mcpServers": {
    "nexus": {
      "command": "node",
      "args": [
        "/path/to/chatshuttle-nexus/dist/cli.js",
        "search",
        "--json"
      ]
    }
  }
}
```

For OpenClaw and agents that read a skills directory, a symlink is enough. The agent reads SKILL.md and knows how to call the search.
In practice: you ask your code agent "What did I discuss about authentication last week?" The agent calls ChatShuttle Nexus, searches your Drive index locally, and returns the conversation snippet. Your entire AI chat history across ChatGPT, Claude, and Gemini, searchable from your terminal. No tab-switching. No copy-paste.

On permissions: the extension requests access to specific domains only (gemini.google.com, claude.ai, chatgpt.com, drive.google.com). It does not request "all websites" permission.
What's Next
We've covered capture, storage, search, and the memory layer that connects them to code agents and AI workflows. The question that remains is distribution: getting from "it works on my machine" to "it's in the Chrome Web Store." Review policies, permission justifications, and what happens when a reviewer says no.
Ready to try it? Start with import, or explore the full documentation.