In the last article, I explained why I got sick of "AI Shuffle," the constant context loss when switching between ChatGPT, Claude, and Gemini.
I had a 135 MB export file sitting in my Downloads folder. The data was there. It just wasn't portable.
This article is about the architecture I chose to fix that.
Drive-as-Hub Architecture
The first decision wasn't about features. It was about what I wouldn't do.
Two things shaped how I designed ChatShuttle.
First, I care about user privacy. If I'm building a tool that handles people's AI conversations — personal notes, work context, things they've thought through in detail — I don't want to be in a position where I could see any of that. The cleanest way to achieve that: don't store it.
Second, this is my first one-person company. I wanted to focus on the core value (making your conversations portable) and not spread myself thin. Running a server means infrastructure, security, uptime, scaling — a whole second job. If I could avoid that, I would.
So I asked: is there an architecture that solves both?
Here's what I landed on before writing any code:
The user's Google Drive becomes the hub. The browser extension writes to it. Local machines read from it. I'm not in the middle. Your AI chat history stays yours.
This meant:
- No ChatShuttle backend
- No database storing your conversations
- No API processing your text
Just your browser, your Drive, and whatever device you want to use next.
Those two constraints (privacy + operational simplicity) led to three design principles. I call them the Local-First 3 Red Lines:
- No server. The extension talks directly to the user's Google Drive. There's no ChatShuttle backend processing your conversations.
- No storing user content. I don't want to be in a position where I could see your chats, even accidentally. The data path goes from your browser to your Drive. I'm not a waypoint.
- No subscription dependent on hosting. If the business model requires a server, I'd be locking myself into ongoing costs — and locking users into ongoing payments. One-time purchase, local execution felt cleaner.
These aren't universal truths. Plenty of great products run on servers and subscriptions. But for this project, given what I was building and how I think about privacy, these lines made sense for me.
Writer / Hub / Reader
Here's the architecture that emerged. I think of it as Writer / Hub / Reader:
Writer = Browser Extension
The Chrome extension runs the embedding model locally using transformers.js. When you import a conversation, it generates vectors on your CPU. No API call. No round-trip.
The output is two things:
- Your chat data (PDF with images)
- An index file called index.voy
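Conceptually, the index is just vectors plus nearest-neighbor lookup. Here's a minimal, self-contained sketch of that idea in plain JavaScript. To be clear: the actual extension uses transformers.js to produce the vectors and voy-search (WASM) as the index; the toy embedder-free vectors and IDs below are made up for illustration.

```javascript
// Cosine similarity between two equal-length vectors.
function cosine(a, b) {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Brute-force nearest-neighbor search over an in-memory index.
// A real vector engine like voy-search does this far more efficiently.
function search(index, queryVec, k = 3) {
  return index
    .map(({ id, vec }) => ({ id, score: cosine(vec, queryVec) }))
    .sort((x, y) => y.score - x.score)
    .slice(0, k);
}

// Hypothetical 3-dimensional vectors; real embeddings come from a
// local model (e.g. via transformers.js) and have hundreds of dims.
const index = [
  { id: 'chat-1', vec: [0.9, 0.1, 0.0] },
  { id: 'chat-2', vec: [0.1, 0.9, 0.0] },
  { id: 'chat-3', vec: [0.0, 0.2, 0.9] },
];

console.log(search(index, [0.8, 0.2, 0.1], 2).map(r => r.id));
// → [ 'chat-1', 'chat-2' ]
```

The point of the sketch: everything a search needs (vectors, similarity, ranking) can live in one file on the client. That's what makes shipping index.voy to Drive enough.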
Hub = User's Google Drive
Both the PDF and the index get synced to a folder called ChatShuttle_Memories in your Drive.
This folder is yours. You can open it, browse it, delete it. If you delete the folder, your data is gone. I have no backup because I never had access.
The Drive scope is drive.file — meaning the extension can only touch files it created. It can't read your other documents or photos.
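The scope request lives in the extension's manifest. A hypothetical MV3 manifest fragment showing where drive.file is declared (the client_id is a placeholder, and ChatShuttle's real manifest may differ):

```json
{
  "manifest_version": 3,
  "permissions": ["identity"],
  "oauth2": {
    "client_id": "YOUR_CLIENT_ID.apps.googleusercontent.com",
    "scopes": ["https://www.googleapis.com/auth/drive.file"]
  }
}
```

Because the scope is baked into the manifest, it's also what Google shows users on the consent screen: access to files this app created, nothing else.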
Reader = Local Machine
On the reader side (for example, ChatShuttle's local companion for CLI agents), the index gets downloaded to a cache on your computer. Search happens locally using voy-search (a WASM-based vector engine).
The Device Authorization Flow handles authentication. When you authorize, the token stays on your machine. I never see your Google credentials.
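The flow itself is simple to sketch. The two endpoint URLs below are Google's documented device-flow endpoints; everything else (the httpPost and sleep helpers, the clientId) is illustrative wiring, not ChatShuttle's actual code, and Google's real token request for installed apps carries additional parameters such as the client secret.

```javascript
// Sketch of the OAuth 2.0 Device Authorization Flow against Google.
// Endpoints are real; helpers are injected so the flow is testable.
const DEVICE_CODE_URL = 'https://oauth2.googleapis.com/device/code';
const TOKEN_URL = 'https://oauth2.googleapis.com/token';

async function deviceFlow(clientId, scope, httpPost, sleep) {
  // Step 1: ask Google for a device code and a short user code.
  const dc = await httpPost(DEVICE_CODE_URL, { client_id: clientId, scope });
  console.log(`Visit ${dc.verification_url} and enter code ${dc.user_code}`);

  // Step 2: poll the token endpoint until the user approves in a browser.
  for (;;) {
    const res = await httpPost(TOKEN_URL, {
      client_id: clientId,
      device_code: dc.device_code,
      grant_type: 'urn:ietf:params:oauth:grant-type:device_code',
    });
    if (res.access_token) return res; // token never leaves this machine
    if (res.error !== 'authorization_pending') throw new Error(res.error);
    await sleep(dc.interval * 1000); // wait the interval Google asked for
  }
}
```

The part that matters for the architecture: both requests go from your machine to Google. There's no ChatShuttle server in the loop to observe or proxy the token.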
The Chrome Web Store Validation
When I submitted to the Chrome Web Store, the first review question was:
"Does your extension execute code fetched from a remote server?"
The answer was easy: No.
Chrome's MV3 (Manifest V3) policy forbids remotely hosted code. If you load JavaScript from a CDN at runtime, you're flagged. If you dynamically evaluate code from the network, you're rejected.
I'd already designed around this without knowing it was a requirement. The architecture fit the constraint.
That's when I knew the design was right.
What You Can Verify
Here's what you can verify yourself:
- Drive folder name: Look in your Google Drive after importing a chat. You'll find a folder called ChatShuttle_Memories. I don't hide it. It's yours.
- Index file generation: The index.voy file is built in your browser using WebAssembly. Open Chrome DevTools during import — you'll see the embedding model load locally, not from a server.
- Permission scope: Check the Chrome Web Store listing. The actual scope is drive.file, not drive (full access). The extension only touches its own files.
- MV3 bundling: All JavaScript is bundled at build time. There's no fetch() pulling code from a CDN. Google's review confirmed this.
- Device Authorization Flow: When ChatShuttle's local companion authenticates, it uses Google's device flow. The token goes directly to your machine. I don't proxy credentials.
- Delete test: Want to verify the "no backup" claim? Delete the ChatShuttle_Memories folder from your Drive. Your data is gone. If I had a copy, I could restore it. I can't.
I don't run a server, so there's no backend of mine to breach. I don't store your data, so I can't sell it. I don't need hosting, so I can charge a one-time fee instead of a subscription.
Local-first isn't just a privacy feature. It's an operational choice that simplifies the entire business.
Now the architecture is in place: browser writes, Drive stores, local machine reads.
So I went back to deal with my ChatGPT chat history: how do you actually restore a conversation into a different AI? What format do you "shuttle" it in? How do you keep all the context without hitting a context window limit?
If you've read this far, you already saw my answer: PDF. I was surprised myself when I realized it was the best option. I'll explain why in the next article.
Want to see this architecture in practice? The documentation shows how import, sync, and restore work end to end. Or read the export guide for a step-by-step walkthrough of how conversations move through this pipeline.