ChatShuttle Series · Part 1 of 8

Why Did I Get Sick of "AI Shuffle"?

Feb 3, 2026

This series documents the entire process of building ChatShuttle from scratch as a one-person company. From the first pain point to the architecture, product decisions, pricing, and everything I learned along the way.

I'm writing this because I wish someone had written it for me. Not a growth-hack playbook. Not "how I made $10K in 30 days." Just a real record of what it takes to build something useful, alone.


The "AI Shuffle" Problem

I use three AI models daily. ChatGPT for planning and voice. Claude for code and analysis. Gemini for certain creative tasks and web search.

Each model has strengths. None of them is the best at everything. So I switch, constantly.

Here's the problem: every time I switch, I start from zero. The new AI doesn't know my project. Doesn't know my constraints. Doesn't know what I've already tried.

I was copying and pasting context between windows like it was 2005. "Here's what we discussed earlier..." "For context, the architecture is..." "I already tried X, so don't suggest it."

I called this "AI Shuffle." You use multiple AIs but lose context every time you switch. The conversations are siloed. The knowledge doesn't transfer.


The Turning Point

In late 2025, Google announced a big upgrade to Gemini, and the Pro plan was on sale for $99. I wanted to try it for coding tasks, which meant moving part of my workflow out of ChatGPT.

But here was the problem:

All my work context was in ChatGPT. Months of conversations. Project decisions. Uploaded screenshots. The entire reasoning chain for three different side projects. My AI chat history was locked inside one platform.

Switching wasn't the hard part. Losing the context was. That's vendor lock-in, except nobody signed a contract. You locked yourself in by accumulating context.


The Export

I went to ChatGPT Settings → Data Controls → Export. A few hours later, I got a ZIP file.

Here's what was inside:

  • conversations.json — 135+ MB, every thread I'd ever had
  • shared_conversations.json — shared links
  • model_feedback.json — rating data
  • chat.html — a viewer file (barely functional)
  • Actual image files — screenshots, DALL-E outputs, uploaded diagrams

135 megabytes of my thinking. And no way to use it in Claude, Gemini, or anywhere else. (I later wrote a complete guide to exporting ChatGPT conversations — including what the native export misses and how to keep your images.)
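To get a feel for what's in that ZIP, you can inspect it with a few lines of Python. This is a minimal sketch, not ChatShuttle's actual code: it assumes `conversations.json` is a list of conversation objects with a `title` field, which matches the export format at the time of writing but may change. The demo builds a tiny synthetic ZIP because a real export is too large to embed here.

```python
import io
import json
import zipfile

def summarize_export(zip_bytes: bytes) -> dict:
    """Summarize a ChatGPT export ZIP: file count and thread titles.

    Assumes conversations.json is a JSON array of objects with a
    'title' key -- verify against your own dump before relying on it.
    """
    with zipfile.ZipFile(io.BytesIO(zip_bytes)) as zf:
        names = zf.namelist()
        with zf.open("conversations.json") as f:
            conversations = json.load(f)
    return {
        "files": len(names),
        "threads": len(conversations),
        "titles": [c.get("title") for c in conversations],
    }

# Demo with a tiny synthetic export (a real one can run to hundreds of MB).
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr(
        "conversations.json",
        json.dumps([{"title": "API design notes"},
                    {"title": "Debugging session"}]),
    )

summary = summarize_export(buf.getvalue())
print(summary["threads"], summary["titles"])
```

Even this much is enough to see why the bundled `chat.html` viewer struggles: everything lives in one giant JSON array, with images referenced separately.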


Why I Started Building

I searched for tools that could help. Found a few Chrome extensions that could "export" chats. Most just converted them to text files. None preserved images. None could restore into a new AI.

The gap was clear:

  1. Context limits: Long threads hit context window walls. You can't just paste 50 messages into a new chat.
  2. Switching breaks threads: Even if you summarize, the new model doesn't know your decisions, your tone, your constraints.
  3. Different models excel at different things: I wanted Gemini for research, Claude for code review, ChatGPT for voice. But I needed one continuous thread.

No tool did all three. So I started building one.


The First Prototype

I built the first version in about two weeks. A Chrome extension that could:

  1. Parse the ChatGPT export ZIP
  2. Extract conversations with images
  3. Package them into a format another AI could read
  4. Restore the conversation in Gemini's interface
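Steps 3 and 4 boil down to serializing the messages into a prompt the next model can consume. Here is a minimal sketch of that packaging step, under two simplifying assumptions: messages arrive as a flat `[{"role": ..., "content": ...}]` list (the real export uses a tree-shaped `mapping`), and images are ignored. The function name and transcript format are illustrative, not ChatShuttle's actual implementation.

```python
def package_for_restore(title: str, messages: list[dict]) -> str:
    """Render a conversation as a plain-text transcript that can be
    pasted or injected as the first message in another AI's chat.

    `messages` is assumed to be a flat list of {"role", "content"}
    dicts -- a simplification of ChatGPT's tree-shaped export format.
    """
    lines = [
        f"The following is a prior conversation titled {title!r}.",
        "Continue it as the assistant, keeping all decisions and "
        "constraints already established.",
        "",
    ]
    for msg in messages:
        lines.append(f"[{msg['role'].upper()}]")
        lines.append(msg["content"])
        lines.append("")
    return "\n".join(lines)

transcript = package_for_restore("Side project planning", [
    {"role": "user", "content": "We decided on SQLite, not Postgres."},
    {"role": "assistant", "content": "Noted. SQLite it is."},
])
print(transcript)
```

The preamble matters as much as the messages: without the "keep all decisions already established" instruction, the receiving model tends to re-litigate choices the original thread had settled.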

It was rough. The restoration was fragile. The image handling was a mess. But it worked.

I moved a 40-message thread from ChatGPT to Gemini, including screenshots and code blocks. Gemini picked up where ChatGPT left off. For the first time, I had continuity.


What's Next

That prototype became ChatShuttle. The next article covers the first real architecture decision: why I chose Google Drive as the storage layer and refused to build a server.

If you want to see how the finished product works, the documentation walks through import, restore, and search step by step.