I built Codeblocks for myself because I wanted something more flexible—a chat client where I could control every aspect of how conversations work with AI.
The goal was to experiment with local LLMs and find novel ways they could help with my work. Along the way, it turned into a full desktop app with features I actually wanted: total privacy (everything on my machine), the ability to connect to any provider, smart context management that doesn't break long conversations, and deep customization.
While it's designed with local LLMs in mind, that's not a limitation—you can use any provider you like or mix and match as needed.
---
## Key Features
### Connect to Any LLM Provider
Use whatever works for you:
- OpenAI (GPT-4, GPT-4o, etc.)
- Anthropic (Claude 3.5 Sonnet, Opus, Haiku)
- Google Gemini
- xAI (Grok)
- OpenRouter (access to 100+ models)
- Local models: Ollama, LM Studio
- Any OpenAI-compatible API
Switch between providers whenever you want. Use different models for different tasks—GPT-4 for code, Claude for writing, local models when you want everything offline. Compare responses. No lock-in, just flexibility.
---
### Smart Context Management
Long conversations work the way they should—without falling apart or losing important details.
*How it works:*
- Recent messages stay fully intact
- Older content gets summarized automatically (you can use a cheaper model for this if you want)
- Images, code blocks, and attachments are handled intelligently
- Real-time visual indicator shows your token usage
*You get to choose the strategy:*
- Keep everything until you hit the limit
- Keep only recent messages
- Auto-summarize older messages
- Reference content without including the full text
The goal was to make it so you can chat through 100+ messages without thinking about token limits. The app figures out what to keep detailed, what to compress, and what to just reference.
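The summarize-older-messages strategy described above can be sketched in a few lines. This is an illustrative sketch, not Codeblocks' actual code: token counts are approximated as characters/4 (a real implementation would use the model's tokenizer), and `summarize` stands in for a call to a cheaper summarization model.

```typescript
// Sketch of a "keep recent intact, summarize older" context strategy.
// All names are hypothetical; token cost is a chars/4 approximation.

interface Message { role: "user" | "assistant"; content: string }

const estimateTokens = (text: string): number => Math.ceil(text.length / 4);

// Placeholder for a call to a (cheaper) summarization model.
const summarize = (msgs: Message[]): string =>
  `Summary of ${msgs.length} earlier message(s).`;

function buildContext(history: Message[], budget: number): Message[] {
  const recent: Message[] = [];
  let used = 0;
  // Walk backwards so the most recent messages are kept fully intact.
  for (let i = history.length - 1; i >= 0; i--) {
    const cost = estimateTokens(history[i].content);
    if (used + cost > budget) {
      // Everything older than this point is compressed into one summary.
      const older = history.slice(0, i + 1);
      return [{ role: "assistant", content: summarize(older) }, ...recent];
    }
    used += cost;
    recent.unshift(history[i]);
  }
  return recent; // whole history fits within the budget
}
```

The other strategies fall out of the same loop: "keep only recent" drops the summary message, and "keep everything until the limit" sets the budget to the model's full context window.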
---
### Conversation Workflows
*Forking:* See a message where you want to try a different direction? Just click it to branch off a new conversation from that point. The original stays intact, and you can explore freely.
*Regeneration:* Not happy with a response? Regenerate it—with the same model or switch to a different one. All versions stick around, so you can flip between them and see what works best.
*Multiple chats:* Keep several conversations going at once. Quick navigation through the sidebar makes it easy to jump between them.
*Model Control:* Configure which model to use per chat, set custom system prompts, and adjust temperature settings. Each conversation can have its own unique setup.
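The forking workflow above might be modeled roughly like this — a hypothetical data shape for illustration, not the app's actual schema. The key property is that the fork copies history up to the chosen message, so later messages stay only in the original.

```typescript
// Illustrative sketch of message forking: branch a new chat from an
// existing message while leaving the original untouched.

interface ChatMessage { id: string; content: string }

interface Chat {
  id: string;
  title: string;
  messages: ChatMessage[];
  forkedFrom?: { chatId: string; messageId: string };
}

function forkChat(source: Chat, messageId: string, newId: string): Chat {
  const idx = source.messages.findIndex((m) => m.id === messageId);
  if (idx === -1) throw new Error(`message ${messageId} not found`);
  return {
    id: newId,
    title: `${source.title} (fork)`,
    // Copy history up to and including the fork point.
    messages: source.messages.slice(0, idx + 1).map((m) => ({ ...m })),
    // Keep a back-reference so the UI can show where the branch came from.
    forkedFrom: { chatId: source.id, messageId },
  };
}
```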
---
### Rich Text Editor
Powered by Lexical (Meta's extensible text editor framework):
- Full markdown support with live preview
- Syntax highlighting for code
- Drag-and-drop for images
Code blocks get syntax highlighting and one-click copying. Works great in both light and dark themes.
---
### Search & Organization
*Search:* Find anything instantly—across all your conversations, message versions, tags, everything. Results show up as you type.
*Tags:* Create custom tags with colors, throw multiple tags on each chat, filter by whatever you need. You can see which tags you're using most too.
*Filters:* Mix search with tags to zero in on exactly what you're looking for.
---
### Keyboard Shortcuts
Stay productive with hotkeys:

| Shortcut | Action           |
| -------- | ---------------- |
| `⌘N`     | New chat         |
| `⌘/`     | Focus chat input |
| `⌘⇧S`    | Toggle sidebar   |
| `⌘T`     | Tag manager      |
| `⌘F`     | Search           |
| `⌘↵`     | Send message     |
| `Esc`    | Stop generation  |
| `⌘,`     | Settings         |
| `?`      | Show help        |
---
### Privacy & Local Storage
Everything lives in `~/.codeblocks/` as JSON files on your computer. That's it. No cloud sync.
Your API keys? On your machine. Your conversations? On your machine. Want to go completely offline? Works perfectly with local models.
Backing up is as simple as copying the folder somewhere safe. The file format is readable JSON, so you're never locked in.
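Since chats are plain JSON on disk, a manual Markdown export is a short script away even without an export UI. The message shape below is a guess at a typical schema, not the app's documented format.

```typescript
// Convert a stored chat (hypothetical schema) into a Markdown transcript.

interface StoredMessage { role: string; content: string }
interface StoredChat { title: string; messages: StoredMessage[] }

function chatToMarkdown(chat: StoredChat): string {
  const lines = [`# ${chat.title}`, ""];
  for (const m of chat.messages) {
    lines.push(`**${m.role}:**`, "", m.content, "");
  }
  return lines.join("\n");
}

// Usage (Node): read a chat file from ~/.codeblocks/ and convert it.
// import { readFileSync } from "node:fs";
// const chat = JSON.parse(readFileSync(pathToChat, "utf8")) as StoredChat;
// console.log(chatToMarkdown(chat));
```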
---
### Vision Support
Works with vision-enabled models (GPT-4o, Claude 3.5 Sonnet, Gemini):
- Attach images to messages
- Smart context management for images
- Configurable image inclusion strategies
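For reference, here's roughly how an attached image reaches a vision model in the common OpenAI-style wire format — a sketch of the protocol shape, not the app's internal representation.

```typescript
// An image attachment typically travels as a base64 data URL inside a
// multimodal message with mixed text and image parts.

type ContentPart =
  | { type: "text"; text: string }
  | { type: "image_url"; image_url: { url: string } };

function userMessageWithImage(text: string, pngBase64: string) {
  const parts: ContentPart[] = [
    { type: "text", text },
    { type: "image_url", image_url: { url: `data:image/png;base64,${pngBase64}` } },
  ];
  return { role: "user" as const, content: parts };
}
```

Image inclusion strategies then reduce to deciding which of these parts to keep as a conversation grows, since image parts are by far the most expensive tokens in context.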
---
### Customization
- Light/dark/auto themes
- Custom colors and styling
- Per-model temperature and context settings
- Custom system prompts per chat
- Configurable logging levels
---
## Tech Stack
Built with:
- *Electron 30* for desktop integration
- *React 18 + TypeScript* for the UI
- *Vite 5* for fast builds
- *Lexical* for the rich text editor
- *Vercel AI SDK* for unified LLM access
- *Tailwind CSS 4* for styling
- *Ant Design* components
Architecture: Local JSON storage, type-safe with Zod schemas, real-time streaming support, clean IPC separation.
---
## Current Limitations
It's an alpha, so there are definitely rough edges:
- Mostly tested on macOS, so Windows/Linux might be quirky
- Schema changes might require manually migrating your data
- Error handling could be better in some spots
- No export/import UI yet (just copy the files manually for now)
- Everything's in English only
Basically, it works well for me, but your mileage may vary. Feel free to report issues as you find them.
---
## What's Next
Planning to add:
- *Custom tools from UI* - Define your own tools that LLMs can call, directly from the interface
- *Integrations from UI* - Connect to external services and APIs through a simple UI (no code required)
- *Custom tools (programmatic)* - Write your own tools with full control for advanced use cases
- *Chat folders* - Hierarchical organization with drag-and-drop
- *Export/import* - Markdown export, backup automation
- *Analytics* - Token tracking, cost estimates, usage stats
The idea is to make customization accessible—whether you want to click through a UI or write code, both options will be there.
---
## Feedback Welcome
This is an early alpha—still figuring things out. If you try it and have thoughts on what works, what doesn't, or what you'd like to see, please let me know!
Your input helps shape where this goes next.
---
## Who This Is For
Honestly? I built this for myself to experiment with local LLMs and try out ideas for how AI could help with my work. But it might be useful if you're:
- Someone who wants *full control* over how your AI conversations work
- *Experimenting with local models* and want a solid interface for them
- Looking to *switch between providers* or compare different models
- Into *privacy* and want everything on your machine
- The kind of person who likes to *customize things* and tweak settings
The flexibility is the whole point—use it however makes sense for you.
---
## Core Ideas
What I was going for:
1. *Your data stays yours* - Everything on your machine, under your control
2. *No lock-in* - Use any provider, switch whenever
3. *Context that works* - Smart management so long conversations don't break
4. *Flexibility* - Fork conversations, compare models, customize everything
5. *Built for local LLMs* - But works with anything
The whole project is about having options and being able to experiment freely.