Show HN: Code+=AI – build AI webapps in minutes by having an LLM complete tickets
The goal is to fix common frustrations of coding with AI: irrelevant changes sneaking in, messy copy-and-paste between ChatGPT and your editor, and no quick way to preview what you're working on.
3min demo video: https://codeplusequalsai.com/static/space.mp4
The main problem I'm solving is that LLMs still kinda suck at modifying code. Writing new code is smoother, but modifying code is far more common and a lot harder for LLMs. The key insight is that we don't modify code directly. Instead, Code+=AI parses your source file into an AST (Abstract Syntax Tree), has the LLM write code that modifies that AST, and then regenerates your source from the modified tree. I wrote a blog post with more detail on how this works: https://codeplusequalsai.com/static/blog/prompting_llms_to_m...
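To make that concrete, here's a minimal sketch of the round-trip using Python's stdlib ast module. The rename transform is a made-up stand-in for whatever code the LLM actually writes against the tree; this is not Code+=AI's real pipeline:

    import ast

    # Parse the file into a tree, apply a transformation, and emit
    # source from the modified tree. Renaming a function definition
    # stands in for the code the LLM would write against the AST.
    source = open("app.py").read()
    tree = ast.parse(source)

    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef) and node.name == "old_handler":
            node.name = "new_handler"  # call sites would need the same rename

    # ast.unparse requires Python 3.9+; comments and formatting are lost.
    print(ast.unparse(tree))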
The system is set up like a Jira-style kanban board with tickets for the AI to complete. You can write the tickets yourself or have the LLM write them for you - all you need is a project description. Each ticket operates on a single file, however; for changes spanning multiple files, the LLM (gpt-4.1-mini by default) can Generate Subtasks to accomplish the full task, as sketched below.
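For illustration, a ticket might look something like this; the field names are hypothetical, not Code+=AI's actual schema:

    ticket = {
        "title": "Add a /stats page",
        "file": "app.py",  # each ticket touches exactly one file
        "status": "todo",
        "subtasks": [],
    }

    # "Generate Subtasks" would split a multi-file change into
    # single-file tickets like these:
    ticket["subtasks"] = [
        {"title": "Add stats query helper", "file": "db.py", "status": "todo"},
        {"title": "Add stats template", "file": "templates/stats.html", "status": "todo"},
    ]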
I also provide a code editor (it's Monaco, without any AI features like Copilot... yet) so you can make changes yourself. I have a strong feeling that good collaborative tools will win in the AI coding space, so I'm building for AI-human collaboration here too.
There is a preview iframe where you can see your webapp running.
This was a very heavy lift - I'll explain some of the architecture below. There is also very basic git support and database support (SQLite). You can't add a remote to your git repo yet, but you can export your files (including the .git directory).
The architecture is the fun part. Each project you create gets its own Docker container where gunicorn runs your Python/Flask app; these containers run on dedicated Docker hosts. All AI work is done via OpenAI calls. The iframe preview of your project is proxied and routed to the container where your gunicorn/Flask app is running. Within your project, the LLM can write a webapp that itself makes calls to OpenAI - those requests are proxied too, so I can track token usage and not run afoul of OpenAI (it's not bring-your-own-key).
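For a rough picture of that proxy layer, here's a toy Flask sketch. The routes, names, and routing table are all hypothetical, but the shape matches the description: preview traffic is forwarded to the project's container, and OpenAI calls go through the platform so the key stays server-side and usage can be metered:

    import requests
    from flask import Flask, Response, request

    app = Flask(__name__)

    # Illustrative routing table; the real mapping, auth, and billing
    # live in Code+=AI's infrastructure.
    CONTAINERS = {"proj-123": "http://10.0.0.7:8000"}

    def record_tokens(project_id, n):
        # Stub: the real system would persist usage for billing.
        print(f"{project_id}: +{n} tokens")

    @app.route("/preview/<project_id>/<path:path>", methods=["GET", "POST"])
    def preview(project_id, path):
        # Forward the iframe's request to the project's gunicorn container.
        resp = requests.request(
            request.method,
            f"{CONTAINERS[project_id]}/{path}",
            headers={k: v for k, v in request.headers if k.lower() != "host"},
            data=request.get_data(),
            timeout=30,
        )
        # A real proxy would also strip hop-by-hop headers here.
        return Response(resp.content, resp.status_code, resp.headers.items())

    @app.route("/openai/<project_id>/v1/chat/completions", methods=["POST"])
    def openai_proxy(project_id):
        # The generated webapp calls this instead of api.openai.com, so
        # the platform key never leaves the server and tokens are counted.
        resp = requests.post(
            "https://api.openai.com/v1/chat/completions",
            headers={"Authorization": "Bearer sk-..."},  # platform key placeholder
            json=request.get_json(),
            timeout=60,
        )
        usage = resp.json().get("usage") or {}
        record_tokens(project_id, usage.get("total_tokens", 0))
        return Response(resp.content, resp.status_code)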
My next goal is to let users publish their webapps to our Marketplace. Each time someone loads your webapp and it triggers an OpenAI call, the token cost for that call will be billed to them, with you as the project creator earning a margin on it. I'm building this now, but the marketplace isn't ready yet. Stay tuned.
Really big day for me and hoping for some feedback! Thanks!