Week Two: Side Coding
Building MealDeck — a game-like way to decide what’s for dinner — in short, focused bursts has been less about speed and more about learning how to work with an AI that’s quick, occasionally clumsy, and always in need of a steady hand.
Two weeks into building MealDeck in my spare time, I’ve settled into a rhythm — part curiosity, part experiment. I’m not chasing speed for its own sake, but I am interested in seeing how far I can get working alongside an AI for just eight hours a week.
MealDeck is a meal planning app with a small twist: instead of staring at a calendar and trying to fill in dinner slots, you play a swipe-based game that helps you choose meals and automatically builds your shopping list. It’s aimed at time-poor couples and families — the people who know “what’s for dinner?” is rarely a simple question.
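To make the idea concrete, here’s a minimal sketch of how the swipe mechanic could map onto a data model. The names (`MealCard`, `build_shopping_list`) and the ingredient lists are illustrative, not MealDeck’s actual code.

```python
from collections import Counter
from dataclasses import dataclass, field

@dataclass
class MealCard:
    # A candidate dinner the player swipes on.
    name: str
    ingredients: list[str] = field(default_factory=list)

def build_shopping_list(cards: list[MealCard], swipes: list[bool]) -> Counter:
    """Aggregate ingredients for every card swiped right (accepted)."""
    accepted = [card for card, liked in zip(cards, swipes) if liked]
    shopping = Counter()
    for card in accepted:
        shopping.update(card.ingredients)
    return shopping

# Example: swipe right on tacos and stir-fry, left on risotto.
cards = [
    MealCard("Tacos", ["tortillas", "beef mince", "onion"]),
    MealCard("Risotto", ["arborio rice", "stock", "parmesan"]),
    MealCard("Stir-fry", ["noodles", "onion", "soy sauce"]),
]
shopping = build_shopping_list(cards, [True, False, True])
# "onion" appears in both accepted meals, so its count is 2.
```

Using a `Counter` means ingredients shared across accepted meals merge into one shopping-list entry with a quantity, rather than duplicating.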
Working with an AI: useful, but not magic
The code is only as good as the attention you give it. Let the AI’s output pass unreviewed, and you end up with something a bit rubbish. Catch issues early, and you can steer it back into shape without much trouble.
Commits become checkpoints. With an AI producing code quickly, unstaged work feels disposable — and often is. Wiping the slate clean is often faster than trying to get the AI to endlessly rework a flawed approach.
The AI tends to fix problems by adding more code, which isn’t always the right answer. Reading through its changes is essential, not just for catching mistakes but for spotting when it has modelled the problem in an awkward way. Often the cleaner solution is to rethink the approach rather than bolt on another patch.
Its ability to orient itself in a codebase is genuinely useful — it can search, cross-reference and connect dots far faster than I can — but it’s still my job to decide whether what it’s found is meaningful. Where it struggles is with the small, less tangible adjustments. In MealDeck’s card-swiping mechanic, for example, it built a prototype quickly, but couldn’t resolve a subtle shadow flicker that made the animations feel off. The solution ended up being a more deliberate shadow effect — an aesthetic judgement call, not an algorithmic one.
The working cadence
Most sessions follow a similar loop:
- Review the product requirements and write an implementation plan.
- Execute the plan.
- Review the result for mistakes — sometimes I do this myself, sometimes I ask the AI to review its own work.
- Repeat as needed, committing whenever there’s something worth preserving.
- Test the new functionality and note defects.
Time spent refining the plan always pays off — this is where human judgement has the most influence over the outcome. I’ve also found it useful to give the AI permission to ask clarifying questions, and to set up simple validation steps so it can catch basic errors without my intervention.
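One way to set up those validation steps is a small script the AI can run after each change. This is a sketch, not the actual setup: the commands shown (`ruff`, `pytest`) are stand-ins for whatever linter and test runner a given project uses.

```python
import subprocess

# Commands the AI can run to self-check its work; swap in your
# project's real linter, type checker and test runner.
CHECKS = [
    ("lint", ["ruff", "check", "."]),
    ("tests", ["pytest", "-q"]),
]

def run_checks(checks=CHECKS) -> list[str]:
    """Run each check and return the names of the ones that failed."""
    failures = []
    for name, cmd in checks:
        result = subprocess.run(cmd, capture_output=True, text=True)
        if result.returncode != 0:
            failures.append(name)
    return failures
```

Wiring the returned list into an exit code gives the AI an unambiguous pass/fail signal, so it can catch basic errors before handing work back.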
Progress so far
Session one was exploratory — a quick prototype, a few screenshots, and some initial product requirements. Session two produced a working version of the swiping mechanic. Session three was about pruning old code and iterating on animations. By session four, all four planned core features were present, if still rough. Sessions five and six began work on the backend — not strictly necessary for a prototype, but useful for surfacing challenges early.
The work is progressing steadily. Each two-hour block makes a visible difference, even if some sessions start with fixing the less careful decisions made towards the end of the previous one.
Limits as a feature
My AI subscription has usage limits, and I often hit them about an hour and 45 minutes into a session. I’ve found this more useful than frustrating — it forces me to stop before I drift into unproductive tinkering.
The past fortnight has reinforced a simple point: AI can move fast, but speed doesn’t guarantee quality. It’s a capable partner for the mechanical work of producing code, but design decisions, prioritisation, and the subtle touches that make software feel right still come down to human judgement.