Today is our second day of deliberate practice in experimenting with and learning to use AI tooling for programming. Two weeks ago, I used Claude Code to start a completely new project from scratch. It took some serious wrangling, but the project ended up in an impressive state (though I haven't touched it since).

Importantly, that time got me more familiar with using Claude Code in the ways I was already using aider. Since then, I've been using Claude Code much more for my development work on Semble and have been pleased with the experience overall.

Just like with aider, I spend a good chunk of my time assembling the context and prompt: pulling files into the context window, drafting my instructions, pulling in reference materials and examples, and so on. In aider, specific files get added with the /add command; in Claude Code, they can be @-mentioned, so the flow is quite similar. The obvious benefit of Claude Code is that it can fill in some of the gaps itself by going through "reasoning" iterations and working backwards from build or test errors.

Today, I'm in the middle of finishing a Semble feature, so I'll apply today's learnings to that problem. I went back to my growing Semble collection and decided to check out an interesting-looking YouTube video that's essentially all about making the most of the limited context window:

Shout out to for being the first one to save it to Semble over a month ago!

Here are some highlights of the presentation:

I found one of his concluding remarks quite interesting: agentic tooling will become ubiquitous and extremely easy to use; the hard part is adapting workflows within teams. Our bi-monthly Fridays are our humble attempt at working through this transformation.

And a few related resources I found:

  • ai-that-works/2025-08-05-advanced-context-engineering-for-coding-agents:
    https://github.com/ai-that-works/ai-that-works/tree/main/2025-08-05-advanced-context-engineering-for-coding-agents

  • ai-that-works/2025-10-12-unconference-sf/dex-ralph-demo:
    https://github.com/ai-that-works/ai-that-works/tree/92efc00e695c496944a5832a9cb291baa64661f6/2025-10-12-unconference-sf/dex-ralph-demo

  • humanlayer/.claude/commands/research_codebase.md ("The best way to get AI coding agents to solve hard problems in complex codebases"):
    https://github.com/humanlayer/humanlayer/blob/main/.claude/commands/research_codebase.md

The talk covered a lot of ground, and it will take multiple practice sessions to soak it all in, but the main takeaway I want to work with today is iterating through the research, plan, and implementation phases while keeping the context window as small as possible (in the sweet spot of roughly 80k tokens). That is:

  • Research: navigate the codebase, assemble reference context (libraries I know I'll be using, the relevant portions of their documentation), and build an understanding of the current state of the codebase and where the new code will fit in.

  • Plan: using the output of the research phase (which may take a few iterations to produce), come up with the concrete changes to make. What specifically needs to change or be added? Will Claude Code know where to look? What does success look like? This takes iteration, too.

  • Implementation: given the plan, can Claude generate the correct code? Did anything get missed?

The specific problem I'm working on in Semble is interoperating with margin. I have the real-time interop working by listening for margin records on the firehose and mapping them to our internal representation of cards and collections. The next step is to backfill historical margin data. After looking through how margin did the equivalent for historical Semble data (btw, idk how the heck they shipped that feature so fast, massive props to them!), I've opted for a similar strategy: whenever a card gets added to Semble (whether by us or by margin), check whether that account's repo has been synced with our AppView. If not, fetch all of their at.margin.bookmark, at.margin.collection, and at.margin.collectionItem records and convert them to cards and collections.

One thing I learned is that an 80k-token context window feels quite small, because it's so easy to blow through. So I iteratively spent context compressing the key details into md files, so that when it came to the implementation phase, we could start with a fresh context window and feed in the most distilled, clear instructions possible. This meant using the /context command often to see just how much of a token profligate I was being.

One thing I'm practicing today is to spend more time reading through the code in the planning phase, to see if what is being suggested makes sense or if it needs any tweaks. And overall just spending more time refining prompts and meta-prompts.

That's it for today. The margin interop is still in progress, but it's coming along well!

Back in 2 weeks. 👋