
Dealing with Mr. Know-it-all

SW Engineer is now Technical Architect and Product Owner

I think every one of us has a friend, or acquaintance 😎, whom in Italian we call "signor so tutto io" (Mr. Know-it-all). It can be very annoying to see him or her jump into the conversation with a solution for everything, and on many occasions what this person recommends does not really make sense for the problem you are describing. But, annoying traits aside, you can observe the enthusiasm and how he relentlessly throws solutions at you, just because he likes to be acknowledged as someone who knows everything.

Well, I have the same relationship with AI. Most of the time I don't involve AI in a question to find a solution to a problem, but rather to get fascinated when it throws out some idea that is new to me (not necessarily a good one). I then take the chance to investigate and check: would that fit my problem? It's a variation of the "random page" function on Wikipedia, where I try to fish out at least one answer that is somehow in scope.

I've recently been using AI for a kind of "automated programming", with some specific guardrails. I'm pretty positive about the results so far, and I'm learning a lot. Since I believe we humans must seize every possible occasion to learn, and after so many articles explaining how software engineering is doomed and humans are now out of the loop, I want to describe how I stay in the loop and what my workflow looks like:

# CLAUDE.md

I use Claude Code, but I believe the main concepts here are applicable to any LLM agent of your choice.

The important part while creating the CLAUDE.md is to write as if you were the Technical Software Architect cooperating with an extremely pragmatic Product Owner.
Use this approach for describing and decomposing the milestones in your project.

I believe this is the real secret sauce of automated programming, and it's important to always approach the request to the LLM thinking in two dimensions: the Technical Architect (how to build it) and the Product Owner (what to build, and in which order).

Given these two main roles, you need to produce a properly refined initial CLAUDE.md file.
It may seem very tedious and boring, but in fact you are not required to spend too much time making it crystal clear yourself.
In my workflow, I use Gemini 3 Fast for this (just the simple free account).
I prepare a very drafty document describing the architecture of the software to produce, listing the functional requirements I want to include together with the non-functional ones.
In this preliminary draft I also mention which technologies I would use and why.
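To give an idea of the level of detail, a draft this rough is enough. The project, requirements, and technology choices below are entirely hypothetical, just an illustration of the shape of such a draft:

```markdown
# Draft: link-shortener service (hypothetical example)

## Architecture
- Single Go binary exposing an HTTP API, backed by Redis

## Functional requirements
- Create a short link, redirect, basic click stats

## Non-functional requirements
- p99 latency under 50 ms; single-node deployment is fine for v1

## Technology choices
- Go: static binary, team familiarity
- Redis: built-in TTL support for expiring links
```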

Then, in the same CLAUDE.md, you add your rules for milestone-based implementation. My typical example is the following (for a Go-based project):

### Implementation approach with AI

This project uses a milestone-driven development process. For each milestone, a set of unit tests should be implemented.
The structure of the code should resemble the following:

```
main.go              <-- main entrypoint
internal/...         <-- internal packages
internal/cmd         <-- internal package for command-line configuration
pkg/                 <-- externally exported packages, once they are stable
.golangci-lint.yaml  <-- configuration for golangci-lint
mise.toml            <-- mise configuration
ci/scripts/          <-- mise scripts
```

#### Workflow
1. Check `.junie/workflow/state/` for `*.completed` files
2. Find the first milestone in `.junie/workflow/milestones/` without a matching `.completed` file
3. Implement the milestone requirements
4. Run lint and tests with MISE. MISE should keep its scripts in the dedicated folder with the `+x` execution bit set, and the MISE toml file should stay as clean as possible
5. If tests pass, create `<milestone_name>.completed` in `.junie/workflow/state/` 
6. If tests fail, log error in `.junie/workflow/logs/session.log` and request human input

**Important**: Never re-process completed milestones. The `.completed` files are the source of truth.

Next, ask Gemini to review and refine the content of CLAUDE.md based on the provided "drafty" information. In 99% of cases the outcome is already in very good shape; review it properly, save it, and make it part of your CLAUDE guidelines.
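For completeness: the `mise.toml` and `ci/scripts/` pair from the layout above can stay tiny when the scripts carry the logic. A hypothetical sketch (task names and script paths are my own, not from a real project):

```toml
# Hypothetical mise.toml; the actual tasks depend on your project.
[tasks.lint]
description = "Run golangci-lint via the dedicated script"
run = "ci/scripts/lint.sh"

[tasks.test]
description = "Run unit tests with coverage"
run = "ci/scripts/test.sh"
```

With this in place, `mise run lint` and `mise run test` are the only commands the agent needs to remember.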

# Time To Learn, Part I

Don't forget to ask Gemini in the prompt to revalidate your technical architecture decisions.

Sometimes we see everything as a nail because we have only ever used a hammer.

And this is exactly one of the most important phases, where your Mr. Know-it-all will probably suggest something different.
Now you should stop and do some googling, research, and figure out whether what it suggests makes sense or not. If it's a technology you have never used, it's a very valuable moment to expand your skills and breadth of knowledge.
Now you are learning, thanks to that annoying guy.

# Bonus: ask CLAUDE.md to include proper manual testing

As part of the CLAUDE.md draft for Gemini, I also include the request to implement a system-testing structure, typically with Docker Compose, so I can easily spin up an environment, see my software running, and validate that it works as expected (really useful, especially if your software involves distributed consensus or real-time simulated traffic).
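As a sketch of what such a Docker Compose environment can look like (the service names and images are placeholders, not from a real project):

```yaml
# Hypothetical docker-compose.yaml for system testing.
services:
  myapp:
    build: .            # the software under development (initially a stub)
    depends_on:
      - redis
    ports:
      - "8080:8080"
  redis:
    image: redis:7-alpine
```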

# Milestones and the Product Owner dimension

Now you have a well-crafted CLAUDE.md file, and you need to start breaking down the implementation. I use the milestone approach (as you can see above).
You now put on the hat of the Product Owner: you know it will take multiple sprint deliverables, and you know how to break down the implementation into dependent blocks.

Again, here I usually draft a description of what I know I need to implement first, and then use Gemini to refine it into a proper milestone for Claude. The generated response from Gemini gets saved in the milestones folder as e.g. `001-boilerplate-and-testing-infrastructure`, and in Claude Code I ask:

Implement the milestone 001-boilerplate-and-testing-infrastructure

You will see quite a lot of code generated, hopefully in places that are familiar to you thanks to the implementation guidelines. At the end, Claude will also create the `.completed` file, and it usually writes a completion report in the body of that control file.

In most cases, the initial milestone actually builds all the boilerplate code, bootstraps the Docker Compose testing infrastructure, and validates that everything starts with just a stub mock of your software that is yet to exist.
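To make this concrete, a milestone file can look like the sketch below. All of the wording, tasks, and acceptance criteria are my own hypothetical illustration of the format, not content from a real project:

```markdown
# Milestone 001: boilerplate and testing infrastructure

## Goal
A compilable skeleton plus a Docker Compose stack that starts with a stub service.

## Tasks
- [ ] `main.go` entrypoint and `internal/cmd` flag parsing
- [ ] golangci-lint and mise tasks wired up
- [ ] Docker Compose stack with a health-checked stub service

## Done when
- `mise run lint` and `mise run test` pass
- `docker compose up` brings the stub service to healthy
```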

# Time to learn, Part II: refactor

AKA The most important phase of all

You have already saved an enormous amount of time by having Mr. Know-it-all throw his view of the code at you.
But even if everything works, not all of the generated code is actually properly thought through. In the end, an LLM is just a very quick search-copy-paste-make-it-work pass over StackOverflow.

For this reason, the human touch is essential.
After a few milestones, typically 3 or 4, and once you have all the tests passing, it's the moment to learn by refactoring.
Indeed, it's essential IMO to "curb your enthusiasm" and instead analyze the code and refactor it in a way that makes it digestible to you.

In my experience, at this stage I typically introduce new guardrails or libraries that make the code more maintainable also for humans (e.g. the great samber/lo in Go), or consolidate duplication that the AI introduced to get things done.
You will be surprised how much you can still do with LLM-generated code, and it will be fun and educational to move things around, optimize for maintainability, and introduce better libraries that only an experienced software engineer knows will help to scale.

You might think this is a waste of time.
To me, this is the most valuable moment of dealing with Mr. Know-it-all.

It's like a code review where you are showing the LLM, by example, how you want things to be structured and which maintainability patterns you have learnt in your extensive experience.
This will let you understand how the implementation works, and it will build a more structured and solid foundation for the next iterations with the LLM. Claude Code, indeed, tends to consider and align with pre-existing code patterns.
So it's better to steer the implementation along the way.

# Should I skip this last part? I want my MVP

NO, never. We know the world is built on top of MVPs.
Plus, if you really are a software engineer, this is probably the most joyful part.

"Yeah, but we can ask AI to do this too; what's the point of keeping us in the loop?"

My relationship with AI is very critical.
I always take a very cautious standpoint: I believe AI is an accelerator/multiplier for skills and mindset you already have. It basically types faster, searches faster, and validates faster the things you would still do yourself at your own pace.

If you don't have structure or a proper software engineering mindset, your lack of skills will get amplified, and you will probably produce something that at best works on the Raspberry Pi in your home lab, but that's it.

Additionally, AI is a special bubble. Yes, I'm an AI-bubble believer, in the sense that I believe we are at a moment in time when companies stand right around the corner, like friendly pushers, giving us the best pills for free.

But this won't last. The economics and sustainability math are telling us this more and more clearly (by the way, remember the zero-carbon-footprint initiatives? Completely disappeared, because they bothered our pushers...).

At some point, they are going to ask you to pay the actual value of using an LLM. And it will cost a lot; more than we can pay, even as a structured company.

In some (few) cases we will pay; in other cases we will not be able to afford the actual future costs of an LLM. If you don't get a grip today on the assets you generate with LLMs, you will find yourself in a miserable corner later.

# FAQ

Why do you use Gemini for writing specs and Claude Code for implementation?

There is no real science behind this decision, but my experience gives me the feeling that Gemini is very good at proofreading natural language and at challenging decisions, perhaps thanks to the more search-engine-flavored domain of knowledge coming from Google.

It has stunned me multiple times with ideas and recommendations that other LLMs had not considered.

On the other hand, I see Claude Code as smarter at code implementation, with special consideration for the current state of the code and a tendency to stick with the adopted implementation patterns.