daita@system:~$ cat ./be_the_idiot.md

Be the Idiot: Why Asking Stupid Questions Is a Senior Engineer Move

Created: 2026-03-22 | Size: 9559 bytes

TL;DR

The most expensive thing in engineering isn't looking stupid in a meeting - it's building the wrong thing because nobody asked a clarifying question. luminousmen argues that senior engineers should behave like military pilots: dry, precise, annoyingly specific. Four questions ("What happens when it fails?", "How do we know it's working?", "What does done look like?", and "Can you show me an example?") prevent more production incidents than any amount of clever architecture.

The Cost of Sounding Smart

Here's a pattern every experienced engineer has seen: someone describes a new system in a meeting. It's vague in all the important places: which schema? what happens to existing consumers? what's the rollback plan? The room nods along. Nobody asks because nobody wants to be the one who "doesn't get it."

Then three weeks later, the team discovers they built the wrong thing.

The author's thesis is blunt: most engineering failures happen because someone was afraid to look stupid. Not because the technology was wrong. Not because the architecture was bad. Because the humans in the room prioritized appearing competent over achieving shared understanding.

This isn't a junior engineer problem. Juniors ask questions because they genuinely don't know. It's mid-level and senior engineers who fall into the trap - they have enough context to follow along, but not enough to catch the ambiguity. And they stay quiet because asking feels like admitting a gap.

Military-Grade Communication

The military has spent a century optimizing communication protocols where misunderstandings kill people. Two principles translate directly to engineering:

Principle of Duplication of Information: Repeat critical information in multiple forms. In code, this means types, docstrings, explicit parameter names, and tests all saying the same thing. The next reader should be able to reconstruct intent from any single channel. If your function signature says one thing and your docstring says another, you've created exactly the kind of ambiguity that breeds bugs.
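As a sketch of what duplication looks like in code (Python here; the function and its contract are hypothetical, not from the article):

```python
def send_alert(message: str, max_retries: int = 3) -> bool:
    """Send an alert, retrying up to max_retries times.

    Returns True on delivery, False once retries are exhausted.
    The type hints, this docstring, and the test below all state
    the same contract - three channels, one intent.
    """
    for _ in range(max_retries):
        if _deliver(message):
            return True
    return False

def _deliver(message: str) -> bool:
    """Stand-in transport; a real system would call an API here."""
    return bool(message)  # hypothetical rule: empty messages fail

def test_send_alert() -> None:
    # The test repeats the contract in a third form.
    assert send_alert("disk full") is True
    assert send_alert("") is False
```

A reader who sees only the signature, only the docstring, or only the test reconstructs the same intent - which is the whole point of the principle.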

Principle of Additional Information: Choose words and constructions that exclude misinterpretation. "We'll handle it later" is ambiguous. "We'll add retry logic in the error handler before the v2.3 release, tracked in JIRA-4521" is not. Being annoyingly specific costs nothing. Being ambiguously brief costs rework.

Aviation makes this concrete with the read-back: a pilot repeats a clearance back to the controller, word for word, before acting on it. The read-back isn't because the receiver is stupid. It's because confirming mutual understanding matters more than sounding smart. Senior engineers who insist on the same habit - repeating a requirement back in their own words - aren't being pedantic. They've seen what happens when you don't.

Four Questions That Prevent Disasters

The author shares a mental checklist. These aren't clever questions. They're deliberately simple - and that's the point.

"What happens when it fails?" Not if - when. Every system fails. The author shares a war story: a team designed an entire data pipeline assuming S3 would have 100% uptime. When S3 had a 4-hour outage in 2017, their entire data platform went dark. Nobody had asked the stupid question: "What if S3 is down?" This one question surfaces missing error handling, missing monitoring, and missing rollback plans.
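Asking "what if the store is down?" at design time usually turns into a fallback path. A minimal sketch, assuming Python - the exception and function names are hypothetical, and a dict stands in for a real cache:

```python
from typing import Callable, Dict

class StoreUnavailable(Exception):
    """Raised when the primary object store (think S3) is down."""

def fetch_with_fallback(
    key: str,
    primary_get: Callable[[str], bytes],
    cache: Dict[str, bytes],
) -> bytes:
    """Degrade to the last known good copy instead of going dark."""
    try:
        value = primary_get(key)
        cache[key] = value        # refresh the fallback copy
        return value
    except StoreUnavailable:
        if key in cache:
            return cache[key]     # stale but serving
        raise                     # no copy anywhere: surface the outage
```

The interesting part isn't the code - it's that writing it forces the team to decide, in advance, what "degraded" means for their pipeline.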

"How do we know it's working?" If there's no observable signal that the system is healthy, you're flying blind. This forces the conversation toward monitoring, alerting, and observability before the code is written - not as an afterthought.
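One cheap answer to "how do we know it's working?" is a heartbeat: record the last successful run and flag staleness. A minimal sketch in Python - the class and the five-minute threshold are illustrative, not from the article:

```python
import time
from typing import Optional

class Heartbeat:
    """Minimal health signal: when did this system last succeed?"""

    def __init__(self, max_gap_seconds: float = 300.0) -> None:
        self.max_gap = max_gap_seconds  # hypothetical staleness threshold
        self.last_ok: Optional[float] = None

    def beat(self) -> None:
        """Call after every successful run."""
        self.last_ok = time.monotonic()

    def healthy(self) -> bool:
        """False if the system has never succeeded, or has gone quiet."""
        if self.last_ok is None:
            return False
        return time.monotonic() - self.last_ok <= self.max_gap
```

An alert wired to `healthy()` is crude, but it exists before launch - which beats a sophisticated dashboard added after the first silent failure.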

"What does 'done' look like?" The number of projects where team members have different mental models of completion is staggering. One person thinks "done" means merged to main. Another thinks it means deployed. Another thinks it means validated in production for a week. Ask this early.

"Can you show me an example?" Abstract descriptions hide disagreement. Concrete examples surface it. When someone says "we'll normalize the data," asking for an example input and output reveals whether you're talking about the same transformation.
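The "normalize" trap is easy to demonstrate. Two engineers can agree on the word and mean different transformations - a quick Python illustration (both functions are generic textbook definitions, not from the article):

```python
def normalize_minmax(xs: list) -> list:
    """One reading: rescale values into the range [0, 1]."""
    lo, hi = min(xs), max(xs)
    return [(x - lo) / (hi - lo) for x in xs]

def normalize_sum(xs: list) -> list:
    """Another reading: rescale values so they sum to 1."""
    total = sum(xs)
    return [x / total for x in xs]

sample = [2.0, 4.0, 6.0]
# normalize_minmax(sample) -> [0.0, 0.5, 1.0]
# normalize_sum(sample)    -> [1/6, 1/3, 1/2]
```

Same word, same input, different outputs. One concrete input/output pair in the kickoff settles which one was meant.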

Precision Is Not Rudeness

There's a critical framing point here. "Why are we doing this idiotic thing?" and "Is there a particular reason we're using X?" get to the same information, but one assumes the other person is wrong and the other assumes you're missing context.

The author's stance: assume you're missing context, not that everyone else is wrong. Most of the time, the answer reveals something you genuinely didn't know. And when it doesn't, the question triggers a conversation that improves the design. Either way, you win.

This matters especially in cross-functional settings. When talking to product managers, designers, or stakeholders, engineering precision can sound like obstruction if it's not framed as genuine curiosity. The goal is shared understanding, not winning the argument.

The Connection to Decision Records

This philosophy pairs naturally with Architecture Decision Records. ADRs formalize the output of exactly these kinds of clarifying conversations - capturing the context, the options considered, and the reasoning behind a choice. If "be the idiot" is the practice, ADRs are the artifact.

Similarly, the emphasis on shared understanding and over-communication is the human foundation that makes good team management possible. You can't manage a team effectively if half the room is nodding along without actually agreeing on what's being built.

Be the Idiot With Your AI Too

Everything in this article applies double when your teammate is an LLM.

A human colleague might push back on a vague request. They might say "wait, which schema?" or "that doesn't sound right." An AI won't. Give it an ambiguous prompt and it will confidently, eloquently build exactly the wrong thing. It won't ask clarifying questions. It won't flag that your description of "done" is underspecified. It will just go.

This makes the four questions even more critical:

  • "What happens when it fails?" - AI-generated code loves the happy path. If you don't explicitly describe error cases, you'll get code that works perfectly in demos and breaks in production. Tell the AI what should happen when the database is down, when the input is malformed, when the API returns a 429.
  • "What does done look like?" - "Build me an auth system" will get you something. Whether it's what you needed depends entirely on whether you specified: OAuth or email/password? Session tokens or JWTs? What permissions model? The AI will cheerfully fill in every gap with plausible defaults that may be completely wrong for your context.
  • "Can you show me an example?" - Before asking an AI to build a data transformation, show it a sample input and expected output. This single act eliminates more misunderstandings than paragraphs of description.
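One way to practice the example-first habit is to write the sample input/output pair as an executable check before prompting anyone, human or AI. A Python sketch - the field names and the transformation rules are hypothetical:

```python
# Example-first spec: pin the transformation down with a concrete
# input/output pair, then ask for an implementation that satisfies it.
SAMPLE_IN = {"user": "ada", "ts": "2017-02-28T17:37:00Z", "amount_cents": 1250}
SAMPLE_OUT = {"user": "ada", "date": "2017-02-28", "amount": 12.50}

def transform(record: dict) -> dict:
    """An implementation the example pair demands."""
    return {
        "user": record["user"],
        "date": record["ts"][:10],          # keep only the date part
        "amount": record["amount_cents"] / 100,
    }

assert transform(SAMPLE_IN) == SAMPLE_OUT
```

The pair doubles as a regression test, and any gap between what you meant and what got built fails loudly instead of shipping quietly.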

The trap is that AI output looks competent. It's well-structured, well-commented, and passes a cursory review. This makes it harder to catch when the underlying assumptions are wrong. With a human teammate, a wrong assumption often surfaces as confusion or hesitation. With AI, it surfaces as a clean pull request that silently does the wrong thing.

The Principle of Additional Information is really a principle of good prompting: be annoyingly specific, exclude misinterpretation, state your constraints explicitly. The engineers who get the most out of AI tools are the ones willing to be the idiot in their prompts - spelling out what seems obvious, over-specifying what "good" looks like, and asking the AI to confirm its understanding before writing code.

The Math of Being the Idiot

Here's a back-of-the-napkin calculation. A team of five engineers spends three weeks building the wrong thing because nobody asked a clarifying question in the kickoff. That's 15 engineer-weeks of wasted work, plus the rework to build the right thing, plus the opportunity cost of what they could have shipped instead. Conservatively, that's 20+ engineer-weeks gone.

The cost of asking the "stupid" question? Five minutes of awkwardness in a meeting.

The ROI on being the idiot is absurd. Even if only one in ten clarifying questions catches a real misunderstanding, the expected value is overwhelmingly positive. The other nine times, you either learn something or confirm alignment - neither of which is a waste.
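The arithmetic above runs in a few lines (using the article's illustrative figures, not measured data):

```python
# Back-of-the-napkin expected value, in engineer-weeks.
wasted_weeks = 5 * 3                     # five engineers, three weeks
question_cost = 5 / (40 * 60)            # five minutes vs a 40-hour week
hit_rate = 0.10                          # one in ten questions catches a miss

expected_savings = hit_rate * wasted_weeks   # 1.5 engineer-weeks per question
roi = expected_savings / question_cost       # savings per week spent asking
```

Even with the hit rate set pessimistically low, the ratio is lopsided enough that the exact numbers barely matter.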

The Idiot's Creed

The best engineering teams don't just tolerate clarifying questions - they expect them. Over-communicate. Repeat things back. Draw diagrams. Show examples. Ask "what does done look like?" in every kickoff.

Your job isn't to sound smart in meetings. It's to build the right thing. And the fastest path to building the right thing is being willing to be the idiot who asks the obvious question that everyone else was afraid to ask.


References

  1. Be the Idiot - luminousmen (original source)
  2. A No-Bullshit ADR Framework for Teams That Actually Ship - Daita blog
  3. Good Team Management Makes Product Management Less Painful - Daita blog

daita@system:~$ _