People of Product
#168: Vibe Coding In The Real World (Ft. Crema's Senior Devs)

A conversation with 2 senior engineers who went from skeptical to cautiously optimistic

Description: There’s a particular kind of honesty that emerges when you ask qualified developers what they actually think about AI coding tools, away from the conference keynotes and LinkedIn thought leadership. It’s messy, contradictory, and far more interesting than either the utopian or dystopian narratives we’ve been fed for the past two years.

On this podcast episode, I sat down with Ross Brown and Deric Mendez, two senior software engineers at my company, Crema, to talk about their experience with “vibe coding,” the catch-all term for using AI to generate, debug, and build software.

The mental image of a codebase that looks like a messy apartment after an endless party captures something essential about where we are with AI coding tools right now. We know the promise. The productivity gains are tangible. But the gap between what these tools can do and what we think they can do is wider than most people realize. Let’s dive into it - with the pros!


There’s a scary productivity / learning tradeoff

Anyone who’s spent much time with these tools has likely seen the tradeoff:

“When you have it [AI] construct something for you, you’re offloading your labor, but you’re also offloading your learning—or really sacrificing that for productivity.”

This isn’t just a philosophical concern; it’s a practical one. When Deric inherited my 125-hour vibe coding project, he started over from scratch. Not only because the code was unusable (though it kind of was), but because understanding what the AI had built would have taken longer than building it himself, and he wouldn’t have learned anything in the process.

Think about that for a second. The efficiency gain disappeared the moment the code needed to be maintained, extended, or understood by someone who didn’t watch it being generated line by line. If we’re not careful, we’ll optimize ourselves out of understanding the things we build.

What works

  • Documentation generation: Deric pointed his AI at a messy codebase and asked, “What is this project doing?” It gave him a clean breakdown and even formatted it into user stories. “I don’t have to go find and sift through all the files to see what it really needed,” he said. This is a legitimate time-saver.

  • Debugging partner: Ross described it as a “debug buddy”—helpful for expediting paths you’d probably go down anyway. Not replacing expertise, but accelerating it. “It’s helpful because it expedites some paths that I probably would go down eventually, but saves a lot of time,” he said. Critically: “I still feel like I’m learning when doing those exercises.”

  • Pattern recognition: Deric used AI to document architectural patterns already in their codebase, creating architectural decision records (ADRs) that help guide future development. This is smart—using AI to codify tribal knowledge.

⭐ Here’s what these successes have in common: they all require someone who knows what they’re looking at.

Thinking notes and being held accountable

Ross mentioned something I hadn’t noticed:

“Sometimes I wish there was like a speed control because I really do enjoy at least in Claude Opus when you type something, it spits back at you a summary of what it thinks you meant.”

He’s talking about Claude’s “thinking notes”—the AI’s reasoning process that flashes briefly before collapsing into a simple “I was thinking for 10 seconds.”

“Sometimes I just wish that was all there,” Ross said. “Especially if I’m doing something that I’m going to put into production code and I’m using this as a tool to do it, I want to make sure that its assumptions align with my assumptions.”

Keeping “experts in the loop”

Unsurprisingly, AI coding tools are far more valuable to people who already know how to code. Expert devs can spot when the AI is using an outdated library version. They know when to take control and write the code themselves. They understand the implications of the architectural decisions the AI makes.

The promise sold to people like me—non-technical founders and product managers who can now build production-ready software—hasn’t materialized. Not really. I can build impressive prototypes in an afternoon. But “impressive prototype” and “production-ready software that won’t get hacked or break when users actually touch it” are different universes.

As I told Ross and Deric, I haven’t seen any of that actually hit a marketplace successfully without getting hacked. They both nodded knowingly. Of course not.

Where we go from here

The conversation isn’t over. Things are changing fast, and we’re all still figuring out what role these tools should play in our work, our learning, and our craft.

I asked Ross and Deric if we could do this again in three to six months. Something tells me our answers will be drastically different then, just as today’s answers differ from what they would have been six months ago.

Ross went from “I don’t know about this stuff” to finding genuine value in targeted use cases. Deric went from curious to building spec-driven workflows that make AI truly useful. And me? I learned that being able to prompt an AI doesn’t make me a developer any more than being able to use a calculator makes me a mathematician.


People of Product is brought to you by Crema - a design & technology consultancy
