Ryan Austin runs a payroll SaaS in the Bahamas. He uses AI agents for customer support triage, feature scaffolding, and automated issue resolution. He has even implemented a rating system for issues, allowing agents to autonomously tackle specific tasks based on his available token budget.

Four years ago, none of this would have been possible.


The gap

In 2021, Ryan had a real business idea and enough Python to be dangerous. He was building alone, with no one to tell him whether his architecture was sound or whether he was creating a hobby project or something that could actually run a business.

He didn't know what he didn't know.

We did 1:1 coaching together. He shipped his first SaaS MVP. And the foundation he built then is paying off now in ways neither of us anticipated.

The foundation

When I asked Ryan what coaching gave him that made the payroll app possible, he was direct:

"Delivering this product would have been impossible without that coaching. It provided the necessary space to brainstorm and, more importantly, provided validation from someone who had shipped products at a high level. Coaching didn't just help me execute a plan; it gave me the confidence that my architecture was sound and that the product met a professional standard rather than just being a 'hobby project.'"

Later, it all started to click:

"There were several moments where I initially followed certain patterns simply because I was told they were 'best practices.' The click happened when I stopped just following instructions and began to understand the why behind the architecture. I realized that with a solid structural foundation, the complexity of a system like payroll wasn't an obstacle, but a series of solvable logic problems."

Understanding why patterns exist is what makes them useful. Without that, you're following rules you can't adapt when circumstances change.

The iOS test

Here's what Ryan told me when I asked about the link between coaching and AI effectiveness:

"I've never had coaching for iOS development, and despite having AI tools, I don't feel comfortable building a native app because I lack the fundamental mental model of the folder structures and utilities. My Python coaching, however, gave me the ability to 'see under the hood.' Because I understand the core principles of Python and Django, I can effectively direct the AI. Without coaching, you're just a passenger; with it, you're the navigator who actually understands the map."

AI tools take the path of least resistance. Ryan knows his app must stay Content Security Policy compliant because of his previous security audits. He constantly has to remind the AI to avoid inline scripts or styles. If you don't know what good software looks like (e.g. security, modularity, compliance), you'll get something that works today but breaks tomorrow.
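Enforcing that standard in code, rather than in every prompt, is one way to keep an AI from drifting. A minimal sketch of the idea in plain Django-style middleware (the class name and policy string are illustrative, not Ryan's actual setup):

```python
# Illustrative middleware that attaches a CSP header forbidding inline
# scripts and styles, so compliance doesn't depend on prompt discipline.
# Works with any dict-like response (Django responses support setdefault).

class ContentSecurityPolicyMiddleware:
    """Attach a Content-Security-Policy header to every response."""

    POLICY = "default-src 'self'; script-src 'self'; style-src 'self'"

    def __init__(self, get_response):
        self.get_response = get_response

    def __call__(self, request):
        response = self.get_response(request)
        # setdefault: don't clobber a stricter per-view policy
        response.setdefault("Content-Security-Policy", self.POLICY)
        return response
```

Baking the rule into the framework means an agent-generated inline `<script>` fails review immediately instead of surviving until the next audit.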

How Ryan uses AI tools today

His workflows:

Customer support: "Payroll is complex, and our support tickets often involve nuanced inquiries about tax calculations or labor law compliance. I use agents to ingest these emails and run tests against the live logic to see if a bug exists or if the system performed as intended. This results in faster, more accurate data-driven responses without breaking my deep-work flow."

Feature scaffolding: "I find that structural planning is 80% of the work; the agents handle that heavy lifting, allowing me to drive the 'last mile.' I always stay hands-on for the final implementation because I need to intimately understand how the features I support actually function."

Issue triage: He's automated the low-hanging fruit by having agents review GitHub issues, test whether bugs still exist, and propose fixes. As mentioned above, he rates issues so agents can tackle tasks autonomously within his available token budget.
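The rating-plus-budget idea can be sketched as a simple greedy pass: take the highest-rated issues that still fit in today's token budget. All names, ratings, and token estimates below are invented for illustration; this is not Ryan's code.

```python
# Hypothetical sketch of a rating system: each issue carries a value
# rating and an estimated token cost; a greedy pass picks what fits.
from dataclasses import dataclass


@dataclass
class Issue:
    title: str
    rating: int       # 1 (low value) .. 5 (high value)
    est_tokens: int   # rough token cost for an agent to attempt it


def pick_issues(issues, token_budget):
    """Highest-rated first, skipping anything that would bust the budget."""
    chosen, spent = [], 0
    for issue in sorted(issues, key=lambda i: (-i.rating, i.est_tokens)):
        if spent + issue.est_tokens <= token_budget:
            chosen.append(issue)
            spent += issue.est_tokens
    return chosen
```

A greedy pass isn't optimal in the knapsack sense, but for overnight agent runs "good issues that fit" beats an exact solution nobody computes.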

Ryan can now build onboarding and integration tools that were stuck in the backlog.

The TDD inversion

One habit from coaching stuck harder than any other:

"In the early days of AI, the models weren't quite reliable enough to write production code, but they were decent at code review. Today, the roles have flipped: I'm writing and reading less 'raw' code, but I am reading tests more than ever. You always emphasized that code will eventually break and bugs are inevitable. Consequently, the tests have become my 'source of truth.' By focusing on the test suite, I can verify the AI's output without getting bogged down in every line of implementation."

Nearly all his prompts now begin with "Using a TDD approach..." and end with a request to review the tests before any code is written. He calls it a "short-circuit manager's review," a way of ensuring the logic is sound before committing to the build.

But there's a catch:

"I recently experienced a session where an agent modified some essential tax calculations. Because the agent was also responsible for the tests, it updated the test suite to match its own logical error, creating a 'self-validating' mistake. The tests passed, but the math was wrong. I only caught the error during a manual walkthrough because I am intimately familiar with the expected Bahamian tax outputs."

If you let AI write the logic and the proof of that logic simultaneously, you risk a closed loop of misinformation. You must remain the ultimate authority on expected outcomes.
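One guardrail for that closed loop is to keep a golden table of expected outputs that agents are never allowed to edit, separate from whatever tests they generate. The rate and figures below are illustrative placeholders, not real Bahamian payroll values:

```python
# Sketch: expected outputs live in a human-maintained golden table.
# Agents may rewrite the implementation and their own tests, but this
# file stays under human control, so a "self-validating" error still fails.

GOLDEN_CASES = [
    # (gross_pay, expected_deduction) -- maintained by a human only
    (1_000.00, 39.00),
    (2_500.00, 97.50),
]


def deduction(gross_pay, rate=0.039):
    """Example payroll deduction: flat illustrative rate, not real tax law."""
    return round(gross_pay * rate, 2)


def test_deductions_match_golden_table():
    for gross, expected in GOLDEN_CASES:
        assert deduction(gross) == expected
```

If an agent "fixes" the math and its own tests together, the untouched golden table is what catches the drift.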

Practical guardrails

  • Aggressive prompt editing: "If an agent veers off course, I don't keep chatting; I stop and edit the initial prompt immediately. It's a massive time and token saver."

  • Variable reasoning: Toggle reasoning levels based on stakes. High-stakes payroll logic gets maximum reasoning; UI tweaks get minimal settings.

  • Decision fatigue is real: "Using agents is cognitively taxing. It reminds me of the 'innovation tokens' concept: we only have so much mental bandwidth per day. Agentic workflows require dozens of high-level decisions every hour. I've found that 'resting' the context, and myself, results in much better output."

  • The trap of easy: "The friction of development has decreased so significantly that it's tempting to over-engineer or add 'nice-to-have' features simply because they are now easy to implement. However, every line of code, even AI-generated code, is a future liability. Just because it's easier to build doesn't mean it belongs in your product. Maintaining a lean, intentional roadmap is harder when the 'cost' of building feels like it has dropped to zero."

  • Context files: Ryan maintains an agents.md file that defines his system's context and standards, allowing new agents to get to work instantly without a long onboarding chat.
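What such a file might contain (this is an invented example of the pattern, not Ryan's actual agents.md):

```markdown
# Agents

## Stack
- Python / Django; tests run with pytest

## Hard rules
- Every view must stay Content-Security-Policy compliant: no inline
  scripts or styles, ever.
- Write tests first (TDD) and present them for review before any
  implementation code.
- Never modify human-maintained golden test data for payroll outputs.

## Workflow
- If a change touches tax calculations, stop and ask for review.
```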

The compounding

Ryan's point generalizes: our tolerance for "good enough" is vanishing. Because agents make refactoring and profiling so much faster, we no longer have an excuse for slow queries or poor performance. We can tackle those painful refactors in a fraction of the time.

But the developers who benefit most are the ones who know what good looks like before the agent starts writing. The foundational skills compound.

Ryan went from building alone with no validation to directing AI agents with confidence. The coaching didn't teach him AI tools; they didn't even exist yet. It taught him the fundamentals that make any tool useful.

He went from passenger to navigator, the one who actually understands the map.