
Learn more about AI & Tech every Tuesday with TryTami
👋 Welcome to TryTami’s weekly newsletter to stay updated on the latest in AI and tech.
Learn to increase productivity with AI and create new AI products by booking live sessions with trusted AI experts on TryTami’s upskilling marketplace.
Request a demo to get free access.
An inside look at starting an AI-native SaaS
We usually talk about AI news, tech trends, or what we’re hearing from other companies. This week, we’re taking an inside look at starting an AI-native SaaS company with TryTami’s CTO, Dean Hiller.
Dean has led engineering teams in previous roles as CTO and VP of Engineering. He was also part of the notifications team at Twitter, helping to deliver 150,000 notifications per second to half a billion users!
I sat down with Dean to talk about what it truly means to build an AI-native SaaS company in 2025, and we pulled back the curtain on our own process.
You will probably recognize many of the challenges we discuss here if you’re growing your engineering team or figuring out how to train engineers to work with tools like Claude Code.
This is what the inside of building an AI-native SaaS company looks like.
What being AI-native really means
Most companies say they use AI in their engineering teams. What they really mean is that they use AI to generate or review code. Dean says this is a huge misconception.
Being AI-native is not about using tools. It is about designing the entire engineering process around AI.
At TryTami, being AI-native means the assistant is involved in the entire development lifecycle, not just code generation.
This includes:
Designing systems
Breaking work into files
Creating guardrails for code quality
Running commands
Debugging
Reviewing for risk
Identifying architectural issues
Creating tests
Evaluating tests
Updating tests when systems evolve
As Dean put it: “Most people are using AI to code. But you want to use it to create guardrails and follow rules.”
He also pointed out that many engineers dump their entire problem into AI in a single big prompt. This is not how AI-native teams work.
Instead, you divide the problem into files and stages. You let AI work inside the architecture. You treat the assistant like a collaborator who works step by step.
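One lightweight way to picture “files and stages” is to plan a feature as an ordered list of per-file steps and hand the assistant one scoped prompt at a time. This sketch is purely illustrative — the file paths, task wording, and function names are assumptions, not TryTami’s actual tooling:

```python
# Illustrative sketch: break a feature into ordered, per-file stages and feed
# the assistant one focused prompt per stage instead of one giant prompt.
# All paths and task descriptions below are hypothetical.

STAGES = [
    ("models/invoice.py", "Define the Invoice dataclass with id, total, status."),
    ("services/billing.py", "Implement charge(invoice) using the Invoice model."),
    ("tests/test_billing.py", "Write tests for charge, including a failed payment."),
]

def staged_prompts(stages):
    """Yield one scoped prompt per stage, naming only the file in play."""
    for path, task in stages:
        yield f"Work only in {path}. {task} Do not touch other files."
```

The point of the structure is that each prompt carries one stage’s worth of context, so the assistant works inside the architecture rather than inventing one.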
The teams that embrace this approach start shipping faster within a few weeks.
Hiring in the age of AI: the hardest shift no one is talking about
The first topic Dean jumped into was hiring. Not scaling. Not infrastructure. Hiring.
According to Dean, hiring engineering talent today is harder and more nuanced than at any point in his career. And the challenge is not that engineers are underqualified. It is that the definition of qualified has changed.
He told me something that surprised me. A lot of engineers now look strong on paper. They understand unit tests, they can build APIs, and they can ship. But they do not know how to work with AI productively.
Some engineers barely use AI at all. Others overtrust it. Both approaches slow teams down.
Dean described two common situations:
1. Engineers who do not trust AI
These are often junior engineers. They write everything by hand, avoid AI, and miss out on 10x acceleration. They also do not learn how to ask the right questions or structure prompts for tools like Claude Code. In a modern software team, this is a significant gap.
2. Engineers who trust AI too much
These engineers accept whatever AI gives them. They paste code into the repo without understanding what it does. They do not validate tests. They let AI generate large files without checking edge cases or system impacts.
So the hiring bar has shifted. It is no longer about who can code faster. It is about who can think clearly while working with an AI partner that is becoming more capable every month.
Dean said it this way: “We need people who know how to use AI, but also know when not to trust AI.”
That sounds obvious until you try to hire for it.
Is AI a junior engineer or a senior engineer?
The big question every team must answer: Is AI a junior engineer or a senior engineer?
This was one of the most interesting parts of the conversation.
A few years ago, AI coding tools were little more than autocomplete on steroids. Everyone treated them like junior engineers. They helped with scaffolding, boilerplate, and small snippets.
But now? Claude Code can read your entire codebase, run commands, update files, and reason across systems. It is no longer a junior dev. In some cases, it behaves more like a senior engineer who knows your software architecture.
So the real question Dean thinks every team will have to answer is: what happens when AI becomes a true senior engineer?
At TryTami, the answer changes depending on the task:
For raw code generation, AI is like a very fast junior.
For architecture decisions, it is more like a senior reviewer.
For refactoring and understanding legacy code, it is sometimes better than both.
This leads to surprising dynamics. For example, an engineer may work with AI to write new functionality, then let the AI review its own work with stricter guardrails, almost as if there were two engineers in the loop.
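That “two engineers in the loop” pattern can be sketched as a generate-then-review loop. Everything here is an illustrative assumption — the guardrail wording, prompt text, and function names are hypothetical, and `call_model` is a stub standing in for whatever assistant API or CLI you use:

```python
# Sketch of a two-pass workflow: the assistant generates code, then reviews
# its own output under stricter guardrails. The call_model parameter is a
# stub for your actual assistant; guardrails and prompts are illustrative.

GUARDRAILS = [
    "Every public function has at least one test.",
    "No file exceeds 400 lines.",
    "New dependencies require explicit approval.",
]

def build_review_prompt(code: str, guardrails: list[str]) -> str:
    """Second pass: ask the assistant to review the code under stricter rules."""
    rules = "\n".join(f"- {g}" for g in guardrails)
    return (
        "Review the following change as a strict senior engineer.\n"
        f"Enforce these guardrails:\n{rules}\n\n"
        f"Code under review:\n{code}"
    )

def generate_then_review(task: str, call_model) -> tuple[str, str]:
    """First pass generates code; second pass reviews it with guardrails."""
    code = call_model(f"Implement: {task}")
    review = call_model(build_review_prompt(code, GUARDRAILS))
    return code, review
```

The design choice worth noting is that the review pass gets a different, stricter prompt than the generation pass, which is what makes it behave like a second engineer rather than a rubber stamp.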
Dean’s view is that engineering leaders should start planning for a world where an AI can take on senior engineer responsibilities, including:
Reviewing complex changes
Identifying risk
Reasoning across files
Proposing architecture patterns
Generating tests and validating them
But this only works if the human engineer acts as the AI's product owner. Not a passive recipient, but an active director.
How TryTami’s engineers learn to work with AI
When we hire engineers, they often fall into one of two camps:
Use AI rarely or not at all
Use AI heavily but without understanding what it is doing
So we built a training process that helps engineers build judgment, not just speed. Here are three parts of the process that Dean says matter most:
A. Pair programming with four different people in eight business days
Every new engineer pairs with four teammates across their first eight working days. There is a rest day between each pairing.
This creates several benefits:
Rapid exposure to different working styles
Social connection early on
A faster path to understanding our AI-native workflow
The ability to see how others use Claude Code
Engineers consistently say this is the best onboarding they have ever had.
B. Formal pair programming every five weeks
Every engineer pairs formally with someone every five weeks. But informal pairing happens organically all the time.
Pairing is one of the most effective ways to calibrate how to use AI well. When two people use Claude Code differently, they learn from each other very quickly.
C. Bring in external instructors when needed
If we cannot fill a skills gap internally, we bring in an expert.
Dean used the example of needing deeper expertise in a framework like LangGraph. No one on the team knew it well, so we brought in an external subject matter expert to teach it.
Both peer-to-peer learning and expert-to-team training are much more effective than any documentation or e-learning video.
Training engineers not to over-trust AI
One of the most subtle but important parts of Dean’s philosophy is that AI should never replace engineering judgment:
AI can produce tests, but engineers still have to check if those tests make sense.
AI can refactor code, but engineers need to ensure the system still behaves correctly.
AI can identify risks, but engineers must review them.
Dean pointed out that AI-generated code today is fast but not always stable. In his estimate, traditional programming practices produced roughly 95 percent stable output, and AI is not quite there yet.
You must validate what it gives you. He said, “AI can generate your tests, but you still have to watch it. You still have to confirm it is a valid test.”
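To make that concrete, here is the kind of vacuous test an assistant can produce next to one that actually validates behavior. The function and both tests are hypothetical examples, not code from TryTami’s repo:

```python
# Hypothetical example: a vacuous AI-generated test versus a test that
# actually validates behavior. Function and test names are illustrative.

def apply_discount(price: float, percent: float) -> float:
    """Apply a percentage discount, never returning a negative price."""
    return max(price * (1 - percent / 100), 0.0)

def test_discount_runs():
    # Vacuous: it executes the code but asserts nothing about correctness.
    apply_discount(100.0, 10.0)

def test_discount_values():
    # Valid: checks the result, including the edge case the docstring promises.
    assert apply_discount(100.0, 10.0) == 90.0
    assert apply_discount(50.0, 200.0) == 0.0  # over-100% discount clamps to zero
```

Both tests pass, which is exactly the trap: a green test suite says nothing until a human has confirmed the assertions actually constrain the behavior.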
This is why retrospectives matter more than ever. You need frequent reflection loops that ask:
Did AI generate something incorrectly?
Did we validate it properly?
Should we adjust our prompts or workflows?
Where does AI need more guardrails?
What’s next for AI-native SaaS companies?
The biggest shift ahead will be the moment AI becomes a true senior engineer. Dean believes this will happen sooner than people expect.
When that happens, the role of a human engineer changes. They become more like an AI product manager. They orchestrate, design, validate, and guide.
This is why Dean looks for engineers who can grow into product thinkers over time: people who understand systems, ask good questions, and can direct AI through complex work. Teams that build this muscle now will be years ahead.
Right now, we’re focused on three key updates:
Improving the user experience for booking courses with fewer manual steps.
Releasing a public catalogue of trending AI and tech courses that can be customized.
Streamlining the chat experience with instructors to improve training quality.
If your team is beginning to use AI or is struggling to get the most out of it, request a demo to close AI skills gaps faster with TryTami.
Until next Tuesday,
Kelby, Dean, & Dave


