Three habits that made me work better, faster, and cheaper with AI
Skills 07/04/2026

TL;DR: Three habits that changed how I work with AI every day: creating reusable skills (separated into project context and personal context), using the right model for each stage of the work (expensive for thinking, cheap for executing), and developing real fluency with models — knowing what to ask, how to ask it, and when not to ask at all.


The problem with unstructured AI

If you’ve been using AI for coding for a while, you’ve probably been through this: you ask for a simple refactoring, and the model takes the opportunity to “suggest” that you migrate from PHP 8.1 to 8.3, swap your ORM for another one, update three dependencies that are “already outdated,” and, while it’s at it, restructure the app/ folder. All you wanted was to rename a poorly named variable.

This behavior isn’t a bug — it’s the model being “helpful” out of context. Without clear instructions, every task becomes a grab bag of surprises. And when you’re working on real projects, with deadlines and stable production environments, surprises aren’t what you need.

Over the past year, three changes in habit solved this for me in a concrete way: structured skills, the right model for each stage, and what I call AI fluency. None of them is a silver bullet. Together, they changed how I work.

1. Skills: instructions you write once and reuse forever

A skill, in my context, is a Markdown file with detailed instructions on how the AI should behave for a specific task. It’s not an inline prompt you type on the spot — it’s a reference document that you evolve over time and invoke when needed.

What I learned after using them for a while is that skills come in two distinct flavors.

Agent skills (user context)

These are skills that apply to any project. They encode your way of working, your preferences, your personal standards. Examples of what I maintain:

  • Code review: defines what to check (security, naming, coupling, error case coverage), in what order to present issues, and how to format feedback so it’s useful without being verbose.
  • Technical writing: instructs the model on my tone, paragraph length, when to use lists versus prose, and vocabulary I prefer to avoid.
  • Quality control: a living checklist the model follows before considering an implementation done — covering everything from types and validations to log messages and exception handling.

These skills are portable. They work in a Laravel project, a Python script, a text document. They represent you in the process, not the project.
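As a sketch, an agent skill for code review along these lines might look like the following. The section names and ordering are illustrative, not a required format:

```markdown
# Skill: Code review

## What to check, in order
1. Security: input validation, authorization, injection risks
2. Naming: intention-revealing, consistent with surrounding code
3. Coupling: dependencies between classes, hidden state
4. Error cases: what happens on failure, timeout, or empty input

## How to present feedback
- One issue per bullet, most severe first
- Quote the relevant line, then suggest the fix
- No praise padding, no restating the diff
```

Because nothing in it names a framework or a repository, the same file travels with you from project to project.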

Project skills (repository context)

These live inside the repository and encode the accumulated knowledge about that specific project. They’re what keeps the model from contradicting decisions already made.

Examples of what I include:

  • The PHP version, Laravel version, key dependency versions — and the reason when relevant (“PHP 8.1 for compatibility with the city council’s production server”).
  • Naming conventions in use (snake_case in models, camelCase in JavaScript controllers).
  • Architectural patterns adopted (jobs via ShouldQueue, events with explicit Listeners, no heavy logic in the Controller).
  • What not to do: no new dependencies without approval, no changing folder structure, no suggesting version migrations.

The distinction between the two types matters because project skills only make sense within the context of that repository. Agent skills make sense anywhere. Mixing both into a single generic file results in a document that doesn’t serve either purpose well.
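Put together, a project skill might read like this. The versions and reasons shown are illustrative placeholders, not a template you need to follow:

```markdown
# Project context

## Versions
- PHP 8.1 — required for compatibility with the city council’s production server
- Laravel version pinned — do not suggest upgrading

## Conventions
- snake_case in models, camelCase in JavaScript controllers
- Jobs implement ShouldQueue; events have explicit Listeners
- No heavy logic in Controllers
```

Note how each constraint carries its reason where relevant: the model behaves noticeably better when it knows *why* a rule exists, not just that it does.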

The problem with “pushy” refactorings

Modern models have a genuinely annoying tendency: recommending version upgrades. You’re on PHP 8.1 for a reason (legacy server, compatibility, team decision), and the model keeps suggesting you migrate to the latest version as if it were trivial.

The solution was to include an explicit constraints section in the project skills:

```markdown
## Mandatory constraints
- Do not suggest language, framework, or dependency version upgrades
- Do not alter directory structure beyond the requested scope
- Do not introduce new dependencies without explicit approval
- Maintain the naming conventions already present in the existing code
```

With this in the context, the model stops “helping” outside the scope. It finally just does what you asked.

2. The right model for each stage

The second change was to stop treating all models as equivalent and start using each one where it has a comparative advantage.

For planning, I use a more expensive, more capable model. When I’m starting a new feature, designing an architecture, or evaluating how to approach a complex problem, this is where the investment pays off. The quality of reasoning makes a real difference: the model anticipates design issues, suggests trade-offs worth considering, and produces a plan detailed enough to guide execution without ambiguity.

For execution, I use a faster, cheaper model. With the plan in hand — sometimes a simple text document, sometimes directly in the context — the execution model implements. To turn a well-defined plan into code, sophisticated reasoning ability is overkill. What matters is speed, cost, and faithfulness to the instructions.

The practical results:

  • I no longer hit Claude Code usage limits on daily tasks
  • Monthly costs dropped because the expensive model stays focused on what it does best (thinking), not on generating hundreds of lines of boilerplate
  • The execution model, working from a clear plan, makes fewer mistakes — because the decision space was already narrowed in the previous stage

What this looks like in practice

Say I’m going to implement a webhook notification system in a Laravel project. The flow goes like this:

  1. Plan with the expensive model: I describe the requirement, ask for an implementation plan considering the project’s current structure, existing jobs, and adopted conventions. It returns an architecture with the files to create, the responsibilities of each class, and the areas to watch out for.
  2. Execute with the cheap model: I pass the plan as context and implement section by section. It doesn’t need to “think about the problem” — it just needs to turn the plan into code.
  3. Review with the QA skill: before committing, I invoke the quality control skill for a final pass. It checks the things I used to forget when I was in “copy and paste” mode.

The plan/execute methodology isn’t new — it’s what good engineers do before coding, except now with models taking on specific roles in the process.
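The routing logic behind this flow is simple enough to state as code. A minimal sketch in Python, where the model names are placeholders and the real identifiers depend on your provider:

```python
# Minimal sketch of routing each stage of the plan/execute/review flow
# to a model tier. The model names below are placeholders, not real IDs.
PLANNING_MODEL = "capable-expensive-model"   # used where reasoning quality pays off
EXECUTION_MODEL = "fast-cheap-model"         # used where speed and cost dominate

STAGE_ROUTES = {
    "plan": PLANNING_MODEL,      # architecture, trade-offs, edge cases
    "execute": EXECUTION_MODEL,  # turning the finished plan into code
    "review": EXECUTION_MODEL,   # checklist-driven QA pass before commit
}

def pick_model(stage: str) -> str:
    """Return the model tier suited to a given stage of the work."""
    try:
        return STAGE_ROUTES[stage]
    except KeyError:
        raise ValueError(f"unknown stage: {stage!r}") from None
```

The point isn’t the code itself but the discipline it encodes: the expensive model never sees the boilerplate, and the cheap model never makes design decisions.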

3. AI fluency: the habit nobody talks about

Skills and plan/execute are practical habits. But there’s a more diffuse competency that changed my relationship with models the most: developing real fluency for working with AI.

Fluency, here, doesn’t mean knowing which model has the most parameters. It’s the set of intuitions you accumulate over time about how models behave, where they fail in predictable ways, and how to structure your work to leverage what they do well.

Some concrete examples of what I call fluency:

Knowing what’s context and what’s instruction. A model works better when you clearly separate reference information (“the project uses Laravel 11 with Filament”) from the task (“create a Resource for the Contract entity”). Mixing the two into a single paragraph only makes the output worse.
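One way to keep that separation explicit is to structure the prompt itself, for example:

```markdown
## Context (reference information, not a request)
The project uses Laravel 11 with Filament.

## Task
Create a Resource for the Contract entity.
```

The headings cost nothing to type and remove any ambiguity about what is background and what is the actual ask.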

Recognizing when the model is making things up. Not out of malice, but out of completeness. Models have a tendency to fill gaps with whatever seems reasonable. Over time, you learn to spot claims that sound too specific to be trustworthy without verification — and to ask for sources or confirmation before acting.

Knowing when not to use AI. This might be the most underestimated aspect. Not every task benefits from a language model. A simple SQL query you write in 30 seconds doesn’t need a 2000-token context. Code you fully understand doesn’t need an AI review — you are the review. Fluency includes knowing when the tool adds value and when it just adds friction.

Iterating on the instructions, not the output. When the result is wrong, the impulse is to fix the output — ask again, edit what the model generated. The more productive habit is to fix the instruction: understand why the model went in that direction and adjust the prompt or skill so the mistake doesn’t repeat. It’s the difference between treating the symptom and treating the cause.

Fluency isn’t acquired by reading about AI — it’s acquired by using it, making mistakes, observing the pattern of those mistakes, and adjusting. It’s a slow process, but it’s what separates those who use AI as a tool from those who are at the mercy of whatever the model decided to do.

Conclusion

Working well with AI in 2026 isn’t just about picking the most powerful model and letting it loose. It’s about understanding the limits of each tool, creating structure for repetitive tasks, and being deliberate about where to spend the most expensive tokens.

Well-separated skills (yours and the project’s), the right model for each stage of the work, and the accumulated fluency of someone who uses AI with intention — these three habits, together, reduced my costs, cut down the surprises, and increased my confidence in what’s being generated.

If you’re still treating your AI assistant as an omnipotent black box, it might be worth adding more structure to your process before switching models.

Are any of these three habits already part of your workflow? Or do you have another pattern that works better for you?