## The fundamental challenge: Context vs. prompts
Most developers confuse prompts with context. A prompt tells the AI what to do right now. Context teaches it who you are, what your project does, and how you work. Without proper context, even the best prompts produce mediocre results.

Think of it this way: would you hire a developer, point them at your codebase, and immediately ask them to implement a payment system? Of course not. You’d onboard them, explain the architecture, share coding standards, and review similar implementations. AI assistants need the same foundation.

## Building your knowledge foundation
Before any coding session, your AI assistant needs two types of knowledge: project-specific information and team conventions. This isn’t about dumping your entire codebase into a context window; that’s like giving a new hire 10,000 pages of documentation on their first day. Instead, you need structured, purposeful context.

### The README.md: Your project’s story
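As a hedged sketch only (the section names are suggestions, not a standard), a README aimed at day-one questions might look like:

```markdown
# Project Name

## What this does
One-paragraph description of the product and its users.

## Architecture at a glance
- Monolith or services? The key modules and how they talk.
- Data stores and why each was chosen.

## Getting started
Commands to install, run, and test locally.

## Key directories
Where the entry points, domain logic, and tests live.

## Gotchas
Non-obvious constraints a newcomer (human or AI) would trip over.
```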
Your README should answer the questions a new developer would ask on day one, written as a template optimized for AI comprehension.

### The AGENTS.md: Your team’s DNA
While README.md describes what your project is, AGENTS.md defines how you build it. This file contains your team’s coding philosophy, standards, and preferences: essentially the DNA the AI assistant needs to internalize.

### Comprehensive AGENTS.md template
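For illustration only (the stack and rules here are invented; yours will differ), an AGENTS.md might cover:

```markdown
# AGENTS.md

## Stack
TypeScript 5 / Node.js 20, PostgreSQL, Vitest for tests.

## Coding standards
- Strict TypeScript: no `any`, no implicit returns.
- Prefer small pure functions; side effects live at the edges.

## Testing
- Every feature ships with unit tests; no mocked "TODO" implementations.
- Run `npm test` before declaring work done.

## Git conventions
- Conventional Commits; one logical change per PR.

## Forbidden
- Adding dependencies without discussion.
- Editing generated files by hand.
```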
A battle-tested AGENTS.md structure covers all aspects of team conventions. The example uses specific technologies and stacks, so tailor it to your own needs!

### Team-specific customization
Different teams have different needs. Here are specialized sections you might add:

#### For Frontend Teams
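For example (hypothetical conventions, adapt freely):

```markdown
## Frontend conventions
- Components: function components only; colocate styles and tests.
- State: server state via a query library, UI state kept local.
- Accessibility: every interactive element must be keyboard-reachable.
```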
#### For Microservices Teams
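A sketch of rules a microservices team might document (again, illustrative):

```markdown
## Service conventions
- Each service owns its database; no cross-service table access.
- All inter-service calls go through versioned APIs or events.
- Every endpoint documents its failure modes.
```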
#### For DevOps-Heavy Teams
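And one possible operations-focused section (invented rules, shown only as a shape to copy):

```markdown
## Operations conventions
- Infrastructure changes only through IaC; no console edits.
- Every service exposes health checks and structured logs.
- Rollbacks must be possible with a single command.
```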
For Claude Code users, create a symlink from CLAUDE.md to AGENTS.md:
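One minimal way to do this from the repository root (`-f` overwrites a stale link if one exists):

```shell
# Make CLAUDE.md an alias of AGENTS.md so both tools read the same conventions
ln -sf AGENTS.md CLAUDE.md
```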
### Organizing knowledge for larger projects
As projects grow, flat files become unwieldy. You can organize your context hierarchically.

## Phase 1: Explore - Teaching your AI to understand
The Explore phase transforms your AI from an outsider into an informed team member. This isn’t about writing code; it’s about building understanding. Most developers skip this phase, jump straight to implementation, and then wonder why the AI misses obvious patterns or violates architectural principles.

### The exploration mindset
When you explore, you’re explicitly preventing the AI from coding. This counterintuitive approach forces deeper analysis and prevents premature implementation based on incomplete understanding.

### Example: Exploring a payment service
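A hedged example of what such an exploration prompt might look like (the specifics are invented):

```text
Explore the payment service before we change anything. Do not write code.
1. Read the payment module and summarize how a charge flows through it.
2. List the external providers it talks to and where retries happen.
3. Identify invariants I must not break (idempotency keys, currency handling).
Report back with open questions before proposing any plan.
```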
### Advanced exploration with Claude Code
Claude Code’s exploration capabilities extend beyond simple file reading; use its commands to build comprehensive understanding. You can use Shift + Tab in Claude Code to switch from plan mode back to classic mode.

### Multi-file exploration patterns
For features spanning multiple modules, structure your exploration systematically.

### Context preservation between sessions
One challenge with AI assistants is maintaining context across sessions. Document your exploration findings.

## Phase 2: Plan - Architecting before implementing
Planning transforms vague requirements into concrete, testable specifications. This phase leverages AI’s ability to think systematically while keeping you in control of architectural decisions.

### Strategic thinking modes in Claude Code
Claude Code offers multiple thinking modes, each consuming a different token budget:

- Think (4,000 tokens): Quick planning for simple features
- Think hard (10,000 tokens): Standard planning for most tasks
- Think hardest (32,000 tokens): Complex architectural decisions
- Ultrathink (customizable): Comprehensive system design
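In practice, the mode keyword simply goes into your prompt; as a hedged example (the feature is made up):

```text
Think hard about how to add rate limiting to the public API:
consider where state lives, failure behavior, and test strategy,
then propose a plan before writing any code.
```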
### The planning template
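One possible shape for such a template (the headings are suggestions, not a fixed format):

```markdown
## Feature: <name>

### Goal
What the user can do after this ships.

### Constraints
Performance, security, and compatibility requirements.

### Approach
Ordered implementation steps, each independently testable.

### Test plan
Unit, integration, and edge cases to cover.

### Out of scope
What we are explicitly not doing.

### Risks and rollback
What could break and how we undo it.
```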
Effective plans follow a consistent structure, and a good template works across different feature types.

### Example: Planning a multi-tenant feature
Let’s walk through planning a real feature: adding multi-tenancy to an existing application.

### The “My Developer” review technique
After generating a plan, use this psychological trick to get better critique: present the plan as the work of “my developer” and ask the AI to review it. This yields more honest feedback than asking it to critique its own output.

### Risk-based planning strategies
Not all features require the same planning depth:

#### Low risk (simple CRUD, UI changes)
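Here a lightweight prompt is often enough; for example:

```text
Plan briefly (a few bullet points), then implement.
Run the existing tests when done.
```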
#### Medium risk (integrations, data migrations)
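A hedged example of a medium-risk planning prompt:

```text
Think hard. Produce a step-by-step plan including a rollback path
and a data validation step. Wait for my approval before coding.
```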
#### High risk (security, payments, data privacy)
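And for high-risk work, the planning prompt might look like this (illustrative wording):

```text
Think hardest. Draft a full specification: threat model, failure modes,
compliance constraints, and a test matrix. Do not write any code;
this plan will be reviewed by two humans first.
```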
### Storing the specifications
When developing a new feature, you may find it useful to keep the results from the Explore and Plan steps for future use. One easy way to achieve this is to ask your AI assistant to write the specifications for that feature.

## Phase 3: Execute - From plan to production code
Execution transforms your carefully crafted plan into working code. This phase requires different strategies depending on your confidence level and the code’s impact radius.

### The execution spectrum
Execution isn’t all-or-nothing; it’s a spectrum from full automation to careful supervision.

#### Watched execution (high-risk changes)
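For instance, a supervision-heavy instruction might read:

```text
Implement step 1 of the plan only. Show me the diff and stop.
Do not proceed to step 2 until I explicitly approve.
```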
#### Guided execution (medium-risk)
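A sketch of a guided-execution instruction:

```text
Implement steps 1-3, running the test suite after each step.
Pause and report if any test fails or the plan needs changing.
```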
#### Autonomous execution (low-risk)
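And when the risk is low, you can hand over the whole plan; for example:

```text
Implement the whole plan. Run lint and tests at the end
and summarize what changed in a short commit message.
```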
### Test-driven development with AI
TDD with AI assistants requires explicit instructions to avoid the common trap of mock implementations.

### Example: Implementing a complex service
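As a hedged sketch only (the `DocumentProcessor` name, error type, and validation rules are invented for illustration), the shape of such a service might be:

```typescript
// Hypothetical document processing service: names and rules are illustrative.
class DocumentError extends Error {
  constructor(message: string, public readonly code: string) {
    super(message);
  }
}

interface ProcessedDocument {
  title: string;
  wordCount: number;
}

class DocumentProcessor {
  // Parses a raw document, failing loudly instead of returning partial data.
  process(raw: string): ProcessedDocument {
    if (raw.trim().length === 0) {
      throw new DocumentError("Document is empty", "EMPTY_INPUT");
    }
    const [firstLine, ...rest] = raw.split("\n");
    const body = rest.join(" ");
    return {
      title: firstLine.trim(),
      wordCount: body.split(/\s+/).filter(Boolean).length,
    };
  }
}
```

The point is the structure: explicit error types instead of silent defaults, so tests can assert on failures rather than on happy paths alone.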
A document processing service with proper error handling and testing makes a representative case.

### Handling large file modifications
Claude Code excels at modifying large files without breaking existing functionality.

### Incremental execution with checkpoints
For complex features, build incrementally with validation checkpoints.

## Quality assurance: Catching AI hallucinations and anti-patterns
AI-generated code requires specialized quality assurance beyond traditional linting. The challenge isn’t only syntax; it’s semantic correctness, security vulnerabilities, and architectural consistency.

The following examples are based on a specific Node.js project. While they give you a good idea of what should be present, you should build your own templates!
### Essential TypeScript/Node.js toolchain
Configure your project with AI-specific quality gates.

### ESLint configuration for AI code review
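A minimal sketch, assuming a flat-config ESLint setup with typescript-eslint (the rule choices are examples, not a prescription):

```javascript
// eslint.config.js: tighten rules that catch common AI-generated slip-ups
import tseslint from "typescript-eslint";

export default tseslint.config(...tseslint.configs.recommended, {
  rules: {
    "@typescript-eslint/no-explicit-any": "error", // hallucinated types hide behind `any`
    "@typescript-eslint/no-unused-vars": "error",  // leftover stubs from abandoned attempts
    "no-console": "warn",                          // debug prints the AI forgot to remove
    "eqeqeq": "error",                             // loose equality is a frequent AI habit
  },
});
```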
### Automated testing with Vitest
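For instance, a Vitest spec that guards against mock implementations slipping through (the service under test is hypothetical):

```typescript
// document-processor.test.ts: illustrative example spec
import { describe, expect, it } from "vitest";
import { DocumentProcessor } from "./document-processor";

describe("DocumentProcessor", () => {
  it("extracts the title and counts words", () => {
    const doc = new DocumentProcessor().process("Title\nhello world");
    expect(doc.title).toBe("Title");
    expect(doc.wordCount).toBe(2);
  });

  it("rejects empty input instead of returning defaults", () => {
    // A real implementation must throw; a mocked one would silently succeed
    expect(() => new DocumentProcessor().process("   ")).toThrow();
  });
});
```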
### AI-specific code review checklist
Train your AI to self-review with your tailored checklist.

### Continuous integration for AI workflows
Set up GitHub Actions to catch issues before merge.

## Claude Code tips and tricks
After months of daily use, these techniques consistently improve productivity and output quality.

### The resume pattern for long sessions
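Claude Code can reattach to earlier conversations from the CLI; for example:

```shell
claude --continue   # reopen the most recent conversation in this project
claude --resume     # pick a past session from an interactive list
```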
### Plan mode for research without coding
Activate Plan Mode with Shift+Tab twice, then research freely: the AI reads and analyzes but writes no code.

### Using MCP servers for enhanced capabilities
Configure Model Context Protocol servers in your claude_mcp_config.json:
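The exact schema depends on your client version, but an MCP server entry generally follows this shape (the server name and package here are illustrative):

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "./docs"]
    }
  }
}
```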
### Message queueing for complex tasks
Queue multiple prompts for intelligent processing.

### The “think harder” escalation pattern
Start simple and escalate complexity as needed.

## Platform-specific considerations: Upsun and modern PaaS
When developing with AI assistants for Cloud Application Platform environments like Upsun, include platform-specific context.

## Common pitfalls and how to avoid them
### Pitfall 1: Context overload
- Problem: Including the entire codebase in context
- Symptom: AI gives generic, inconsistent responses
- Solution: Selective context loading

### Pitfall 2: Ambiguous requirements
- Problem: Vague instructions produce vague code
- Symptom: AI implements features you didn’t want
- Solution: Specific, measurable requirements

### Pitfall 3: Ignoring failed tests
- Problem: AI continues despite test failures
- Symptom: Broken code that “looks right”
- Solution: Explicit test gates

### Pitfall 4: Missing error handling
- Problem: AI writes happy-path code only
- Symptom: Production crashes on edge cases
- Solution: Explicit error scenarios

### Pitfall 5: Security shortcuts
- Problem: AI takes unsafe shortcuts
- Symptom: SQL injection, XSS vulnerabilities
- Solution: Security-first instructions

## Conclusion: From chaos to productivity
The Explore, Plan, Execute methodology transforms AI coding assistants from unpredictable tools into reliable development partners. Success doesn’t come from better prompts alone; it comes from systematic context building, structured planning, and disciplined execution. Remember these key principles:

- Context is king: Invest time in README.md and AGENTS.md. They pay dividends in every coding session.
- Exploration prevents pain: Understanding existing code before changing it catches issues that would take hours to debug later.
- Plans are contracts: Detailed plans become specifications that keep AI on track during implementation.
- Execution needs supervision: Match your supervision level to risk. Watch high-impact changes closely; let low-risk code flow.
- Quality gates catch hallucinations: Automated testing, linting, and security scanning catch AI mistakes before they reach production.