AI
Practical guides for developers working with AI tools. Prompt engineering, custom instructions, and integration patterns for ChatGPT, Claude, and other LLMs.
AI Coding Agents: A Practical Workflow Guide for Real Projects
How to work effectively with AI coding agents like Claude Code, Cursor, and Windsurf. Covers the agent loop, when to delegate and when to direct, context management, multi-step tasks, and the habits that separate productive agent use from expensive frustration.
MCP (Model Context Protocol) Explained: How AI Agents Connect to the Real World
A practical breakdown of the Model Context Protocol (MCP): what it is, how the client-server architecture works, why it exists, and what it means for AI tool integration. Includes examples, a comparison with function calling, and an honest assessment of the current state.
Prompt Engineering Patterns That Actually Work - Beyond the Hype
A practical guide to prompt engineering patterns that produce consistent results: structured output, chain-of-thought, few-shot examples, role framing, and constraint-based prompting. No magic tricks - just techniques that hold up across real tasks.
AGENTS.md Makes Your AI Coding Agent Worse - and Now There's Research to Prove It
ETH Zurich's research on AGENTS.md files confirms what I discovered the hard way: bloated custom instructions make AI coding agents slower, more expensive, and less effective. A breakdown of the paper's findings, why context files backfire, and what actually works.
Custom Instructions for AI Assistants: How to Write Them Without Wasting Money
A practical guide to writing effective custom instructions for ChatGPT, Claude, and Cursor. Covers what happens inside every prompt, how instructions inflate token costs, prompt caching, and a comparison of bloated vs lean instruction sets with real token counts.
RAG Document Assistant: Answer Questions from Your Own Docs with Ollama, ChromaDB, and Docker
Build a local RAG document assistant that reads .txt files, indexes them with vector embeddings, and answers questions using a local LLM — all without a cloud API. Includes a FastAPI backend, a minimal browser UI, and a full Docker Compose setup.
Free Local LLM in Docker: Build a Customer Feedback Analyser with Ollama and Pydantic
How to run Ollama in Docker Compose, pull a model on first start, and build a Python CLI that reads customer reviews from CSV, clusters them by theme, and generates a structured report — using Pydantic schemas and system/user message separation. No API keys, no monthly bills.