Prompt Engineering Frameworks 2026

Effective techniques for guiding LLM behavior

RTF: Role, Task, Format

The most popular framework. Forces you to consider who the AI should be, what it should do, and how the output should look.

Example:
Role: You are a sports nutritionist specialized in amateur marathon runners.
Task: Create a 7-day meal plan for a 40-year-old man, 75 kg, training 4 times per week.
Format: HTML table with columns Day, Breakfast, Lunch, Snack, Dinner. Total calories per row.
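Mechanically, RTF (and the labeled-section frameworks below, such as TAG and TASS) amounts to concatenating named sections into one prompt. A minimal Python sketch; the helper name and the dict-based interface are illustrative, not part of the framework:

```python
def build_prompt(sections: dict[str, str]) -> str:
    """Join labeled sections ("Role: ...", "Task: ...") into one prompt string."""
    return "\n".join(f"{label}: {text}" for label, text in sections.items())

prompt = build_prompt({
    "Role": "You are a sports nutritionist specialized in amateur marathon runners.",
    "Task": "Create a 7-day meal plan for a 40-year-old man, 75 kg, training 4 times per week.",
    "Format": "HTML table with columns Day, Breakfast, Lunch, Snack, Dinner.",
})
```

The same helper covers any framework in this list: swap the labels for Task/Action/Goal or Task/Audience/Style/Structure and the assembly logic is unchanged.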

TAG: Task, Action, Goal

Pure operational focus. Use when you know who the AI is and only need to specify what to do and why.

  • Task: the macro activity (e.g. "analyze this code")
  • Action: the immediate action (e.g. "find security bugs")
  • Goal: the final result (e.g. "I want an OWASP Top 10 list with the lines of code involved")

TASS: Task, Audience, Style, Structure

Best suited to content creation. The Audience parameter completely changes the output depending on who the reader is.

Example: Explain what a blockchain is to:
• An 8-year-old child → story with notebooks and teddy bears
• A computer science professor → cryptographic hashes and proof of work
• My non-technical mother → example with recipes shared between friends

BAB: Before, After, Bridge

From classic copywriting, perfect for business problems and case studies.

Describe the problematic situation (Before), describe the desired future (After), and ask the AI to build the bridge.

Example: Before: client with 3% conversion rate. After: client with 8% conversion rate. Bridge: tell me which 5 priority interventions would enable the jump.

CARE: Context, Action, Result, Example

A favorite for professional analysis. The Example component (Few-Shot prompting) dramatically reduces errors.

Always include 2-3 examples of desired input/output when possible. Models under 100B parameters benefit more, but the effect stays positive across all model sizes.
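The few-shot component can be assembled programmatically before the real query. A sketch, assuming each example is a simple (input, output) pair; the `Input:`/`Output:` labels are one common convention, not a requirement:

```python
def few_shot_block(examples: list[tuple[str, str]], query: str) -> str:
    """Format worked examples ahead of the real query (Few-Shot prompting)."""
    parts = [f"Input: {i}\nOutput: {o}" for i, o in examples]
    parts.append(f"Input: {query}\nOutput:")  # leave the last Output for the model
    return "\n\n".join(parts)

demo = few_shot_block(
    [("2 + 2", "4"), ("10 - 3", "7")],
    "6 * 7",
)
```

Ending the prompt right after the final `Output:` nudges the model to continue in exactly the pattern the examples established.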

CREATE: Character, Request, Examples, Additions, Type, Extras

The most complete framework for serious projects, covering every angle of communication with the AI.

  • Character: who the AI is (tone, background, expertise)
  • Request: what it has to do
  • Examples: reference patterns (Few-Shot)
  • Additions: sources, language, references, constraints
  • Type: output format
  • Extras: negative constraints (what NOT to do)

Negative constraints like "don't mention the competition" or "don't use exaggerated claims" often improve output more than twenty lines of positive instructions.
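Negative constraints can also be enforced after generation with a simple post-check, flagging outputs that need a retry. A sketch; the banned-phrase list and function name are illustrative:

```python
def violates(text: str, banned: list[str]) -> list[str]:
    """Return the banned phrases that appear in the output, case-insensitively."""
    lowered = text.lower()
    return [phrase for phrase in banned if phrase.lower() in lowered]

hits = violates(
    "Our product beats the competition with revolutionary results!",
    ["the competition", "revolutionary"],
)
```

Substring matching is deliberately crude here; a production check would tokenize or use regexes, but the retry-on-violation pattern is the same.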

COAST: Context, Objective, Actions, Scenario, Task

The Scenario component lets the AI simulate hypothetical situations. (An alternative variant expands the acronym as "Challenge, Objective, Strategy, Tactics, Success.")

Great for strategic planning and competitive risk analysis (e.g., "what happens if the semiconductor market collapses 30% in Q3?").

Core Techniques

Chain-of-Thought (CoT)

Ask the model to "think step by step" before answering. This dramatically improves reasoning on mathematical and logical tasks, and it is built into reasoning models such as OpenAI's o-series.
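In practice, CoT is just an added instruction plus a parse step for the final answer. A sketch, assuming you ask the model to close with a `Final answer:` line (the suffix wording and delimiter are a convention you choose, not a standard):

```python
COT_SUFFIX = "\nThink step by step, then end with a line 'Final answer: <result>'."

def extract_final(response: str) -> str:
    """Pull the result out of a CoT response that ends with 'Final answer: ...'."""
    for line in reversed(response.splitlines()):
        if line.startswith("Final answer:"):
            return line.removeprefix("Final answer:").strip()
    return response.strip()  # fall back to the raw response if no marker is found

answer = extract_final("17 * 3 = 51, plus 9 is 60.\nFinal answer: 60")
```

Forcing a machine-readable final line lets you keep the reasoning for auditing while passing only the answer downstream.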

Tree of Thoughts (ToT)

Explores multiple reasoning branches in parallel, evaluates each, and discards unproductive paths. Dramatically improves problem-solving (e.g., Game of 24 success rate from 4% to 74%).
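The branch-and-prune idea can be sketched as a tiny beam search. Here `expand` and `score` are stubs standing in for model calls (propose next thoughts, evaluate a partial solution); the toy problem simply builds the largest number from digits:

```python
def tree_of_thoughts(root, expand, score, beam=2, depth=3):
    """Expand every frontier node, keep only the `beam` best partial thoughts."""
    frontier = [root]
    for _ in range(depth):
        candidates = [c for node in frontier for c in expand(node)]
        frontier = sorted(candidates, key=score, reverse=True)[:beam]
    return max(frontier, key=score)

best = tree_of_thoughts(
    root="",
    expand=lambda s: [s + d for d in "123"],   # propose three next "thoughts"
    score=lambda s: int(s) if s else 0,        # evaluate a partial solution
)
```

In the real technique both stubs are LLM calls: one prompt proposes candidate continuations, another rates each branch so unproductive paths get pruned early.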

ReAct

Alternates reasoning and acting: think, call external tool (web search, calculator, API), read result, think again, take more action. Powers AI agents like Claude Code and Cursor.
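The loop can be sketched with a stubbed model and a single calculator tool; both are stand-ins for real components, and the `Action: tool[arg]` / `Observation:` transcript format is one common convention:

```python
def calculator(expr: str) -> str:
    """Toy tool: evaluate a simple arithmetic expression (demo only; eval is unsafe on untrusted input)."""
    return str(eval(expr, {"__builtins__": {}}))

def fake_model(transcript: str) -> str:
    """Stub standing in for an LLM: first decide to act, then answer."""
    if "Observation:" not in transcript:
        return "Action: calculator[12 * 7]"
    return "Final: 84"

def react(question: str, max_steps: int = 5) -> str:
    transcript = f"Question: {question}"
    for _ in range(max_steps):
        step = fake_model(transcript)
        if step.startswith("Final:"):
            return step.removeprefix("Final:").strip()
        tool_input = step.split("[", 1)[1].rstrip("]")  # parse "Action: tool[arg]"
        transcript += f"\n{step}\nObservation: {calculator(tool_input)}"
    return "no answer"

result = react("What is 12 * 7?")
```

The key design point is that every tool result is appended to the transcript, so the next "thought" is conditioned on what the tool actually returned.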

Self-Refine

The model acts as generator, critic, and reviser on the same prompt: produces draft, self-evaluates, rewrites while improving. Average 20% performance improvement across tasks.
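The generate/critique/revise loop fits in a few lines. Here the critic and reviser are toy stub functions standing in for calls to the same model:

```python
def self_refine(draft: str, critique, revise, rounds: int = 2) -> str:
    """Iteratively critique and revise a draft until the critic has no feedback."""
    for _ in range(rounds):
        feedback = critique(draft)
        if not feedback:
            break  # critic is satisfied; stop early
        draft = revise(draft, feedback)
    return draft

# Toy stand-ins: the "critic" flags missing punctuation, the "reviser" fixes it.
final = self_refine(
    "prompt engineering matters",
    critique=lambda d: "" if d.endswith(".") else "add a final period",
    revise=lambda d, fb: d + ".",
)
```

The stopping condition matters in practice: without a "no issues found" exit, the model keeps rewriting and the output can drift away from the original intent.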

From Prompt Engineering to Context Engineering (2026)

In 2026, the focus has shifted from writing static prompts to designing the entire context the AI sees before answering: system instructions, retrieved documents, conversation history, and tool outputs all shape the result.

The real secret is iteration: always ask the model "What are the 3 weak points of this response?" and refine based on the feedback.