You open Twitter. Someone vibecoded a startup in 45 minutes using an agentic MCP with claude.md and a clawdbot running on a Mac Mini. The replies are full of people asking about Obsidian and remotion and agent SDKs and Codex. You close the tab. You open it again. You assume a folding chair position.

This post is for that person. It walks through every term in the current AI developer conversation: what each one actually is, whether it matters, and whether you need to care right now.


The Big Picture First

Before individual terms, here is the one mental model that connects everything:

Old world:
  Human → types question → AI answers → Human reads → done

New world:
  Human → states a goal
    → AI plans steps
    → AI uses tools (MCP connects these)
    → AI takes actions (agentic)
    → AI checks results
    → AI loops until done
    → Human reviews output

The tools AI can use — your database, your files, your APIs, your calendar — are connected via MCP. The AI systems that take multi-step actions are called agentic. The frameworks that help you build those systems are agent SDKs. The files that give AI persistent context about your project are claude.md and skills. Everything else in the meme is either a specific product implementing these ideas, or noise.

Now the individual terms.


Vibecoding

What it is: Writing software by describing what you want to an AI in natural language and accepting what it produces with minimal line-by-line review. You direct, the AI writes, you iterate conversationally.

What's actually true: Works well for contained problems — scripts, simple web apps, data pipelines, prototypes. For experienced developers it handles the mechanical parts fast. For beginners it enables building things that were previously out of reach.

What's overhyped: Production systems built entirely on vibes without understanding what the code does. The AI produces plausible-looking code with subtle bugs and edge cases. Vibecoding without understanding is technical debt at AI speed.

Verdict: Real workflow shift. Use it as a tool, not a replacement for judgment.

🔗 Andrej Karpathy's original tweet coining the term


Agentic AI

What it is: AI that doesn't just answer a question but takes a sequence of actions to complete a goal. It uses tools, makes decisions, runs loops, and operates with minimal human intervention per step.

Regular AI:    You ask → AI answers → done

Agentic AI:    You give a goal
                 → AI decides what to do first
                 → uses a tool (search, write file, run code)
                 → looks at the result
                 → decides next step
                 → repeats until goal is reached

Real example: "Research our top 5 competitors and put pricing in a spreadsheet." An agentic system searches the web, reads pages, extracts data, formats it, creates the file. You stated the goal once.
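The loop above fits in a few lines of Python. Everything here is a stand-in: `plan_next_step` would be a call to a real model, and the tools are toy functions — this is a sketch of the control flow, not any vendor's implementation.

```python
# A minimal agentic loop. The "planner" is hard-coded here, standing in
# for a real LLM call; the tools are toy functions.

def search(query):
    return f"results for {query!r}"

def write_file(name, text):
    return f"wrote {len(text)} chars to {name}"

TOOLS = {"search": search, "write_file": write_file}

def plan_next_step(goal, history):
    # A real agent asks the model: "given the goal and what has happened
    # so far, which tool do you call next — or are you done?"
    if not history:
        return ("search", {"query": goal})
    if len(history) == 1:
        return ("write_file", {"name": "report.md", "text": history[0]})
    return None  # goal reached

def run_agent(goal):
    history = []
    while (step := plan_next_step(goal, history)) is not None:
        tool_name, args = step
        result = TOOLS[tool_name](**args)   # act
        history.append(result)              # observe
    return history                          # loop until done

print(run_agent("competitor pricing"))
```

The whole pattern is those three lines in the while loop: act, observe, decide again.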

What's overhyped: "My chatbot is agentic" when it just calls one API. Agentic implies multi-step autonomous reasoning, not a single function call.

Verdict: Genuine architectural shift. Worth understanding even if you're not building agents yet.

🔗 Anthropic's guide to agentic patterns
🔗 What are AI agents — IBM


MCP — Model Context Protocol

What it is: An open standard from Anthropic that defines how AI models connect to external tools and data sources. One protocol, any AI, any tool that implements it.

The analogy: USB. Before USB every device had its own connector. MCP is trying to be USB for AI tools.

Without MCP:
  Claude → Google Drive  (custom integration)
  Claude → your database (different custom integration)
  Claude → Slack         (yet another custom integration)

With MCP:
  All three implement MCP server
  Claude connects via one standard protocol

Real example: Connect your AI assistant to your database's MCP server. It can now query your actual schema and write accurate SQL instead of guessing at table names.
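Under the hood, MCP is JSON-RPC 2.0 over a transport. Roughly, a client asking a server to run a tool sends a `tools/call` request like the one below — the tool name and arguments here are made up for illustration; see the official docs for the exact message schema.

```python
import json

# An MCP tools/call request (JSON-RPC 2.0). The tool name and arguments
# are illustrative, not from a real server.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "query_database",
        "arguments": {"sql": "SELECT name FROM users LIMIT 5"},
    },
}

# The server replies with a result keyed to the same id:
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {"content": [{"type": "text", "text": "alice\nbob"}]},
}

print(json.dumps(request))
```

The point of the standard is that this shape is the same whether the server wraps a database, Slack, or your filesystem.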

What's actually true: Ecosystem is growing fast. GitHub, Slack, Linear, Notion, and hundreds of others have MCP servers. Becoming the standard.

Verdict: Learn this. It's the connective tissue of the agentic world.

🔗 Official MCP documentation
🔗 MCP server directory
🔗 Anthropic MCP announcement


claude.md

What it is: A file you put in your repository that gives Claude persistent instructions about your specific project. Claude Code reads it first and applies those instructions throughout the session.

What goes in it:

# My Project

## Architecture
- Python FastAPI backend, React frontend
- PostgreSQL with Alembic migrations
- All responses use our custom ResponseWrapper class

## Conventions
- Never use print() — use the logger module
- All new endpoints need integration tests in /tests/api/

## Do not touch
- /legacy/ — leave as is
- auth module — ask before changing anything

Why it matters: Without it, every session starts from zero. With it, Claude knows your project immediately — conventions, architecture decisions, off-limits areas — without you re-explaining every time.

Verdict: Trivially easy to implement. Immediate improvement. Do it today if you use AI coding tools.

🔗 Claude Code documentation
🔗 How to write effective claude.md files


Agent SDK

What it is: A software development kit providing building blocks for agentic AI apps — tool definitions, conversation loops, state management, multi-agent coordination, error handling — so you don't write all the plumbing from scratch.

Examples: Anthropic Agent SDK, LangChain, LlamaIndex, AutoGen, CrewAI, Haystack.

# Instead of writing all the plumbing yourself:
agent = Agent(
    model="claude-sonnet-4-5",
    tools=[search_tool, file_tool, code_runner],
    instructions="You are a DevOps assistant..."
)

result = agent.run("Set up monitoring for our Kubernetes cluster")
# Agent plans steps, uses tools, loops, returns result

What's actually true: SDKs vary enormously in quality. Simple agentic tasks often don't need a framework — a loop and some tool definitions is enough. Reach for a framework when coordination complexity actually demands it.

Verdict: Learn when you need it. Don't import a framework for a problem a for-loop solves.

🔗 Anthropic Agent SDK
🔗 OpenAI Agents SDK
🔗 LangGraph (LangChain's agent framework)


Codex (OpenAI)

What it is: OpenAI's cloud-based coding agent. You give it a task, it spins up an isolated environment, writes code, runs tests, opens a pull request. Works asynchronously — submit a task and come back to a result.

How it differs from Copilot: Copilot autocompletes as you type. Codex takes a whole task and completes it end-to-end in the background.

Real use case: "Add input validation to the user registration endpoint and write tests." Codex reads your codebase, writes the validation, writes the tests, runs them, opens a PR. You review.

What's overhyped: "Codex will replace developers." It handles bounded, well-defined, mechanical tasks well. It struggles with tasks requiring deep business context, architectural judgment, or debugging genuinely novel problems.

Verdict: Real and useful for specific workflows. Evaluate based on your actual task types.

🔗 OpenAI Codex


Skills (in AI platforms)

What it is: Pre-built instruction sets that teach an AI how to do a specific thing correctly. Stored as documentation files the AI reads before attempting the task — condensed knowledge from trial and error.

How it works:

Without skill:
  "Create a PowerPoint" → AI guesses library → wrong approach → mediocre output

With skill:
  AI reads SKILL.md first
  → learns exact library and version to use
  → learns best practices from previous attempts
  → learns common mistakes to avoid
  → produces correct output
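A skill file looks a lot like the claude.md example above. A hypothetical SKILL.md for presentation generation might read (illustrative only, not from any real product):

```markdown
# Skill: PowerPoint generation

## Use
python-pptx. Do not generate raw Office XML by hand.

## Steps
1. Build slides from a layout template, not from scratch.
2. One idea per slide; keep titles short.

## Known mistakes
- Setting font sizes in pixels (python-pptx uses Pt)
- Forgetting to save the Presentation object at the end
```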

Broader meaning: Across the AI industry "skills" refers to any specialized capability added to an AI — domain-specific instructions, fine-tuned behaviors, tool definitions scoped to a task.

Verdict: Implementation detail of specific platforms, but the concept matters — giving AI explicit task-specific instructions dramatically improves output quality.


Obsidian (in AI context)

What it is: A local-first markdown note-taking app where everything lives as plain files on your disk. In the AI context, people connect Obsidian to AI agents as a persistent knowledge base — your second brain that your AI can actually read and write to.

Why it's in the conversation: AI agents need persistent memory and structured knowledge somewhere. Plain markdown files are easy for AI tools to read, search, and update.

Real use case: AI assistant reads your Obsidian vault — ongoing projects, meeting notes, personal context — and uses it to give relevant answers without you re-explaining your situation from scratch every time.

Verdict: Useful if you already have an Obsidian workflow. Don't adopt it purely because of AI hype — it has a real learning curve. The concept (AI + persistent personal knowledge) is what matters.

🔗 Obsidian
🔗 Obsidian MCP server


Remotion (in AI context)

What it is: A framework for creating videos programmatically using React. In the AI context, AI generates the Remotion code from a description, and the framework renders the actual video file.

Real use case: "Create a 60-second animated explainer about how ZFS snapshots work." AI generates the React/Remotion components. Framework renders to MP4.

Why it's in the conversation: Generated video is the next frontier after generated text and generated code. Remotion bridges AI output and actual rendered video.

Verdict: Powerful for a specific use case. Irrelevant if you're not generating programmatic video content.

🔗 Remotion
🔗 Remotion + AI guide


Mac Mini (in AI context)

What it is: Popular hardware for running local AI models. Apple Silicon's unified memory means a Mac Mini M4 Pro with 64GB RAM can run large local models at reasonable speed without expensive GPU hardware.

Who should care: Developers who want local AI for privacy reasons, or whose request volume makes cloud API costs add up. Most people are better served by cloud APIs.

🔗 Ollama — run local models
🔗 LM Studio — local model UI


Ralph

What it is: An Anthropic internal codename. Not a public product. Nothing to act on yet.


Clawdbot

What it is: Community slang for Claude-based bots. Not an official product. Can safely ignore.


Terms Not in the Meme Worth Knowing

These didn't make the meme but matter just as much in the same conversations, sometimes more:

RAG — Retrieval Augmented Generation

Giving an AI access to a knowledge base at query time instead of retraining it. Your AI searches a vector database of your documents, retrieves relevant chunks, and uses them to answer accurately. The foundation of most "chat with your docs" products.
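The mechanics reduce to: score your chunks against the question, keep the best few, and stuff them into the prompt. The sketch below uses naive word overlap for scoring where real systems use vector embeddings; the documents are made up.

```python
# Minimal RAG retrieval: rank chunks against the question, keep the top
# matches, build an augmented prompt. Real systems score with embeddings;
# plain word overlap stands in here.

DOCS = [
    "Invoices are due within 30 days of issue.",
    "Refunds are processed within 5 business days.",
    "Our office is closed on public holidays.",
]

def score(question, chunk):
    q = set(question.lower().split())
    c = set(chunk.lower().split())
    return len(q & c)

def build_prompt(question, k=2):
    top = sorted(DOCS, key=lambda d: score(question, d), reverse=True)[:k]
    context = "\n".join(f"- {chunk}" for chunk in top)
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

print(build_prompt("When are invoices due?"))
```

The model then answers from the retrieved context instead of its training data — which is why RAG answers can cite your actual documents.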

🔗 What is RAG — Cloudflare
🔗 Building RAG with LlamaIndex

Function Calling / Tool Use

The mechanism by which AI models trigger external functions during a conversation. The AI decides "I need to call the search function" — your code runs the function — the result comes back to the AI — it continues. The technical foundation that makes agentic behavior possible.
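The round-trip looks like this, vendor-agnostic — the tool schema and the model's structured output below are made up for illustration, and each provider wraps them in its own envelope:

```python
import json

# The model is shown a tool schema like this alongside the conversation:
TOOL_SCHEMA = {
    "name": "get_weather",
    "description": "Current weather for a city",
    "input_schema": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}

def get_weather(city):
    return {"city": city, "temp_c": 18}  # stub; a real tool hits an API

# Pretend the model replied with a structured call instead of prose:
model_output = '{"tool": "get_weather", "arguments": {"city": "Oslo"}}'

call = json.loads(model_output)
result = {"get_weather": get_weather}[call["tool"]](**call["arguments"])

# `result` is serialized and sent back to the model, which then
# continues the conversation using it.
print(json.dumps(result))
```

The model never executes anything itself — your code runs the function and feeds the result back. That boundary is what makes tool use safe to reason about.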

🔗 Anthropic tool use docs
🔗 OpenAI function calling

System Prompt

Instructions given to an AI model before the user's message that shape its entire behavior — persona, constraints, capabilities, format. Every AI product you use has one. Writing good system prompts is a genuine skill.

🔗 Anthropic prompt engineering guide

Vector Database

A database that stores information as mathematical embeddings rather than text, enabling semantic search — "find me things similar to this concept" rather than exact keyword matching. The storage backend behind most RAG systems.

Popular options: Pinecone, Weaviate, Qdrant, ChromaDB, pgvector (PostgreSQL extension)
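Semantic search reduces to nearest-neighbor over embedding vectors, usually by cosine similarity. The toy 3-dimensional vectors below stand in for real embeddings, which have hundreds or thousands of dimensions:

```python
import math

# Toy "embeddings": 3 dimensions instead of the hundreds or thousands a
# real embedding model produces. The search logic is the same.
EMBEDDINGS = {
    "cat":   [0.9, 0.1, 0.0],
    "dog":   [0.8, 0.2, 0.0],
    "stock": [0.0, 0.1, 0.9],
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

def nearest(query_vec, k=1):
    ranked = sorted(EMBEDDINGS,
                    key=lambda w: cosine(query_vec, EMBEDDINGS[w]),
                    reverse=True)
    return ranked[:k]

# A query vector in "pet" space finds similar concepts,
# not exact keyword matches:
print(nearest([0.85, 0.15, 0.0], k=2))
```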

🔗 Vector databases explained

Fine-tuning

Taking a pre-trained model and continuing its training on your specific data to change its behavior permanently. More expensive and complex than prompting. Rarely the right answer — most use cases are solved by better prompting or RAG first.

🔗 When to fine-tune vs prompt — Anthropic

Context Window

How much text an AI model can "see" at once — the conversation history, system prompt, documents, tool results, everything. Larger context windows mean AI can work with more information simultaneously. GPT-4 has 128K tokens. Claude has up to 200K. Gemini goes to 1M+.
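Budgeting against the window is simple arithmetic. English text averages roughly 4 characters per token — a heuristic, not a tokenizer; real counts come from the provider's tokenizer API:

```python
# Back-of-envelope context budgeting, using the rough ~4 chars/token
# heuristic for English. Use the provider's tokenizer for real counts.

def rough_tokens(text):
    return len(text) // 4

def fits(context_window, *parts):
    used = sum(rough_tokens(p) for p in parts)
    return used, used <= context_window

system_prompt = "You are a helpful assistant." * 10
document = "x" * 400_000  # ~100K tokens of pasted material

used, ok = fits(200_000, system_prompt, document)
print(used, ok)
```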

Prompt Caching

A technique where repeated parts of a prompt (like a large system prompt or document) are cached by the API provider so you don't pay full cost to process them on every request. Important for cost optimization in production systems.
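In practice you mark the large stable prefix of a request as cacheable so only the short, changing part is processed at full price on repeat calls. The sketch below follows the shape of Anthropic's `cache_control` marker; treat it as an illustration and check the current API docs before relying on it.

```python
# Prompt caching, sketched. The big stable prefix (system prompt plus a
# large document) is marked cacheable; only the short user turn changes
# between requests. The cache_control marker follows Anthropic's API
# shape — verify against the current docs.

big_document = "…thousands of tokens of reference material…"

request = {
    "model": "claude-sonnet-4-5",
    "system": [
        {
            "type": "text",
            "text": f"You answer questions about this document:\n{big_document}",
            "cache_control": {"type": "ephemeral"},  # cache up to here
        }
    ],
    "messages": [
        {"role": "user", "content": "What does section 3 say?"}  # varies
    ],
}

print(request["system"][0]["cache_control"])
```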

🔗 Anthropic prompt caching

LLM Ops / AI Ops

The practice of running, monitoring, and maintaining AI systems in production. Logging prompts and responses, tracking costs, catching regressions, A/B testing prompts, managing model versions. The DevOps of AI.

Tools: LangSmith, Langfuse, Helicone, Weights & Biases

🔗 Langfuse — open source LLM ops


The Priority Stack — What to Actually Learn and When

Learn now, immediately useful:

  Term                          Why
  MCP                           Becoming the standard for connecting AI to real systems
  Agentic patterns              The fundamental shift in how AI is being used
  RAG                           The foundation of most practical AI applications
  Tool use / function calling   The technical basis for agentic behavior
  claude.md / system prompts    Immediate quality improvement, trivial to implement

Learn when you're building something that needs it:

  Term                          When
  Agent SDKs                    When your agent has complex multi-step coordination
  Vector databases              When you're building RAG
  Fine-tuning                   After exhausting prompting and RAG
  LLM Ops tools                 When you have something in production worth monitoring
  Remotion                      When your use case involves programmatic video

Evaluate based on your specific situation:

  Term                          Consideration
  Codex                         Good if you have well-defined ticket backlogs
  Mac Mini / local models       Good if privacy or volume costs are a real concern
  Obsidian integration          Good if you already use Obsidian

Can safely ignore:

  Term                          Reason
  Ralph                         Not public
  Clawdbot                      Slang
  Vibecoding as philosophy      It's just a workflow, not an identity

The Meta-Point

The reason the folding chair feeling happens is that most of this content is written by people who are excited, not people who have used these tools on real problems for more than a week.

The actual ideas worth internalizing:

  1. AI models are becoming agents — they take sequences of actions, not just answer questions
  2. MCP is standardizing how agents connect to tools — one protocol instead of custom integrations everywhere
  3. Context is everything — giving AI persistent, structured knowledge about your project dramatically improves output
  4. RAG before fine-tuning — most "teach the AI about my data" problems are solved with retrieval, not retraining
  5. Local models are becoming genuinely capable — privacy and cost concerns now have real solutions

Everything else in the meme is a specific product implementing one of those five ideas, or noise.

Pick one. Try it. The folding chair is optional.


Compiled by AI. Proofread by caffeine. ☕