Author: Jan De Jager

  • The Rise and Reassessment of MCP: What Comes Next?


    From tool catalogs to code generation and agentic teams

    In 2024, Anthropic introduced the Model Context Protocol (MCP), a standardized way for language models to describe and call tools using clear, documented interfaces. Before MCP, every project invented its own schema and conventions, which made tools hard to share and reason about.


    See Anthropic’s MCP overview: https://www.anthropic.com/news/model-context-protocol

    A year on, MCP has seen impressive adoption and its share of growing pains. Many teams embraced MCP as a “gold standard” for tool calling, running demos where models read email, query databases, search files, and browse the web using well-described, interoperable tools. But like any early abstraction, MCP has trade-offs.

    Where MCP shines

    • Standardized tool definitions and documentation
    • Better ergonomics for tool builders and client developers
    • Ecosystem momentum and shared mental models

    Where MCP strains in practice

    • Context overhead: Tool descriptions, input/output schemas, and examples all consume tokens. At scale, this adds up. For example, if you ship 50 tools at ~250–400 tokens each, you may spend 12,500–20,000 tokens before you’ve even processed user content.
    • Tool selection ambiguity: Models can still mis-select tools or hallucinate parameters, leading to retries and cost/latency spikes.
    • Operational complexity: Versioning tool definitions, coordinating changes across services, and keeping descriptions in sync with behavior are nontrivial.
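    To make the context-overhead point concrete, here is the arithmetic as a tiny sketch (the per-tool token counts are illustrative assumptions, not measurements):

```typescript
// Rough estimate of prompt tokens consumed by tool definitions alone,
// before any user content is processed. Token costs per tool are
// assumptions for illustration.
function toolCatalogOverhead(toolCount: number, tokensPerTool: number): number {
  return toolCount * tokensPerTool;
}

const low = toolCatalogOverhead(50, 250);  // 12,500 tokens
const high = toolCatalogOverhead(50, 400); // 20,000 tokens
console.log(`Catalog overhead: ${low}-${high} tokens per request`);
```

    And that cost is paid on every request, whether or not any of the 50 tools ends up being called.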

    An alternative: Cloudflare’s “Code Mode”

    Cloudflare proposes a different approach they call Code Mode: don’t inject a large catalog of tool definitions into the model. Instead, expose a typed TypeScript API and let the model write the small snippets of code needed to call that API. Execute the code server-side in a sandbox and return structured results.

    Cloudflare’s article: https://blog.cloudflare.com/code-mode/
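    A minimal sketch of the shape of the idea (the `CrmApi` surface and all names below are hypothetical stand-ins, not Cloudflare’s actual SDK): instead of choosing from a catalog, the model writes a short snippet against a typed API, and the host runs it in a sandbox.

```typescript
// Hypothetical typed API surface the sandbox exposes to generated code.
interface CrmApi {
  findCustomer(email: string): { id: string; name: string } | undefined;
  openTicket(customerId: string, subject: string): { ticketId: string };
}

// The kind of short snippet a model might emit: look up a customer,
// then open a ticket, composing two calls without any tool catalog.
function modelGeneratedSnippet(api: CrmApi): string | undefined {
  const customer = api.findCustomer("ada@example.com");
  if (!customer) return undefined;
  return api.openTicket(customer.id, "Billing question").ticketId;
}

// Host-side stub standing in for real backends during this sketch.
const stubApi: CrmApi = {
  findCustomer: (email) =>
    email === "ada@example.com" ? { id: "c1", name: "Ada" } : undefined,
  openTicket: (customerId, _subject) => ({ ticketId: `t-${customerId}` }),
};

console.log(modelGeneratedSnippet(stubApi));
```

    The host controls what the `api` object can actually reach, which is where the security story below comes in.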

    Why this resonates:

    • Lower token pressure: short, on-demand code replaces long, ever-present tool descriptions.
    • Mature security primitives: when interactions happen via APIs, we can leverage well-established auth, rate limits, and auditing instead of inventing new patterns inside the prompt.

    Security comes first: in Code Mode, run code in a tight sandbox, enforce allowlists and schema checks, use scoped credentials, set per-tenant quotas and rate limits, and log code and calls for audit.
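    One way to layer the allowlist, quota, and audit checks in front of any generated call (method names, tenant ids, and limits here are all illustrative assumptions):

```typescript
// Guardrails applied before a generated API call reaches a backend.
// Allowlisted methods and per-tenant quotas are illustrative.
const allowedCalls = new Set(["crm.findCustomer", "crm.openTicket"]);

interface CallRequest {
  tenant: string;
  method: string;
  args: unknown[];
}

// Calls remaining per tenant (a real system would use a rate limiter).
const quota = new Map<string, number>([["tenant-a", 2]]);

function authorize(req: CallRequest): boolean {
  if (!allowedCalls.has(req.method)) return false;      // allowlist check
  const remaining = quota.get(req.tenant) ?? 0;         // per-tenant quota
  if (remaining <= 0) return false;
  quota.set(req.tenant, remaining - 1);
  console.log(`audit: ${req.tenant} -> ${req.method}`); // audit trail
  return true;
}
```

    Schema validation of `args` and scoped credentials would slot in at the same choke point, so every generated snippet passes through one auditable gate.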

    Another path: agent-to-agent orchestration

    See Google’s A2A protocol: https://a2a-protocol.org/latest/

    There’s also a structural alternative: instead of one “everything model” carrying a huge toolset, use a team of specialized agents plus an orchestrator.

    • Orchestrator LLM: holds minimal context, routes tasks, and composes results
    • Specialist agents: each owns a small, focused toolset (data access, search, email, billing, etc.)
    • Narrow context: pass only the information needed for each task, reducing confusion and token waste
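    A toy version of that routing layer (the agents and the keyword-based router are deliberate simplifications; in practice the orchestrator would itself be an LLM making the routing decision):

```typescript
// Each specialist owns a narrow domain with its own small toolset.
type Agent = (task: string) => string;

const specialists: Record<string, Agent> = {
  search: (task) => `search-agent handled: ${task}`,
  billing: (task) => `billing-agent handled: ${task}`,
};

// The orchestrator routes each task to one specialist and passes only
// the task itself, not the whole conversation. Keyword routing is a
// stand-in for an LLM's routing decision.
function orchestrate(task: string): string {
  const domain = task.includes("invoice") ? "billing" : "search";
  return specialists[domain](task);
}
```

    The point of the sketch is the shape: the router holds almost no context, and each specialist sees only what it needs.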

    This approach can reduce errors, improve observability, and make scaling safer (each agent has a smaller blast radius). It also plays nicely with both MCP (for small, stable toolsets) and Code Mode (for dynamic integrations).

    So where does MCP fit now?

    MCP isn’t “over.” It remains a strong choice when you have a small, stable set of predictable tools—think fewer than ten—where portability, clear shared documentation, and consistent low-latency calls matter more than spinning up execution sandboxes.

    Code Mode shines when your integrations are diverse, fast-changing, or vendor-specific; when you need to compose multi-step workflows on the fly; and when token or latency budgets make large tool catalogs impractical. Agentic orchestration is most useful when you want specialization and separation of concerns, stronger observability and safer scaling, and the ability to mix different calling strategies.

    We’re not choosing between them; the real pattern is to merge all three. Keep a slim, high-frequency core in MCP, generate on-demand calls with Code Mode for long-tail or rapidly evolving APIs (executed in a sandbox with mature security), and use an orchestrator LLM to route tasks, pass only the necessary context, and compose results. This hybrid approach reduces errors and costs while preserving flexibility and speed.

    The Takeaway

    The center of gravity is shifting from “ship every tool definition to the model” to “generate code on demand” and “compose teams of specialized agents.” Expect tool calling to become cheaper, faster, and more capable as these patterns mature, and to lean more on classic API security and software engineering practices.

    What will you build with this?

    Anthropic’s MCP overview: https://www.anthropic.com/news/model-context-protocol
    Anthropic on code execution with MCP: https://www.anthropic.com/engineering/code-execution-with-mcp
    Cloudflare Code Mode: https://blog.cloudflare.com/code-mode/
    Cloudflare Agents SDK: https://developers.cloudflare.com/agents/
    Building agents in TypeScript: https://adk.iqai.com/docs/framework/get-started/quickstart
    MCP Authorization: https://modelcontextprotocol.io/docs/tutorials/security/authorization

  • The coming of the Advent of Code


    This year, let’s turn November into a low-stress, high-fun team challenge. We’ll take the Advent of Code 2023 puzzles and run them together from November 15 through December 10—when we actually have everyone around. Same great puzzles, better timing.


    If you haven’t tried it: Advent of Code is basically an advent calendar for devs. Instead of chocolates, you get one algorithmic puzzle per day from December 1 to 25, each wrapped in a playful storyline. Solve part one, earn a star; then part two tweaks the rules and tests how flexible your solution really is. It’s language-agnostic: use whatever you like.

    We’re going to do the 2023 edition so we can run it on our own schedule, compare approaches, and have a little friendly fun together.

    How we’ll run it

    • Dates: Nov 15 → Dec 10
    • Puzzles: Advent of Code 2023 (all 25 are available; you don’t have to finish them all)
    • Check-ins: a quick Friday knowledge-share for highlights on anything new you learned (even something as small as bubble sort in Python)
    • Leaderboard: private board for bragging rights, inspiration, and gentle chaos
    • Tone: collaborative first, competitive second

    The twist: pick a new language (by default)

    To make it interesting, the default is: pick a language you’ve been curious about and do your AoC in that. Want to learn Rust? Perfect. Curious about Go, Kotlin, or Zig? Go for it. If you’d rather deepen a language you already use, that’s fine too—but the most fun tends to come from building a little toolbox in something new.

    Why this works so well:

    • You’ll quickly spot gaps and habits (parsing assumptions, off-by-one cousins, the “I’ll refactor later” ghost).
    • You’ll see multiple ways to model the same problem—graphs, grids, DP, memoization, pipelines.
    • You’ll collect reusable snippets: input parsing, grid utilities, BFS/DFS templates, small profilers.
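    As an example of the kind of reusable snippet that pays for itself across many puzzle days, here’s a grid BFS template (the grid and coordinates are just stand-ins):

```typescript
// Breadth-first search over a 2D character grid: returns the shortest
// number of steps from start to goal, treating '#' as a wall.
// A staple utility for Advent of Code map puzzles.
function bfs(
  grid: string[],
  start: [number, number],
  goal: [number, number]
): number {
  const rows = grid.length;
  const cols = grid[0].length;
  const queue: [number, number, number][] = [[start[0], start[1], 0]];
  const seen = new Set<string>([`${start[0]},${start[1]}`]);
  while (queue.length > 0) {
    const [r, c, dist] = queue.shift()!;
    if (r === goal[0] && c === goal[1]) return dist;
    for (const [dr, dc] of [[-1, 0], [1, 0], [0, -1], [0, 1]]) {
      const nr = r + dr, nc = c + dc;
      if (nr < 0 || nr >= rows || nc < 0 || nc >= cols) continue;
      const key = `${nr},${nc}`;
      if (grid[nr][nc] === "#" || seen.has(key)) continue;
      seen.add(key);
      queue.push([nr, nc, dist + 1]);
    }
  }
  return -1; // goal unreachable
}
```

    Swap the wall check and the neighbor list and the same skeleton covers a surprising number of days.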

    Light guardrails to keep it enjoyable

    • Spoilers: use spoiler tags in the channel until lunchtime. Help > hints > answers.
    • Show-and-tell, not code dumps: on Fridays, share the approach and trade-offs (data structures, complexity, edge cases).
    • Repo of goodies: drop notable solutions, parsing helpers, and brief READMEs so we can reuse patterns.
    • Friendly leaderboard: celebrate speed, but also clarity and “I learned something” moments. Micro-awards welcome:
      • Fastest First Star
      • Cleanest Solution
      • Most Educational Refactor
      • Best Plot Twist in Part Two
    • Pace with kindness: not everyone is a midnight solver. Stars earned after coffee still count as stars.

    What to expect

    • A little skill sharpening every day you play—even if it’s one or two puzzles a week.
    • Cross-pollination of ideas: “I modeled it as a graph and part two became Dijkstra,” versus “I memoized a brute force and it went brrr.”
    • The kind of jokes only we enjoy: CI dressed up as a Christmas tree, a temporary ceasefire in tabs-vs-spaces to fight trailing whitespace, someone claiming an O(1) solution because they “waited for Priya.”

    Getting started

    1. Pick your language (new-to-you by default).
    2. Join the private leaderboard (I’ll share the code in the channel).
    3. Grab Advent of Code 2023 and start wherever you like.
    4. Post progress in scrum notes; on Fridays, bring a highlight or a gotcha to the knowledge-share.

    If you’ve been meaning to learn Rust, Go, or “that one language” you keep bookmarking, this is your excuse. We’ll learn a bunch, borrow clever ideas from each other, and collect a tiny library of utilities that will pay off in real work.

    See you in the channel—bring your language flag and your favorite debugging snack.

  • Meet Biome v2: Our snappy new code gardener


    TL;DR: We’re retiring ESLint and adopting Biome — a fast, Rust-built tool that lints and formats our code in one pass, with new smarts in v2 to catch more issues and reduce config fatigue.


    What is Biome (and why v2 matters)?

    Biome is an all‑in‑one toolchain for JavaScript/TypeScript that combines a linter, a formatter, and an editor‑friendly language server, designed for speed and simplicity thanks to its Rust core. The new v2 release upgrades the brains: type‑aware rules, multi‑file analysis, and an extensible plugin system to grow features without growing your config headaches.

    Why we’re swapping ESLint for Biome

    • One tool, fewer moving parts: Biome replaces the ESLint + Prettier combo with a unified engine, so less setup, fewer plugins, and fewer mystery conflicts.
    • Faster dev feedback: The Rust implementation makes linting/formatting feel snappy, which means tighter feedback loops and happier laptops.
    • Formatting you already trust: Biome’s formatter targets high compatibility with Prettier (about 97% on Biome’s own compatibility tests), so your code style won’t do a sudden plot twist.
    • Smarter checks in v2: Type‑aware linting and cross‑file analysis help catch real‑world issues that single‑file rules can miss, plus a new plugin system to extend safely over time.
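    In practice the switch is mostly one config file. A minimal biome.json looks roughly like this (check the getting-started guide for the current schema, since options evolve between releases):

```json
{
  "linter": {
    "enabled": true,
    "rules": { "recommended": true }
  },
  "formatter": {
    "enabled": true,
    "indentStyle": "space"
  }
}
```

    Compare that with the typical ESLint + Prettier setup of two configs, an ignore file, and a handful of compatibility plugins.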

    How to think about it (intern‑friendly edition)

    • ESLint: a great code spell‑checker that often needs many dictionaries and grammar plugins.
    • Biome: a speedy proofreader that also cleans up your writing as you type — and in v2 it learned context, not just spelling, so it catches “their/there/they’re” across chapters, not just sentences.

    What changes for us

    • Fewer configs to maintain, fewer CI steps to juggle, and a single source of truth for style + correctness — with fast, consistent results across the team.
    • Same code style expectations, just enforced and formatted by a unified, Rust‑powered engine that’s kinder to your time and CPU.

    And now, the dramatic ending: We’re going Biome. Fewer plugins, faster feedback, smarter rules — because clean code shouldn’t feel like cardio.
    Dive deeper in the official Biome documentation to get started: https://biomejs.dev/guides/getting-started/

  • Dungeons & Dragons: The Ultimate Team-Building Quest for Software Developers



    In an era where “synergy” and “collaboration tools” have been cast so often they might as well have cooldown timers, one might wonder — what could possibly resurrect true teamwork among software developers?

    The answer doesn’t come from another productivity suite or stand-up meeting. It comes from rolling dice, defeating goblins, and failing spectacularly at persuasion checks.

    Yes, brave adventurer, the secret spell for world-class software team-building is Dungeons & Dragons (D&D) — a tabletop role-playing game that transforms your team from code-slinging mortals into fellowship-forging heroes.


    The Developer’s Natural Habitat: The Table of Infinite Imagination

    Picture it: your lead backend developer is now a stoic dwarven paladin. The QA engineer? A mischievous wizard armed with Fireball and a healthy disregard for Jenkins downtime.

    In the realm of D&D, hierarchies crumble faster than a brittle stack overflow. The quietest dev in the room suddenly becomes the party’s silver-tongued negotiator. The project manager might be a bard — both inspirational and occasionally surrounded by mysterious magical chaos.

    The result? Developers rediscover what it means to communicate, adapt, and improvise — skills as crucial in slaying dragons as in squashing bugs.


    Debugging Dragons: Parallels Between Coding and Campaigns

    Let’s be honest. Software development is already a kind of D&D campaign:

    • There’s a mysterious client request written in riddle form.
    • A party of devs embarks on a sprint quest with low mana (read: coffee).
    • The final boss? A deployment at 4:59 pm on a Friday.

    But in D&D, every problem is approached through creative collaboration. You can’t brute-force a dragon with if-statements — you debate, experiment, and think sideways.

    That’s exactly what great engineering teams do when faced with complex systems. They blend logic, imagination, and the occasional natural 20.


    Why It Works: Fellowship, Fails, and Fun

    1. Shared Stories Build Shared Trust: When you’ve watched your UX designer heroically fail a stealth check while trying to sneak past goblins, there’s a bond forged that no corporate icebreaker can replicate.
    2. Safe Space for Failing Forward: In D&D, bad rolls lead to great stories. In development, failed tests lead to innovation. Both demand psychological safety — and both reward resilience.
    3. Creative Problem-Solving Under Chaos: Whether navigating fantasy politics or debugging race conditions, the ability to stay calm and collaborate amidst madness is an art. D&D gives teams a magical crash course.
    4. Fun Beats Forced Interaction: No team ever said, “Wow, that trust fall changed my life.” But they have said, “Remember when we polymorphed the boss into a sheep and escaped on flying mugs of ale?”

    From Campaign to Codebase

    After weeks of shared quests, something magnificent happens. Your dev team starts talking differently — more openly, more imaginatively, more… humanly.

    Daily stand-ups become more like war councils. Design discussions turn into creative brainstorms. The dreaded “Difficult Conversation About Technical Debt” becomes a “Dragon We Shall All Slay.”

    In short, D&D doesn’t just bond your team — it levels them up.


    The Call to Adventure

    So next time your team needs a morale boost, skip the bowling alley or the awkward offsite scavenger hunt. Instead, light some candles, roll some dice, and unleash your collective imagination.

    Because whether in code or campaign, true greatness emerges from collaboration, courage, and critical hits.

    And remember, adventurers: when the next merge conflict arises, just ask yourself — what would your party do?


    Tagline:
    🧙‍♂️ Dungeons & Dragons — where every meeting becomes a quest, every teammate a hero, and every bug fix a triumph worth singing about.