The Translator Is Dead

At MLDS 2025, I stood in front of a few thousand engineers and said: the job you were hired for 15 years ago doesn't exist anymore.

Here's what that job was. You took a human requirement — "I need a login page with Google OAuth" — and translated it into code. For over a decade, that translation was the value. English to JavaScript. Business logic to syntax. That was the gig.

In 2025, AI does that translation faster than you. It doesn't need coffee breaks. It doesn't get bored writing boilerplate. If your only skill is writing code, you have a problem. Not a future problem. A right-now problem.


From Snippets to Agents

We've moved past copy-pasting ChatGPT outputs. Past tab completions. We're in the agentic era — AI that reasons through multi-step workflows, plans execution order, and ships entire features autonomously.

I use Claude Code, Cursor, and Codex daily. My GitHub has never been greener. And most of that green isn't code I typed.

This changes what the job actually is. The unit of work isn't "write this function." It's "orchestrate these agents to build this system." Different skill entirely.


The Vibe Coding Trap

There's a pattern I keep seeing. Someone prompts an AI agent. The output looks functional. It runs. They ship it. They "vibed" with it.

This works until it doesn't. And when it doesn't — when that system breaks in production, when users report bugs across three services, when you need to trace a state management issue through frontend and backend — the vibe coder is stuck. They can't debug what they never understood.

If you don't have a mental model of how the database talks to the cache, how the frontend handles state, how the auth layer interacts with the API — you don't own the product. You're along for the ride.

I built a custom 5M-parameter language model last year. Agents wrote most of the code. But I read every line, tweaked constantly, broke things deliberately. Because I didn't understand the system yet, and prompting alone couldn't get me there. After that work, I can prompt my way through similar tasks. The understanding came first. The productivity followed.


What Actually Matters Now

Four things.

Deep debugging. When an agent writes 10,000 lines, can you find the one logical flaw that brings it all down? That's the skill that matters. Not writing the 10,000 lines — reading them.

Systems understanding. Architecture. How components interact. Why this database, not that one. Why this queue, not a direct call. AI can't reason through these tradeoffs at the level your system needs. You can. If you've done the work.

Manual mastery. Keep writing code by hand. I picked up Golang last year — books, hello worlds, tab-completion off. Not because I needed to ship Go. Because if you lose the muscle memory of syntax, you lose your ability to spot when the AI is hallucinating. You lose the bullshit detector.

Agent orchestration. Your job isn't writing code anymore. It's conducting. You're managing a fleet of AI agents toward a specific business outcome. That requires knowing what each agent is good at, where they fail, and how to sequence their work. That's engineering. The rest is autocomplete.


Not the End. A Filter.

This isn't the death of software engineering. It's the death of the easy version of it.

The syntax translation work is gone. What's left is the hard stuff — systems thinking, debugging at scale, creative problem-solving, architectural judgment. The bar is higher. The job is harder. And honestly, more interesting.

The room at MLDS had people from startups, enterprises, research labs — a cross-section of India's 5M+ developer workforce. Most were hired as translators. The ones who stay relevant are the ones who become architects. Not the title — the mindset. Understanding systems, not just syntax.

I know which side I'm building for.


I wrote more about this after living with AI tools for another year — specifically about whether I still read the code agents write, and why software understanding matters more than coding in this new landscape. And the fundamentals problem hasn't gone away — if anything, it matters more now.