The pronouncement that “programming is dead” and natural language is the new programming paradigm has become a familiar refrain in AI circles. But before we accept this proclamation at face value, we should remember who’s making it: those who’ve bet everything on AI and profit from this particular narrative. The reality, as usual, is more nuanced and far more interesting.

The question isn’t whether natural language will replace programming. It’s whether LLMs can actually become the new compiler, finally realizing the dream behind the fourth- and fifth-generation languages whose hype cycle crashed spectacularly in the 1980s and 90s, a dream whose roots reach back to the 1960s. Yes, the 1960s. This isn’t a typo. The dream is older than most programming paradigms we take for granted today.

The 60-Year-Old Dream

The idea of programming in something resembling natural language isn’t new; it’s a dream that dates back to the 1960s and 70s. COBOL was designed around the then-radical idea that business people should be able to read programs, and ideally write specifications, in something close to plain English. Simula, ancestor of Smalltalk’s object model, pursued a related vision: describing systems in the same terms people use to talk about the world. Even SQL, that venerable domain-specific language we use daily, represents an early attempt to bridge formal systems and natural expression.

These weren’t fringe experiments. They were serious, well-funded attempts to democratize programming by making it linguistically accessible. The fact that we’re still having this conversation sixty years later tells us something important: the problem is harder than it looks.

The Spectrum: Ambiguity vs. Precision

To understand why, we need to map the landscape between natural language and machine code.

On one end, we have natural language — English, for instance — with its grammar, historical evolution, idiomatic expressions, and deep contextual dependencies. Natural language is wonderfully expressive but dangerously ambiguous. The same sentence can mean different things depending on context, tone, cultural background, or even the relationship between speakers. This ambiguity is a feature for human communication but a fatal flaw for executable systems. You need high context to understand what a sentence actually means, and sometimes even that isn’t enough.

On the other end sits the evolution of programming languages, a fascinating journey toward abstraction. The most primitive languages were focused entirely on machines. Assembly language meant programming with commands the CPU could understand directly — no abstraction, no cognitive shortcuts, just raw manipulation of registers and memory addresses.

Then came the first abstraction layer: mnemonics. Instead of remembering that opcode 0x89 moves data, you could write MOV. Small change, massive cognitive improvement.

Each decade brought more abstractions, each designed to make programming more cognitively accessible to humans rather than machines. We invented procedural languages — FORTRAN for formula translation, ALGOL for algorithmic expression — that let people manipulate variables and create cognitive models of computation. We developed object-oriented languages based on the belief that human brains naturally perceive the world as collections of objects with properties and behaviors.

The progression has always been toward making machines understand humans better, while maintaining the precision that computation requires. Object-oriented programming, in particular, was supposed to be quite close to human language because we believed (and largely still do) that humans naturally think in terms of objects and their interactions.

But we’re still writing in formal languages that require years of training to master. The dream of natural language programming remained elusive.

The Evolution of Domain-Specific Languages

This is where Domain-Specific Languages (DSLs) enter the story. DSLs represent a different approach: instead of creating one universal language that works for all domains, create specialized languages that reflect specific domains’ unique objects, operations, and relationships. You build a language that domain experts can understand abstractly and declaratively, sharing knowledge without drowning in implementation details.

Embedded DSLs: The First Step

The most primitive form is the embedded DSL, where the foundation of your domain-specific language is the host programming language itself. Think of Ruby’s Rake for build automation, or Rails’ ActiveRecord for database queries. These are DSLs, but they’re constrained by their host language’s syntax and semantics.

The problem is obvious: people still need to know Ruby to write meaningful Rakefiles. The cognitive load hasn’t actually decreased; you’ve just reorganized it. You’re expressing domain concepts, yes, but through the lens of Ruby’s object model, method syntax, and semantics.

This limitation is fundamental. When your DSL is embedded in a conventional programming language — Java, Python, Ruby, whatever — you inherit all that language’s cognitive baggage. Its syntax, its type system, its evaluation model. You can create useful abstractions, but you can’t escape the host language’s worldview.
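
The point is host-agnostic, so here is a minimal sketch in Python rather than Ruby (a hypothetical Rake-like task runner, not Rake’s real API). The registrations read declaratively, but every line is still ordinary Python: decorators, dictionaries, and function definitions, which is to say the host’s worldview.

import functools

# Hypothetical toy task runner illustrating an embedded DSL.
tasks = {}

def task(name, depends_on=()):
    # Registering a task is a plain Python decorator: host syntax.
    def register(fn):
        tasks[name] = (tuple(depends_on), fn)
        return fn
    return register

def run(name, done=None):
    # Run a task after its dependencies, each at most once.
    done = set() if done is None else done
    if name in done:
        return
    deps, fn = tasks[name]
    for dep in deps:
        run(dep, done)
    fn()
    done.add(name)

@task("compile")
def compile_sources():
    print("compiling sources")

@task("test", depends_on=("compile",))
def run_tests():
    print("running tests")

run("test")  # prints "compiling sources", then "running tests"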

Languages With (Almost) No Syntax

This realization led to an important discovery: some languages are better hosts than others. Specifically, languages with minimal syntax and maximal malleability become ideal platforms for DSL construction.

Enter Lisp and Smalltalk, two languages that approached minimal syntax from different angles.

Lisp reduced syntax to its absolute minimum: parentheses and atoms. Everything is either an atom or a list of things in parentheses. That’s it. The entire language syntax fits in a paragraph. This minimalism is deceptive because it enables something profound: code and data have the same structure. A Lisp program is just a list that can be manipulated by other Lisp code. This homoiconicity — the property that code and data share the same representation — means you can write programs that write programs with almost no cognitive friction.
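
To make homoiconicity concrete, here is a toy sketch in Python (a stand-in for illustration, not real Lisp): a program is literally a nested list, so ordinary list manipulation can build and rewrite programs before evaluating them.

# Toy illustration of code-as-data: an expression is a nested list.
def evaluate(expr, env):
    if isinstance(expr, str):        # an atom naming a variable
        return env[expr]
    if not isinstance(expr, list):   # a number evaluates to itself
        return expr
    op, *args = expr
    if op == "if":                   # (if test then else)
        test, then, alt = args
        return evaluate(then if evaluate(test, env) else alt, env)
    ops = {"+": lambda a, b: a + b, "*": lambda a, b: a * b,
           "<": lambda a, b: a < b}
    return ops[op](*[evaluate(a, env) for a in args])

program = ["if", ["<", "x", 10], ["+", "x", 1], ["*", "x", 2]]
print(evaluate(program, {"x": 3}))   # 4

# Because the program is data, other code can rewrite it before it runs:
doubled = ["*", 2, program]          # wrap the whole program in (* 2 ...)
print(evaluate(doubled, {"x": 3}))   # 8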

With Lisp macros, you can extend the language itself, adding new control structures and abstractions that look and feel like native language features. You’re not just using Lisp — you’re shaping it. You practically don’t need to know Lisp to use a well-designed Lisp DSL, because the DSL author has already reshaped the language to express domain concepts naturally.

But Lisp has a problem: it looks ugly to most humans. All those parentheses create visual noise that obscures meaning. (Lisp programmers will tell you this is a feature, not a bug, but that’s a separate conversation.)

Smalltalk took a different approach. Instead of minimizing syntax, it unified everything around a single concept: objects and messages. There are no statements, no operators, no special forms; there are just objects sending messages to other objects. This conceptual minimalism, combined with a live programming environment where you can modify running systems, created something remarkable: a language so flexible you could reshape it into almost anything.

In Smalltalk, classes aren’t language primitives; classes are themselves objects. Methods are objects. Even control flow is just message passing: if/then/else is the ifTrue:ifFalse: message sent to a Boolean object. This uniformity means you can create domain-specific abstractions that feel completely natural, because they use the same message-passing metaphor as everything else in the system.
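
Here is that idea transplanted into Python as a sketch (hypothetical classes; Smalltalk’s actual syntax and class library differ): selection becomes a message sent to a Boolean object rather than a built-in statement.

# Sketch of Smalltalk-style Booleans: control flow as message passing.
class TrueObject:
    def if_true_if_false(self, then_fn, else_fn):
        return then_fn()             # the true object picks "then"

class FalseObject:
    def if_true_if_false(self, then_fn, else_fn):
        return else_fn()             # the false object picks "else"

def less_than(a, b):
    # Bootstrap our Boolean objects from the host comparison.
    return TrueObject() if a < b else FalseObject()

result = less_than(3, 10).if_true_if_false(
    lambda: "small", lambda: "large")
print(result)  # "small" -- no if statement at the call site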

Both languages share a critical property: powerful metaprogramming capabilities. You can write code that manipulates code, creating new abstractions that feel like native language features. This is the key to embedded DSLs that transcend their host language’s limitations.

The Limit of Embedded DSLs

But even with minimal syntax and powerful metaprogramming, embedded DSLs hit a wall. Some domain concepts simply don’t map cleanly to any existing programming language paradigm. Sometimes you need to break free entirely from the host language’s syntax and semantics.

What if your domain naturally expresses itself in tables? Or diagrams? Or mathematical notation? What if the most natural representation for your domain uses infix operators, significant whitespace, or visual relationships that can’t be encoded in text?

This is where embedded DSLs fail, no matter how minimal their host language’s syntax.

Language-Oriented Programming: The Paradigm Shift

Language-Oriented Programming (LOP) represents a fundamental shift in thinking. Instead of choosing between top-down design (start with requirements, work toward implementation) or bottom-up design (start with primitives, build toward abstractions), LOP takes the middle ground: design the language itself.

What Is Language-Oriented Programming?

LOP is based on a deceptively simple idea: for every problem domain, there exists an ideal language for expressing solutions in that domain. Instead of forcing domain concepts into an existing programming language’s constructs, you create a language where domain concepts are first-class primitives.

This isn’t about creating yet another general-purpose programming language. It’s about creating many small, focused languages, each optimized for a specific problem domain. You might have:

  • A language for describing business rules in insurance

  • A language for specifying communication protocols

  • A language for expressing financial contracts

  • A language for defining UI layouts

  • A language for describing biological systems

Each language has syntax and semantics tailored to its domain. Each language makes the easy things trivial and the complex things expressible.

The Racket Revolution

Racket (formerly PLT Scheme) emerged as the flagship platform for language-oriented programming. While it started as a Scheme implementation (and thus inherited Lisp’s minimalist syntax), it evolved into something much more ambitious: a language for building languages.

Racket’s philosophy is captured in its motto: “Language is the ultimate abstraction.” The platform provides not just a programming language, but a complete toolkit for creating languages:

The Language Tower: In Racket, you specify what language your code is written in using a #lang directive at the top of every file. This isn’t metadata — it fundamentally changes how the file is parsed, compiled, and executed. You can have:

#lang racket        ; Standard Racket
#lang typed/racket  ; Racket with static types
#lang datalog       ; Logic programming in Datalog
#lang scribble      ; Document authoring language
#lang your-domain   ; Your custom language

Each of these is a completely different language with its own syntax, semantics, and runtime behavior. But they all interoperate seamlessly because they compile to the same underlying runtime.

Syntax as Data, Data as Syntax: Like Lisp, Racket makes code and data interchangeable. But it goes further with syntax objects — structured representations of code that preserve source location, binding information, and lexical context. This means you can manipulate code while preserving all the information needed for good error messages and debugging.

The Macro System: Racket’s macro system is essentially a compiler API. You’re not just manipulating syntax trees — you’re participating in the compilation process itself. You can:

  • Create new binding forms

  • Implement new control structures

  • Add static analysis and optimization

  • Introduce new type systems

  • Define new evaluation strategies

Language Creation Workflow: Building a language in Racket follows a pattern:

  • Define your language’s syntax (which can be completely arbitrary)

  • Write a parser that transforms your syntax into Racket code

  • Implement runtime support (libraries, primitives, etc.)

  • Add tooling (syntax highlighting, error checking, debugging)

Because Racket provides infrastructure for all of this, you can create production-quality languages with relatively modest effort.
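
Racket supplies real infrastructure for each of those steps. To see the workflow’s shape in miniature, here is a deliberately tiny external language sketched in Python (a hypothetical one-line “greeting language”; nothing Racket-specific, and step four, tooling, is omitted).

import re

# Step 1: the syntax, here captured as a single grammar rule.
GRAMMAR = re.compile(r"greet (\w+) with (loud|quiet)")

# Step 2: a parser from surface syntax to a small AST.
def parse(source):
    match = GRAMMAR.fullmatch(source.strip())
    if match is None:
        raise SyntaxError(f"not a valid greeting: {source!r}")
    name, style = match.groups()
    return ("greet", name, style)

# Step 3: runtime support that executes the AST.
def execute(ast):
    _, name, style = ast
    text = f"hello, {name}"
    print(text.upper() if style == "loud" else text)

execute(parse("greet Ada with loud"))  # HELLO, ADA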

JetBrains MPS: Visual Language-Oriented Programming

While Racket approaches LOP from the textual side, JetBrains MPS (Meta Programming System) took a different path: projectional editing. Instead of manipulating text files with syntax, you manipulate abstract syntax trees directly through a structured editor.

This unlocks possibilities that text-based systems can’t match:

Multiple Notations: The same semantic content can be viewed and edited using different notations — text, tables, diagrams, mathematical notation — whatever fits the domain best. A state machine might be edited as a textual DSL or a visual diagram, with both representations staying in sync because they’re projections of the same underlying model.

Domain-Specific Editors: Each language element can have a custom editor optimized for that specific construct. Financial formulas might use mathematical notation. Business rules might use decision tables. Configuration might use forms.

Freedom from Syntax: Because you’re not constrained by what can be parsed from text, you can create languages with notations that would be impossible or ambiguous in text form. You can mix diagrams with text, use rich formatting, embed interactive widgets — whatever helps domain experts express their intent.
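
As a rough, read-only approximation of projection in Python (a hypothetical state-machine model; MPS projections are additionally editable), the same underlying model can be rendered in two notations, and the notations cannot drift apart because both are derived from one model.

# One model, two projections: neither notation is the source of truth.
transitions = [
    ("idle",    "play",  "playing"),
    ("playing", "pause", "paused"),
    ("paused",  "play",  "playing"),
]

def as_text(model):
    # Project the model as a line-oriented textual DSL.
    return "\n".join(f"on {evt}: {src} -> {dst}" for src, evt, dst in model)

def as_table(model):
    # Project the same model as an aligned table.
    header = f"{'state':<8} {'event':<6} {'next':<8}"
    rows = (f"{s:<8} {e:<6} {d:<8}" for s, e, d in model)
    return "\n".join([header, *rows])

print(as_text(transitions))
print(as_table(transitions))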

Why Language-Oriented Programming Matters

LOP represents the bleeding edge of making formal systems accessible to human cognition. Instead of forcing people to think like programmers, you create languages where programming thinks like people in specific domains.

Consider a language for insurance business rules. In a traditional programming language, you might write:

if customer.age >= 65 and not customer.smoker:
    premium = base_premium * 0.85
elif customer.age >= 65 and customer.smoker:
    premium = base_premium * 1.35

In a domain-specific language designed for insurance rules:

For customers aged 65 or older:
  Non-smokers receive a 15% discount
  Smokers pay a 35% surcharge

This isn’t just syntactic sugar — it’s a different semantic model. The DSL version is expressing business rules in business terms. It’s verifiable by domain experts who don’t program. It can be automatically checked for completeness (did you handle all age ranges?) and consistency (are there contradictory rules?).

More importantly, the DSL can be formally analyzed in ways that general-purpose code can’t. You can prove properties about your business rules, generate test cases automatically, detect redundant or conflicting rules, and even generate code for multiple platforms from the same specification.
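
What might that analysis look like mechanically? A minimal sketch in Python, assuming a hypothetical encoding of each rule as an age interval plus a smoker flag: completeness becomes “which cases match no rule”, and consistency becomes “which pairs of rules match the same case with different outcomes”.

# Hypothetical rule records extracted from the DSL above.
rules = [
    {"min_age": 65, "max_age": 120, "smoker": False, "factor": 0.85},
    {"min_age": 65, "max_age": 120, "smoker": True,  "factor": 1.35},
]

def uncovered(rules, smoker, ages=range(18, 121)):
    # Completeness: ages for which no rule applies.
    return [a for a in ages
            if not any(r["smoker"] == smoker
                       and r["min_age"] <= a <= r["max_age"]
                       for r in rules)]

def conflicts(rules):
    # Consistency: overlapping rules that assign different factors.
    found = []
    for i, r1 in enumerate(rules):
        for r2 in rules[i + 1:]:
            overlap = (r1["smoker"] == r2["smoker"]
                       and r1["min_age"] <= r2["max_age"]
                       and r2["min_age"] <= r1["max_age"])
            if overlap and r1["factor"] != r2["factor"]:
                found.append((r1, r2))
    return found

print(uncovered(rules, smoker=False))  # ages 18-64 are not yet handled
print(conflicts(rules))                # [] -- no contradictory rules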

The Gap That Remains

Yet even with language-oriented programming and tools like Racket and MPS, we haven’t achieved natural language programming. We’ve created formal systems that are closer to human cognition and closer to domain expertise, but they still require language designers and runtime engine developers.

Someone still needs to:

  • Design the language syntax and semantics

  • Build the parser/compiler

  • Implement the runtime

  • Create the tooling

  • Maintain the language infrastructure

This is significant work requiring deep technical expertise. We’ve moved the abstraction boundary, but we haven’t eliminated it. The domain expert still can’t just describe what they want in plain English and have it become executable.

The languages we create through LOP are more natural than general-purpose programming languages, but they’re still not natural language. They’re constrained, formal systems that happen to be optimized for specific domains.

Enter Constrained Natural Language

This is where Constrained Natural Language (CNL) becomes fascinating. Widely used in finance, banking, insurance, and business rules systems, CNL takes a different approach to bridging the gap.

A CNL starts with natural language — usually English — and imposes structure:

Fixed Dictionary: Only certain words are allowed, each with precisely defined meanings. “Customer,” “premium,” “coverage,” “claim” — these aren’t just words, they’re references to specific domain entities with formal definitions.

Fixed Grammar: Not all grammatically correct English sentences are valid CNL sentences. The grammar is restricted to patterns that can be unambiguously interpreted. No metaphors, no idioms, no ambiguous references.

Eliminated Ambiguity: Every valid sentence has exactly one interpretation. Context doesn’t change meaning — the meaning is encoded in the sentence structure itself.

The result is something that reads like English but behaves like a formal specification. Domain experts can read and write it without feeling like they’re programming. But it’s precise enough to be automatically transformed into executable code.
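
Here is a minimal sketch in Python of what “fixed dictionary plus fixed grammar” means operationally (the vocabulary and the single sentence pattern are hypothetical; production CNLs have far richer grammars). Every accepted sentence maps to exactly one structure; everything else is rejected.

import re

ENTITIES = {"customer", "premium", "claim"}            # fixed dictionary
COMPARATORS = {"is at least": ">=", "is below": "<"}   # fixed phrasing

PATTERN = re.compile(                                  # fixed grammar
    r"If the (\w+) age (is at least|is below) (\d+), "
    r"then the (\w+) factor is (\d+(?:\.\d+)?)\.")

def parse_sentence(sentence):
    m = PATTERN.fullmatch(sentence)
    if m is None:
        raise ValueError(f"not a valid CNL sentence: {sentence!r}")
    entity, cmp_words, age, target, factor = m.groups()
    for word in (entity, target):
        if word not in ENTITIES:                       # dictionary check
            raise ValueError(f"unknown term: {word!r}")
    return {"entity": entity, "op": COMPARATORS[cmp_words],
            "age": int(age), "target": target, "factor": float(factor)}

rule = parse_sentence(
    "If the customer age is at least 65, then the premium factor is 0.85.")
print(rule)  # one sentence, exactly one interpretation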

Here’s the uncomfortable truth: when you design a CNL, you’re essentially reinventing programming languages. You’ve just approached them from the opposite direction: starting with natural language and constraining it toward formalism, rather than starting with formalism and abstracting it toward naturalness.

The Complete Landscape

We can now map the full spectrum:

Natural Language → Constrained Natural Language → Domain-Specific Languages (LOP) → General-Purpose Languages → Assembly/Machine Code

From left to right: increasing precision and decreasing ambiguity; each statement means exactly one thing, but expresses less and less per symbol.

From right to left: increasing abstraction and better cognitive fit for humans; each statement expresses more and more, but its meaning leans on ever more context.

CNL and LOP occupy adjacent positions in this spectrum. They’re both formal systems optimized for specific domains. The difference is their starting point and target audience:

  • CNL starts with natural language, constrains it toward formalism, targets domain experts who aren’t programmers

  • LOP starts with programming language theory, shapes it toward domain concepts, targets programmers who become language designers

The Agentic Coding Sweet Spot

This brings us to where AI coding agents can genuinely transform software development — not by replacing programming languages with pure natural language, but by serving as translators between constrained natural language and domain-specific languages.

The landscape looks like this:

Natural Language → LLM Translation → Constrained Natural Language → AI Agent Translation → Language-Oriented DSLs → Compilation → Machine Code

AI agents excel at the middle transition: taking CNL inputs and generating precise DSL code. This is fundamentally different from translating arbitrary natural language into Python, which inevitably produces hallucinations and errors. Instead, we’re constraining both ends of the translation:

  • Input: Structured natural language that domain experts and product managers can read and write

  • Output: Well-defined domain-specific languages with clear semantics and constraints

  • Translation: Not creative code generation, but constrained transformation between formal systems
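
In code, the shape of that constrained transformation might look like the following sketch (everything here is hypothetical: translate_with_llm is a stub standing in for any model call, and the DSL grammar is a toy). The model’s output is never trusted directly; it is accepted only if it parses as valid DSL, and rejected otherwise.

import re

# A toy DSL grammar serving as the hard constraint on the output side.
DSL_SHAPE = re.compile(
    r"(discount|surcharge)\(age (>=|<) \d+, "
    r"smoker == (true|false), 0?\.\d+\)")

def translate_with_llm(cnl_sentence):
    # Placeholder for a real model call; assumed to return DSL text.
    return "discount(age >= 65, smoker == false, 0.15)"

def translate(cnl_sentence, retries=3):
    # Generate, validate against the grammar, retry; never trust blindly.
    for _ in range(retries):
        candidate = translate_with_llm(cnl_sentence)
        if DSL_SHAPE.fullmatch(candidate):
            return candidate
    raise ValueError("model never produced valid DSL")

print(translate("Non-smokers aged 65 or older receive a 15% discount"))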

The humans remain in the loop, but in evolved roles:

  • Language designers create the DSLs using LOP tools like Racket or MPS

  • Domain experts design the CNL vocabulary and grammar

  • AI engineers train and tune the translation agents

  • Domain specialists write specifications in CNL

  • AI agents translate CNL to DSL code

  • Software engineers verify translations and handle edge cases

  • Compilers transform DSL to executable code

No one is removed from the loop — responsibilities are redistributed toward comparative advantages. Humans do what humans do best (understand domain nuance, design abstractions, make value judgments), and AI does what AI does best (pattern matching, constrained transformation, mechanical translation).

Why This Matters in 2026

This hybrid approach represents a pragmatic path forward that keeps human cognitive models as constraints while leveraging AI’s pattern-matching capabilities. We’re not asking LLMs to understand the infinite complexity of natural language or to generate arbitrary code. We’re asking them to perform constrained transformations between well-defined formal systems that happen to be optimized for different audiences.

Language-oriented design will become increasingly prevalent — it’s an unavoidable evolution. Sixty years of programming language development have been moving steadily in this direction. The question is whether we’ll pursue it purely through traditional compiler design or embrace AI agents as translation layers.

The latter path could finally realize COBOL’s original vision: business-oriented development where domain experts directly express requirements in readable form. But instead of compiling directly to machine code (which was never realistic), we compile through multiple levels of increasingly formal abstractions:

Business Stakeholder (Natural language) →
Product Manager (Constrained natural language) →
AI Agent (Domain-specific language) →
Compiler (Machine code)

Each level adds precision. Each transformation is verifiable. Each representation serves a different audience.

The Alternative Future

There’s another path, of course — one where we abandon human-centric languages entirely and let AI models develop their own ontologies and communication protocols. Agents could coordinate using representations optimized for machine reasoning rather than human comprehension. They could develop internal languages that are more efficient, more precise, and more powerful than anything humans could design or understand.

This vision has appeal: why constrain AI to human-readable representations if humans aren’t writing the code? Let the machines optimize for machine understanding.

But this feels premature, perhaps dangerously so. We’re not ready to remove humans from the loop of understanding what systems actually do. Software doesn’t exist in isolation — it operates in human contexts, serves human purposes, and has human consequences. The ability to understand and verify what our systems do isn’t just philosophically desirable; it’s practically necessary.

Maybe in some distant future, we’ll trust AI systems to build other AI systems with no human oversight. But in 2026, we need transparency, auditability, and human judgment in the loop.

The Pragmatic Path Forward

For now, the winning combination appears to be:

  • Domain-specific languages designed using language-oriented programming principles (Racket, MPS, specialized compilers)

  • Constrained natural languages for business stakeholders and domain experts who specify requirements

  • AI agents serving as sophisticated translators between these layers, handling the mechanical transformation work

  • Human experts designing languages, tuning agents, verifying translations, and maintaining semantic integrity across the stack

This isn’t the sexy vision of “just talk to your computer and it builds software.” It’s something better: a thoughtful synthesis of sixty years of programming language evolution with modern AI capabilities, creating a development paradigm that respects both human cognition and machine precision.

The journey from embedded DSLs to Lisp and Smalltalk, from there to language-oriented programming with Racket and MPS, and finally to AI-mediated translation between CNL and DSL represents a continuous refinement of how we bridge human intent and machine execution.

We haven’t eliminated programming — we’ve distributed it across multiple layers, each optimized for different participants in the development process. The business analyst specifies rules in constrained English. The AI agent translates to a business rules DSL. The DSL compiler generates optimized code. The runtime executes it efficiently.

Everyone stays in their domain of expertise. Everyone works at their appropriate level of abstraction. The system remains transparent and verifiable at every level.

Conclusion: Vibe Coding’s Real Promise

The dream of natural language programming isn’t dead — it’s being reborn as something more sophisticated, more practical, and more aligned with how we actually build complex systems. Vibe coding isn’t about throwing away formal methods; it’s about making them accessible to those who understand domains without requiring them to become language lawyers or compiler engineers.

Language-oriented programming provides the tools to create formal languages optimized for specific domains. Constrained natural language provides human-accessible interfaces to those formal languages. AI agents provide the translation layer that was missing for sixty years.

Together, they might finally make COBOL’s vision real: business-oriented development where domain knowledge directly drives system behavior. Not through the naive dream that natural language is already executable, but through the sophisticated reality that we can build formal systems at every level of abstraction and use AI to mediate between them.

Welcome to the future of agentic development, where the programming language is neither purely natural nor purely formal, but something carefully designed to bridge both worlds — with AI agents serving not as replacements for human expertise, but as amplifiers of it.

The fourth generation of programming languages failed because they tried to skip too many steps. We’re not making that mistake again. We’re building the infrastructure — the DSLs, the CNLs, the translation layers — methodically, pragmatically, and with humans firmly in the loop.

It’s 2026, and maybe, just maybe, we’re finally ready to make this dream real.
