Why Engineers Can't Be Rational About Programming Languages
The neuroscience of why we make million-dollar decisions based on identity, not data.
Language choice is among the most expensive strategic decisions a technical leader will make, yet many CTOs choose to delegate it.
Every technical leader knows the pain: your team is debating languages, everyone has data, much of it conflicts, and somehow a decision that will determine 40-60% of your development costs is being made based on whoever argues most passionately.
There’s a reason these conversations feel broken. We’re having the wrong conversation entirely. We debate technical features when we should be evaluating economic impact. We argue about what languages can do when we should be measuring what they will cost.
In my roles at MongoDB, Docker, and Google, I’ve watched hundreds of companies treat programming language choice as a purely technical decision, often made by the engineering team, sometimes inherited from history. Given everything else a CTO has to worry about, it often seems like a decision best delegated to the team.
That made sense when language choice was mostly about whether a language could do the job at all. But languages have matured to the point where many of them can accomplish most tasks; the question is no longer “could it do the job” but “which is the right choice considering all the economic factors”.
The choice of language determines how expensive the job will be, how long it will take, and how reliable the result will be. Language choice has become a deeply strategic decision, one that requires moving the conversation from preference to performance, from opinion to economics.
We need a framework that makes invisible costs visible and ensures we’re evaluating what actually determines success: not which language your team prefers, but which language your business can afford. Part 1 of this series unpacked the emotional and identity drivers that keep teams stuck in the old debate; this post unveils the economic counterweight.
I started developing this framework during my time as product lead of Google’s Languages, Optimizations, and Libraries team, then refined it over several years working with dozens of companies facing exactly this decision. It evaluates the total lifecycle cost of a programming language across nine critical dimensions, organized into three domains.
This framework won’t prescribe a single right answer; your context determines that. But it will ensure you’re evaluating the right variables across your project’s full lifecycle, not just the features that make for passionate debates.
The investment in developer time to build and ship the first version, plus all subsequent features. This encompasses writing new code, understanding existing code, and modifying existing code. The vast majority of authoring happens within an existing system, not on greenfield projects.
Key considerations:
Initial Velocity vs. Sustained Velocity: Languages with simpler syntax, rich standard libraries, and features like automatic memory management often allow developers to write code faster initially. Python is known for rapid prototyping, but that velocity often plummets as project complexity grows. In contrast, languages like C++ or Rust demand more upfront consideration of memory management and data structures, which slows initial authoring but pays dividends in performance and safety later. Go strikes a balance between fast initial authoring and sustained velocity.
Readability and Cognitive Load: How easily can developers understand existing code? Languages with simple, consistent syntax and “one obvious way” to accomplish tasks reduce the mental overhead of comprehension. Go’s deliberate simplicity and lack of magic make codebases readable even years later. Languages with heavy metaprogramming, implicit behaviors, or multiple competing patterns increase the time spent understanding code before you can modify it.
Refactoring Safety: How confidently can you modify existing code? Static typing provides a safety net for changes (see the sketch after this list), while dynamic languages can make small changes quick at the expense of increased risk. Quality IDE tooling with reliable refactoring support dramatically reduces the cost of evolving a codebase.
Ecosystem Maturity: Are there excellent, maintained libraries for your needs? Or is the ecosystem a confusing thicket of duplicative or outdated options (a common complaint in the JavaScript and Java worlds)? Poor library support forces your team to build from scratch, dramatically increasing authoring cost. Even worse is choosing the wrong library due to a confusing ecosystem and being forced into costly migrations down the road.
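To make the refactoring point concrete, here is a minimal sketch in Go (the billing package and ChargeCustomer function are hypothetical) of how a statically typed language turns a risky change into a compiler-guided one: change a signature, and every stale call site fails to build.

```go
// Package billing is a hypothetical example of a compiler-guided refactor.
package billing

import "fmt"

// ChargeCustomer originally took an amount in float64 dollars; it now takes
// integer cents. Every caller still passing a float64 fails to compile, so
// stale call sites are found by the compiler instead of by a production bug.
func ChargeCustomer(customerID string, amountCents int64) error {
	if amountCents <= 0 {
		return fmt.Errorf("invalid amount %d for customer %s", amountCents, customerID)
	}
	// ... issue the charge ...
	return nil
}

// Before: ChargeCustomer("cust_123", 19.99)  // now a compile error
// After:  ChargeCustomer("cust_123", 1999)   // updated call site
```

In a dynamic language the same change ships silently and fails at runtime, usually in whichever code path your tests missed.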
How well a language enforces structure and prevents chaos as both the codebase and the team expand. A language that is delightful for a single developer can become a “big ball of spaghetti” for a team of one hundred working on millions of lines of code.
Critical capabilities:
Module Systems and Interface Definitions: Strong module boundaries and clear interface contracts prevent different parts of the codebase from becoming entangled (a minimal sketch follows this list). Languages with weak encapsulation allow dependencies to creep in unchecked, making large codebases nearly impossible to reason about.
Concurrent Development Support: Can multiple developers work on different parts of the system without constantly stepping on each other’s toes? This requires good package management, clear dependency boundaries, and minimal global state.
Documentation and Knowledge Transfer: Built-in documentation generators (like Go’s godoc or Java’s Javadoc) and conventions for self-documenting code are essential as teams scale. Without these, knowledge lives only in developers’ heads, creating bottlenecks and risk.
Complexity Management: Does the language provide features that help developers work together and maintain understanding, or does it enable incomprehensible complexity? Excessive abstraction, metaprogramming, and “clever” features can make code impossible to understand at scale.
Dependency Management at Scale: How does the language handle diamond dependencies, version conflicts, and transitive dependencies when you have hundreds of libraries? This is related to Ecosystem Maturity (under Authoring) but is specifically about managing complexity as dependencies proliferate.
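As a concrete illustration of the module-boundary and documentation points above, here is a minimal Go sketch (the package and type names are hypothetical): the exported interface is the only contract other packages can see, and the doc comments feed godoc with no extra tooling.

```go
// Package userstore persists user records. This doc comment, and the ones
// below, are rendered by godoc automatically.
package userstore

import "context"

// User is the record exposed to the rest of the system.
type User struct {
	ID    string
	Email string
}

// Store is the public contract. Callers depend on this interface, not on any
// particular database client, so the implementation can change without
// rippling through the codebase.
type Store interface {
	Get(ctx context.Context, id string) (User, error)
	Put(ctx context.Context, u User) error
}

// postgresStore is unexported; nothing outside this package can reach past
// the Store interface to depend on its details.
type postgresStore struct{ /* connection pool, etc. */ }
```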
Languages like Go, Java, and C# were explicitly designed with large-scale enterprise systems in mind, prioritizing clarity and structure over the individual-developer expressiveness and flexibility that create exponential complexity costs at scale.
How fast can you fill an open headcount and get that person productively shipping code?
Key factors:
Talent Pool Size: Languages with massive, vibrant communities like JavaScript, Python, or Go have vast talent pools, making it easier and faster to hire. Popular languages let you fill positions quickly at competitive salaries; niche languages force you to pay premiums for scarce talent or invest heavily in training developers from other backgrounds.
Learning Curve: How long before a new hire becomes productive? Some languages can be learned in weeks (Go, Python); others require months of investment (C++, Rust). A steep learning curve multiplies your onboarding cost with every new hire, extending the time before they can ship features independently. This includes ramp-up time on the project itself: the more complicated, customized, magical, and monolithic a codebase is, the longer it takes to become productive, even if you already know the language.
Community Resources: A vibrant community produces more tutorials, better documentation, Stack Overflow answers, and a wealth of third-party libraries and frameworks. These resources accelerate onboarding by helping new developers solve problems quickly rather than struggling in isolation or reinventing solutions.
Choosing a niche or obscure language might offer technical advantages, but you’ll pay the price in slower hiring, longer ramp-up times, and building more from scratch.
Plan for it: Track time to first meaningful PR for new hires across stacks.
The largest hidden tax on any software project, often dwarfing initial development expense. This encompasses the relentless, long-term effort required to fix bugs, refactor code, and adapt the system as it evolves.
Critical factors include:
Plan for it: Budget time for test coverage, runtime diagnostics, and upgrade windows.
The direct, operational cost of executing your code: your cloud provider bill for CPU, memory, and network I/O. This cost is driven by the language’s performance, efficiency, and runtime characteristics.
Key factors include:
Performance & Efficiency: Compiled languages like C++, Rust, and Go typically execute faster and consume fewer resources than interpreted ones. For a system processing millions of requests, a 10% efficiency improvement translates to significant savings.
Serverless Suitability: Languages with minimal runtimes and fast cold starts (Go, Rust) excel in serverless environments, while JVM-based languages may suffer multi-second startup penalties that impact both cost and user experience.
Hardware Needs: For specialized workloads like machine learning, this cost includes expensive GPU/TPU resources. The Python code itself may be minimal, but the underlying compute cost can dwarf all others.
Plan for it: Load test early and benchmark cost per transaction, not just latency.
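One way to do that, sketched below with a hypothetical handleRequest function, is to lean on Go’s built-in benchmarking: ns/op and allocations per op translate into instance sizing, and from there into an approximate cost per million requests.

```go
// A minimal benchmark sketch; handleRequest stands in for your real handler.
// In a real project the benchmark lives in a _test.go file.
package server

import (
	"encoding/json"
	"testing"
)

func handleRequest(payload []byte) (int, error) {
	var req map[string]any
	if err := json.Unmarshal(payload, &req); err != nil {
		return 0, err
	}
	return len(req), nil
}

// Run with: go test -bench=HandleRequest -benchmem
func BenchmarkHandleRequest(b *testing.B) {
	payload := []byte(`{"user":"alice","items":[1,2,3]}`)
	b.ReportAllocs() // allocations per op drive memory sizing, and therefore cost
	for i := 0; i < b.N; i++ {
		if _, err := handleRequest(payload); err != nil {
			b.Fatal(err)
		}
	}
}
```

As a back-of-the-envelope illustration with made-up numbers: if a service needs 100 always-on instances at $0.10 per hour, a 10% efficiency gain frees roughly 10 instances, or about $8,760 a year, before you even touch autoscaling.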
The operational price you pay to move code from a developer’s machine to production. This isn’t the cost of running the code, but the cost of the build, test, and release process itself. It is paid with every commit, so a slow or fragile deployment process acts as a significant tax on your team’s agility and ability to ship features quickly.
Key cost drivers include:
Build/CI Speed: Languages with slow incremental builds (Rust, C++) or test suites (Java) impose direct dollar costs in CI/CD minutes on platforms like GitHub Actions, potentially reaching thousands of dollars monthly for large projects.
Artifact Complexity: A language like Go, which compiles quickly to a single binary, offers a simple and fast deployment story. In contrast, Python or TypeScript require packaging a runtime and numerous dependencies into larger container images, increasing pipeline execution time, storage costs, and complexity.
Plan for it: Track pipeline minutes and artifact size as first-class cost metrics.
A modern and increasingly vital factor: how effective AI-powered coding assistants like GitHub Copilot and Gemini for Code are with the language. These tools are trained on vast amounts of public code, so their quality varies dramatically by language.
Key factors affecting AI assistance quality:
Open Source Footprint: Languages with extensive public codebases provide abundant training data, resulting in better, more accurate suggestions.
API Consistency: Languages with stable, consistent APIs and “one obvious way” to accomplish tasks generate reliable suggestions (a short sketch follows this list). Languages with multiple competing patterns or ways to do the same thing confuse LLMs, resulting in buggy and insecure code.
Stability and Churn: Frequent breaking changes, evolving idioms, and version fragmentation reduce AI effectiveness. When patterns change rapidly, the AI suggests outdated approaches that no longer work, creating dependency conflicts and compatibility issues.
Readability and Cognitive Load: The factors that make code easier for humans to understand (covered in Authoring Cost) matter doubly for LLMs. Simple, explicit syntax with minimal “magic” helps AI assistants generate correct code. Heavy metaprogramming, implicit behaviors, and complex abstractions confuse AI models just as they confuse human developers.
Context Window Limitations: AI models can only see a limited amount of surrounding code at once. This makes the Project Scale factors (clear module boundaries, self-documenting code, and consistent patterns) critically important. Languages that enable sprawling, interconnected codebases make it nearly impossible for AI to understand enough context to provide useful suggestions.
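To illustrate the consistency point, here is a minimal Go sketch (loadConfig is a hypothetical helper) of the language’s single idiomatic error-handling shape; because public Go code repeats this exact pattern millions of times, an assistant trained on it tends to reproduce it correctly.

```go
// A minimal sketch of "one obvious way": check the error, wrap it with
// context, return. The uniformity is what makes AI suggestions reliable.
package config

import (
	"fmt"
	"os"
)

func loadConfig(path string) ([]byte, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		// Always the same shape, so assistants rarely get it wrong.
		return nil, fmt.Errorf("loading config %q: %w", path, err)
	}
	return data, nil
}
```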
Effective AI assistance acts as a productivity multiplier, accelerating initial authoring, suggesting bug fixes, and even helping with documentation. Poor AI support puts your team at a competitive disadvantage, essentially forcing them to work without a powerful modern tool that their competitors are leveraging.
The cost of making the language work with your existing systems. Few large-scale projects are built in a vacuum; they almost always need to communicate with legacy services, databases, cloud platforms, third-party APIs, and systems written in other languages.
Critical capabilities:
Foreign Function Interface (FFI): How easily can you call C/C++ libraries? Many critical systems, databases, and performance-sensitive components are written in C or C++. A language with a clean, well-documented FFI (like Go’s cgo or Rust’s FFI) enables you to leverage existing code (see the sketch after this list); a poor or cumbersome FFI forces complete rewrites of battle-tested libraries.
Data Exchange Formats: Does the language have mature, well maintained libraries for common data formats like Protocol Buffers, Avro, JSON, MessagePack, or Thrift? Can it efficiently serialize and deserialize data for communication with other services? Poor support means writing custom serialization code or accepting significant performance penalties.
Ecosystem Integration: Are there quality libraries for the databases you use (PostgreSQL, MySQL, MongoDB, Redis)? For the message queues you rely on (Kafka, RabbitMQ, AWS SQS)? For the cloud platforms you deploy to (AWS, GCP, Azure)? Immature or abandoned libraries force you to maintain forks or build from scratch, dramatically increasing cost.
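For a sense of what a clean FFI looks like in practice, here is a minimal cgo sketch calling the C standard library; note how the manual C.free shows that crossing the boundary reintroduces C’s manual memory management.

```go
// A minimal cgo sketch: calling libc's atoi from Go.
package main

/*
#include <stdlib.h>
*/
import "C"

import (
	"fmt"
	"unsafe"
)

func main() {
	cs := C.CString("42")            // allocates on the C heap
	defer C.free(unsafe.Pointer(cs)) // crossing the FFI boundary means manual frees again
	fmt.Println(int(C.atoi(cs)))     // prints 42
}
```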
Poor interoperability creates an isolated silo, forcing costly rewrites of working systems or the creation of complex, fragile bridge services that add latency, failure modes, and operational overhead.
The cost of preventing, detecting, and remediating security vulnerabilities and the potential damage to your organization from a breach. A language ecosystem’s design can be either a safety net or a minefield.
Critical security factors:
Memory Safety: Languages that offer memory safety by default (Rust, Go, Java, C#) eliminate entire classes of critical vulnerabilities like buffer overflows and use-after-frees that plague C and C++. This is a huge advantage! Memory safety issues account for roughly 70% of severe security vulnerabilities in systems software.
Package Manager & Supply Chain Risk: This is massive and often overlooked. Modern applications depend on hundreds of open-source packages, and the security posture of the ecosystem directly impacts your vulnerability surface. npm has a history of malicious packages; PyPI faces typosquatting campaigns. More curated ecosystems like Go modules or Rust’s crates.io provide better security guarantees. This includes the continuous cost of auditing dependencies, running security scans, and responding to advisories: work that never ends, and where a lapse can have catastrophic consequences.
Integrated Tooling: Built-in or ecosystem-supported tooling dramatically reduces the cost of securing software. Go ships with integrated fuzzing (see the sketch after this list), go vet, govulncheck (tied directly to Go’s vulnerability database and module proxy), and other security tools. In contrast, npm’s ecosystem is vast but filled with deprecated, insecure, and abandoned packages; npm audit exists, but false positives and noise overwhelm developers, and tooling consistency is poor across the ecosystem.
Dependency on C Libraries: Languages that depend on C libraries (such as OpenSSL) inherit their vulnerabilities. The Heartbleed bug in OpenSSL compromised an estimated 17% of all secure web servers worldwide, including major services from Yahoo to the Canada Revenue Agency. Any application using the vulnerable OpenSSL versions, including web servers like nginx and Apache and applications written in Python, Ruby, PHP, Node.js, Java, Rust, and others, was vulnerable regardless of the programming language’s own memory safety guarantees.
Go was uniquely unaffected because it ships its own native TLS implementation rather than depending on OpenSSL. After Heartbleed, the Rust community developed rustls as a memory-safe alternative, though many Rust applications still use OpenSSL through wrapper libraries. This demonstrates how dependency on C libraries creates hidden systemic risk across entire language ecosystems.
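As a small example of what integrated tooling buys you, here is a sketch of Go’s built-in fuzzing; the ParseAge function and its rules are hypothetical, but the fuzzer itself ships with the standard toolchain and runs via go test -fuzz, with no extra dependencies.

```go
// A minimal fuzzing sketch; in a real project this lives in a _test.go file.
package age

import (
	"fmt"
	"strconv"
	"testing"
)

// ParseAge converts a string to an age, rejecting negatives and absurd values.
func ParseAge(s string) (int, error) {
	n, err := strconv.Atoi(s)
	if err != nil || n < 0 || n > 150 {
		return 0, fmt.Errorf("invalid age %q", s)
	}
	return n, nil
}

// FuzzParseAge throws generated inputs at ParseAge, looking for panics or
// results that violate its invariant. Run with: go test -fuzz=FuzzParseAge
func FuzzParseAge(f *testing.F) {
	f.Add("42") // seed corpus
	f.Fuzz(func(t *testing.T, s string) {
		n, err := ParseAge(s)
		if err == nil && (n < 0 || n > 150) {
			t.Fatalf("ParseAge(%q) = %d, out of range", s, n)
		}
	})
}
```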
A language with memory safety, a secure standard library, strong ecosystem tooling, and a community that prioritizes security can dramatically reduce both your ongoing security costs and the potential damage from breaches.
Did I miss something or get it wrong? Join the conversation on X, Bluesky, or LinkedIn.
This framework is a living document. As languages evolve and new considerations emerge, the factors may be updated to reflect current realities.
Catch Up or Look Ahead
Missed Part 1? Why Engineers Can’t Be Rational About Programming Languages explores the invisible decision drivers that make this framework necessary.
Part 3, where we apply the nine factors to real languages, is coming soon.
