The Legacy of Go
This presentation was developed for GoLab in Firenze, Italy, and delivered as the closing keynote …
How a teenage printer’s writing exercise reverse-engineered the timeless principles of learning.
The core algorithm that powers today’s most advanced artificial intelligence wasn’t born in Silicon Valley, and it has nothing to do with computers. It was engineered in the dusty, ink-stained backroom of a Boston print shop nearly 300 years ago, by a teenager who was tired of being told he wasn’t a good writer.
Long before he was a world-famous statesman and inventor, Benjamin Franklin was an ambitious, self-taught apprentice with a chip on his shoulder. Working for his older brother, he devoured books and secretly submitted anonymous essays to his brother’s newspaper, desperate to succeed in this world of words. But he had a problem: while his ideas were sharp, his prose was clumsy.
His own father, Josiah, delivered a blunt critique: Ben’s writing lacked “elegance of expression, method and perspicuity.”[1] For an aspiring intellectual, this was a devastating diagnosis. It confirmed his fear that he simply didn’t have the innate ‘gift’ for writing.
Instead of giving up, Franklin did something extraordinary. He rejected the idea of ‘innate talent’ and treated his flawed writing not as a personal failure, but as an engineering problem. He asked himself: Is there a system, an algorithm, that can build the skill of writing from the ground up? In effect, he decided to debug his own brain.
What Franklin developed wasn’t just a writing exercise; it was something far more profound. His process reveals something fundamental about how intelligence, artificial or human, actually improves. When we examine his method through a modern lens, we discover he had unknowingly architected the same learning principles that power today’s AI.
Franklin’s method can be seen as a kind of human-powered, conceptual gradient descent, where conscious insight replaces calculus to minimize the “error” between his writing and his goal. By deconstructing his 300-year-old protocol, we can uncover a powerful blueprint for how anyone can master a complex skill.
Franklin’s breakthrough wasn’t just in learning to write better; it was in systematizing the process of improvement itself. He created a feedback loop that could be applied to any skill, making excellence reproducible rather than accidental.
Franklin’s methodical approach to mastering writing
Franklin chose articles from The Spectator, a respected British periodical, as his “training data.” He then began a four-step loop:
First, he converted the source material into a compressed format. In his words, he took several articles and made “short hints of the sentiment in each sentence”[2]. This forced him to distill the core essence of the writing, much like an AI model extracts key features from raw data.
After a few days, a delay he instinctively knew would prevent simple memorization, he would attempt to reconstruct the original article in full sentences from his “short hints.” This was his prediction, an attempt to generate output from his compressed understanding.
He then executed the most critical step: “I compared my Spectator with the original, discovered some of my faults, and corrected them”[2]. This direct comparison between his output and the “ground truth” was his loss function, generating a clear error signal that highlighted specific weaknesses.
Franklin leaned into his errors; he used them to update his internal model. By meticulously correcting his faults, he was iteratively adjusting his own mental parameters (his vocabulary, sense of structure, and style) to reduce the errors in the next attempt.
This process exemplifies what psychologist Anders Ericsson, the originator of the concept, called “deliberate practice”[3]. As Ericsson noted, it’s not about mindlessly repeating a task, but about a “highly structured activity, the explicit goal of which is to improve performance.” Franklin’s method, with its clear goals, feedback mechanism, and relentless focus on correcting errors, is perhaps one of the earliest and purest examples of this principle in action.
Franklin’s protocol offers a striking parallel to a modern machine learning training loop:
| Benjamin Franklin’s Method | Modern Machine Learning Equivalent |
|---|---|
| Selects high-quality articles from The Spectator | Data Collection: A curated, high-quality dataset is assembled for training |
| Reconstructs articles from “short hints” | Forward Pass: The model makes a prediction based on an input |
| Compares his version to the original to find “faults” | Loss Function: The model’s prediction is compared to the correct output to calculate an error |
| Meticulously corrects his prose based on the errors found | Backpropagation & Gradient Descent: The error is propagated backward to adjust the model’s internal weights, minimizing future errors |
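To make the analogy concrete, here is a minimal, illustrative sketch of that loop in Go: a toy linear model fitted by gradient descent on a handful of made-up points. The dataset, learning rate, and model are invented purely for illustration; nothing here is tied to a real framework or to Franklin’s actual notes.

```go
package main

import "fmt"

func main() {
	// A tiny, curated "training set": Franklin's Spectator articles, in miniature.
	xs := []float64{1, 2, 3, 4}
	ys := []float64{3, 5, 7, 9} // the underlying pattern is y = 2x + 1

	// The learner's internal parameters, starting from ignorance.
	w, b := 0.0, 0.0
	lr := 0.01 // learning rate: how boldly each fault is corrected

	for epoch := 0; epoch < 2000; epoch++ {
		var gradW, gradB float64
		for i := range xs {
			pred := w*xs[i] + b       // forward pass: reconstruct from the "hints"
			diff := pred - ys[i]      // loss signal: compare with the original
			gradW += 2 * diff * xs[i] // how much each parameter contributed to the fault
			gradB += 2 * diff
		}
		n := float64(len(xs))
		w -= lr * gradW / n // gradient descent: adjust the parameters
		b -= lr * gradB / n // so the next attempt has smaller errors
	}

	fmt.Printf("learned model: y = %.2f*x + %.2f\n", w, b)
}
```

The comments map one-to-one onto the rows of the table above: data, forward pass, loss, and correction.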
The time delay Franklin built into his process is particularly brilliant. It served the same function as regularization techniques in modern AI, preventing him from “overfitting” by simply memorizing the source text and forcing him to generalize the underlying patterns of good writing. He even engaged in what we’d now call data augmentation, shuffling his notes to reconstruct arguments in a new order or converting prose to poetry and back again to develop a more robust and flexible mental model.
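As a purely illustrative sketch of that augmentation idea (the “hints” below are invented, not Franklin’s), shuffling the compressed notes before each reconstruction attempt looks something like this:

```go
package main

import (
	"fmt"
	"math/rand"
)

func main() {
	// Invented "short hints" distilled from an essay; illustrative content only.
	hints := []string{
		"open with the strongest claim",
		"support it with one vivid example",
		"anticipate the obvious objection",
		"close by restating the claim in new words",
	}

	// Augmentation: present the hints in a fresh order each round, so the
	// reconstruction must come from understanding, not a memorized sequence.
	rand.Shuffle(len(hints), func(i, j int) {
		hints[i], hints[j] = hints[j], hints[i]
	})

	for _, h := range hints {
		fmt.Println("reconstruct from hint:", h)
	}
}
```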
Anyone can use the Franklin Protocol to master a skill today, from coding to cooking to playing a musical instrument. The key is to stop passively consuming information and start building your own active learning loop.
The beauty of Franklin’s method lies in its universality. Whether you’re debugging code, perfecting a recipe, or learning a musical piece, the same principles apply: compress the knowledge, reconstruct from memory, compare against excellence, and iterate based on the gaps you discover.
Even when our primary ventures fail, we have to remember that there’s always money in the banana stand. The fundamental practices that seem mundane often contain the greatest value.
Modern applications across domains
Here’s how to build your own:
Every skill has a “ground truth”, a gold standard you can learn from. Franklin had The Spectator. For you, it might be a well-written codebase you admire, a classic recipe, or a recording by a musician you want to emulate.
The goal is to find an exemplary model to which you can compare your own work.
I used a very similar protocol when I was learning the Go programming language, treating the problem like a Code Kata. While I already programmed in other languages, I thought it was essential for me to master idiomatic Go. After learning the basics, I designed a series of katas in which I would take on a small, specific challenge: implement a common function from the `strings` package, such as `strings.Split` or `strings.Join`, from scratch, based on my current knowledge.
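By way of illustration, here is the kind of first attempt one of those katas might produce: a naive split written from scratch, before looking at the standard library. This is a sketch of the exercise, not the real strings.Split, and it deliberately ignores edge cases (such as an empty separator) that the library version handles.

```go
package main

import "fmt"

// mySplit is a first-attempt kata: split s around every occurrence of sep.
// It knowingly skips edge cases the real strings.Split covers, such as an
// empty separator.
func mySplit(s, sep string) []string {
	var parts []string
	start := 0
	for i := 0; i+len(sep) <= len(s); i++ {
		if s[i:i+len(sep)] == sep {
			parts = append(parts, s[start:i])
			start = i + len(sep)
			i += len(sep) - 1 // jump past the separator
		}
	}
	return append(parts, s[start:])
}

func main() {
	fmt.Println(mySplit("a,b,c", ",")) // [a b c]
}
```

Comparing an attempt like this against the standard library’s implementation and its tests is where the error signal comes from.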
For me, when learning Go, the most crucial step was not just seeing the difference, but going back and refactoring my own code based on those insights. Each cycle transformed the exercise from a simple task into a kata, a mindful practice where the goal wasn’t just to get a working solution, but to internalize the patterns of an expert. I even gave a presentation about my mistakes and how I learned from them.

This final step is what separates simple practice from true mastery. After refactoring based on your insights, ask the master’s question: “What pattern did I miss that an expert sees instantly?” This meta-analysis, thinking about your thinking, accelerates the internalization of expert mental models.
Today, modern AI models can make this step far more efficient, since they are excellent at identifying and evaluating the differences between two blocks of text and explaining why one is superior.
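As a sketch of how that comparison step might be mechanized (the prompt wording below is my own, and how you send it, whether through a chat window, an API, or a local model, is left open), you could assemble the request like this:

```go
package main

import "fmt"

// buildReviewPrompt prepares a prompt that asks a model to compare a kata
// attempt with a reference implementation. Sending it to a model is left to
// whichever tool you prefer; this only builds the text.
func buildReviewPrompt(attempt, reference string) string {
	return fmt.Sprintf(
		"Compare these two Go implementations of the same function.\n"+
			"Explain concretely why the reference version is better and which "+
			"idioms or patterns my attempt missed.\n\nMy attempt:\n%s\n\nReference:\n%s\n",
		attempt, reference)
}

func main() {
	attempt := "func mySplit(s, sep string) []string { /* ... */ }"
	reference := "strings.Split from the Go standard library"
	fmt.Println(buildReviewPrompt(attempt, reference))
}
```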
Franklin’s method worked because he intuitively understood that mastery is not about memorization, but about internalizing a generative model of a domain. He couldn’t store every article from The Spectator, but he could develop a system that recognized its patterns and could reproduce them in novel situations.
This principle extends far beyond Franklin’s era. It is the very essence of modern AI. Large language models don’t “know” anything; they are extraordinarily sophisticated pattern recognition and reproduction engines. They have learned the statistical patterns of language so well that they can generate coherent, novel text.[4][5]
The individuals who master any field are those who operate on the same principle. They don’t win because they have consumed the most tutorials, but because they have built the most effective systems for recognizing patterns and improving their own output.
Perhaps Franklin’s greatest insight wasn’t the method itself, but his rejection of the fixed mindset. In an era when talent was considered innate, he proved that expertise could be systematically constructed. This wasn’t just learning; it was learning how to learn — the ultimate meta-skill.
Three hundred years ago, a teenage printer reverse-engineered the algorithm of learning with nothing more than paper and ink. While Franklin’s method and machine learning operate in different domains, they share the same core architecture of systematic improvement through error correction.
Today, we have access to endless “Spectators” in every field imaginable. Franklin gave us the blueprint. It’s time we started building ourselves with it.
Theory is one thing; practice is another. If you’re inspired by this method, don’t just admire it, try it.
Pick a Micro Skill: Choose one small, specific skill you want to improve this week (e.g., writing a SQL query, refactoring a function, kneading bread, drawing a face)
Find Your Spectator: Find one single, high quality example of that skill done perfectly
Run One Loop: Spend 30 minutes trying to replicate it from memory. Then, compare your work to the original, and write down just three specific differences
That’s it. By running a single, conscious loop, you’ll have already started practicing like Franklin and learning like a neural network.
1. Benjamin Franklin, The Autobiography of Benjamin Franklin (1791). Franklin’s own account of his father’s critique of his writing style.
2. Benjamin Franklin, The Autobiography of Benjamin Franklin (1791). Franklin describes his method: “I compared my Spectator with the original, discovered some of my faults, and corrected them.”
3. Anders Ericsson, Peak: Secrets from the New Science of Expertise (2016). The seminal work on deliberate practice and how experts are made.
4. PNAS, “Learning in deep neural networks and brains with similarity”. Scientific paper on pattern recognition in neural networks and biological brains.
5. OpenAI, “Learning complex goals with iterated amplification”. Research on how AI systems learn and improve through iterative processes.