You were told to move fast. You were told the tools had arrived, that the boring parts of your job were over, that anyone not adopting AI into every layer of their workflow would be left behind. That voice was confident, persuasive, and everywhere. It also left out nearly everything that matters.
What got skipped was the part where all of this comes at a cost that nobody selling you that future wanted to name. The cost is not primarily financial. It is cognitive, systemic, and compounding. And the language you need to describe it has existed for over seventy years, gathering dust while the hype cycle burned through your attention span.
The World You Were Handed

In almost every dimension of human life right now, we are being buried in AI slop. Product reviews, ad copy, code reviews, robo-calls, job applications, resumes, movies, art, music, search results, and the entirety of LinkedIn for the past year. Everything you see or read seems like it could be fabricated, but there is not enough time in a lifetime to sort it all out. It is as though the world’s engineers, data centers, power plants, and investors have joined forces around one shared ambition: to absolutely overwhelm you with high-volume, low-quality information.
The person you were listening to before framed this as a temporary mess on the way to something great. A transitional cost. Growing pains. What they never gave you was a framework for understanding why the mess behaves the way it does, why it is getting worse in predictable ways, and what your actual options are. They gave you enthusiasm when you needed vocabulary.
The vocabulary you needed is called cybernetics.
The Problem Nobody Wanted to Name
Here is the example that should have been front and center in every conversation about AI-assisted development: reviewing AI-generated code is nauseating. Producing it is exciting. That asymmetry is the entire problem, and almost no one talking to you about these tools wanted to linger on it.
The first time a code assistant generates a full object-relational mapping layer for you, it feels magical. Writing those by hand is tedious, uninteresting work. But the first time someone sends you a pull request and asks you to review thousands of lines of AI-generated ORM is a different experience entirely. The mistakes are almost invisible. Variables declared multiple times. References to the library that is typically imported in code like this rather than the one that actually is. Objects that do not exist yet being called with confidence. Logic bugs in unit tests that make them structurally impossible to fail. Sometimes the tool rewrites parts of the standard library or overloads standard methods. These mistakes are sometimes valid code. They compile. They might even run. But they are profoundly illogical and maddening to read.
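To make the “impossible to fail” failure concrete, here is a minimal, invented example of the shape these bugs take. The function and test are hypothetical, not drawn from any real review, but the pattern, a test that recomputes its expected value with the same formula it is supposed to check, is exactly the kind of mistake that compiles, runs, passes, and verifies nothing.

```python
# Hypothetical illustration: a unit test that is structurally impossible to fail.

def apply_discount(price: float, percent: float) -> float:
    """Return the price after applying a percentage discount."""
    return price * (1 - percent / 100)


def test_apply_discount():
    price, percent = 100.0, 20.0
    # The "expected" value is recomputed with the exact formula under test,
    # so the assertion holds even if that formula is wrong.
    expected = price * (1 - percent / 100)
    assert apply_discount(price, percent) == expected
```

A reviewer skimming this sees a test with an assertion and moves on. A suite full of them goes green no matter what the code does.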
When humans reach the edge of their knowledge, they degrade in human ways. They ask for help. They bang their head on the desk. They make elementary mistakes that other humans are prepared to recognize.
When LLMs reach that same edge, they degrade in a way that is best described as one percent dissimilar to functioning code. Or perhaps “the most statistically accurate imitation of the right answer.” Think of the AI fingers problem in generated images, except this version pages you at two in the morning. These failures are, by their very nature, hard to detect because they sit so close to working code. Humans are not ready to recognize them. They are the code equivalent of an optical illusion.
The person who told you these tools would make your team faster did not mention this part. Or if they did, they waved it away.
The Framework You Should Have Been Given

Variety, in cybernetics, describes the number of potential states a system can take. A traffic light can be red, yellow, green, or unpowered. Four states. That small number makes traffic lights almost effortless to process. They fit in a busy brain nearly unconsciously, and the systems that control them can be extremely simple.
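Expressed as code, the idea is almost nothing: the enum below simply restates the traffic light, with variety as the count of its distinct states.

```python
from enum import Enum

class TrafficLight(Enum):
    RED = "red"
    YELLOW = "yellow"
    GREEN = "green"
    UNPOWERED = "unpowered"

# Variety, in the cybernetic sense, is the number of distinct states.
variety = len(TrafficLight)  # 4
```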
Ross Ashby, an early contributor to cybernetics who focused on self-regulating systems and complexity, formulated the Law of Requisite Variety: “The variety in the control system must be equal to or larger than the variety of the perturbations in order to achieve control.” It is often condensed to “only variety can destroy variety.”
Shannon Entropy, named after Claude Shannon, is a foundational concept in information theory. It measures the uncertainty or unpredictability of a system, quantifying how much information is contained in a message or how uncertain we are about the outcome of a random variable.
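In formula form, H(X) = -Σ p(x) log2 p(x), measured in bits. The sketch below follows directly from that definition and ties it back to the traffic light; the function name is mine, the math is Shannon’s.

```python
import math

def shannon_entropy(probabilities):
    """H(X) = -sum(p * log2(p)), in bits; zero-probability states contribute nothing."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

# Four equally likely traffic-light states: two bits of uncertainty.
print(shannon_entropy([0.25] * 4))             # 2.0

# 50,000 equally likely states: roughly 15.6 bits.
print(shannon_entropy([1 / 50_000] * 50_000))  # ~15.61
```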
Variety and entropy are related. The more states a system can take, the more entropy it can carry. If you have ever scaled software products, you already know this intuitively. Run 50,000 instances of a given architecture and somewhere out there, one of them is broken in a way nobody has ever seen.
In the code review example, the variety your AI coding assistant is capable of exceeds the variety any single reviewer is capable of processing. The model is an amalgam of every coding style it was possible to consume. You and I are an amalgam of one lifetime. This is exactly why teams employ coding standards: to reduce the variety of code produced, scoping it to a common set of patterns that align with the variety reviewers can handle.
The entropy in these models is also qualitatively different from human output. A system with a vast number of possible outputs has very high entropy. Balancing that entropy is critical to making these tools useful, but the further a model is tuned toward the “safe,” low-entropy end, the more likely it is to give you something bland like “do a loop,” which is not impressive enough to sell. These are frothy times in the AI market. Every coding assistant is going to be tuned to the maximum entropy a buyer will tolerate for the foreseeable future.
Nobody who was selling you on the revolution wanted to explain this tradeoff.
What You Can Actually Do About It

Overwhelming a system, or your coworker, with high-variety output is easily mistaken by both sides for intelligence. And who does not want to use the magic “be smarter” machine? But doing this deliberately is an act of aggression. Ask Steve Bannon, for whom flooding regulatory and legislative systems is a primary tactic. Or observe any heated debate, where techniques like the Gish gallop are deployed as a matter of course.
Intelligence and variety might be correlated. Vocabulary and range of experience are associated with intelligence for good reason. But that is not the whole story. Most of us can point to at least one well-spoken fool who made our lives worse. Intention matters.
Good engineers make complex tasks simple, specifically to reduce and attune their variety to the level that the systems involved can accept. They write code for the receiver: the operators, the reviewers, the customers. They write code for their future selves, when they have forgotten every detail and someone brings them a bug months later. This is because they understand that variety is risk.
You have two options for managing that risk.
Increase the Variety of Your Control System
This means employing AI in code reviews, matching the variety of the output with an equally capable regulator. This might work in coding, where the same parties control both sides. But it is not hard to see how ugly this gets in domains where that is not true: job applications, the legal system, cybersecurity. At those edges, you get a disastrous arms race of competing AIs.
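Within a single team, the mechanics are simple enough to sketch: a generated diff has to clear a chain of reviewers, each of which could wrap an independent model, before a human ever sees it. What follows is a sketch of that structure only, not a real integration; the reviewer type, the function names, and the placeholder check are all invented, and an actual model call would plug in where the placeholder sits.

```python
from typing import Callable

# A reviewer takes a diff and returns (approved, notes). In practice each one
# would wrap an independent model; everything here is illustrative.
Reviewer = Callable[[str], tuple[bool, str]]

def gate_generated_diff(diff: str, reviewers: list[Reviewer]) -> bool:
    """Every reviewer in the chain must approve before the diff reaches a human."""
    for review in reviewers:
        approved, notes = review(diff)
        if not approved:
            print(f"blocked: {notes}")
            return False
    return True

# Placeholder where a model-backed reviewer would go: a trivial textual check.
def leftover_marker_reviewer(diff: str) -> tuple[bool, str]:
    markers = [m for m in ("TODO", "FIXME") if m in diff]
    return (not markers, f"unfinished markers in diff: {markers}" if markers else "ok")
```

The reason this can work in coding is visible in the shape of the sketch: the same team writes the generator’s constraints and the reviewers’ checks, so both sides of the loop answer to the same people.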
Strategically Decrease the Variety of the System Under Control
In humans, this looks like the senior engineer who simplifies aggressively, reducing the number of ways a given thing is accomplished so that the code is maintainable and comprehensible. In AI, this means training your model on your coding standards, constraining it to smaller changes, and supervising it heavily.
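The “smaller changes” half of that can be made mechanical. Below is a hedged sketch of a pre-merge check that refuses any change exceeding a reviewable size or straying outside an agreed scope. The thresholds, paths, and function names are invented for illustration; only the git diff --numstat plumbing is standard.

```python
import subprocess

# Illustrative limits; what counts as "reviewable" is a team decision.
MAX_CHANGED_LINES = 300
ALLOWED_PREFIXES = ("src/", "tests/")

def changed_lines(base: str = "origin/main") -> dict[str, int]:
    """Map each changed file to added + removed lines, using git diff --numstat."""
    out = subprocess.run(
        ["git", "diff", "--numstat", base],
        capture_output=True, text=True, check=True,
    ).stdout
    counts: dict[str, int] = {}
    for line in out.splitlines():
        added, removed, path = line.split("\t")
        if added == "-":  # binary files report "-" for line counts
            continue
        counts[path] = int(added) + int(removed)
    return counts

def change_is_reviewable() -> bool:
    counts = changed_lines()
    total = sum(counts.values())
    out_of_scope = [p for p in counts if not p.startswith(ALLOWED_PREFIXES)]
    if total > MAX_CHANGED_LINES or out_of_scope:
        print(f"refusing: {total} changed lines, out of scope: {out_of_scope}")
        return False
    return True
```

A check like this is variety attenuation in the plainest sense: it does not make the model smarter, it shrinks what the model is allowed to hand you down to what a reviewer can actually absorb.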
What Comes Next
The expert you followed before gave you momentum without direction. Cybernetics is not easy. As an interdisciplinary field, it guarantees you will find yourself outside your comfort zone. But the multifaceted, compounding threats that AI introduces to technical systems make this study worth your time. If you want an entry point, consider Norbert Wiener’s The Human Use of Human Beings or Stafford Beer’s 1973 Massey Lectures, Designing Freedom.
It remains unclear whether these systems will reach parity with humans, let alone exceed them. LLMs have passed the Turing Test, but they managed it by consuming nearly everything humanity has ever produced. We cannot generate another millennium of media in a year. Some believe cost and computation will cap progress here. Others believe AGI is imminent. This moment in history reminds me of the era when we called the internet cyberspace and believed we would all live inside Lawnmower Man-grade virtual reality.
In other words, it is the season of wildly ambitious, possibly stupid ideas. The question is whether you will navigate it with real frameworks or borrowed hype. That choice, at least, is still yours.

