In this post
Artificial intelligence is often described in sweeping terms—machines that think, learn, or even replace us. But beneath the headlines lies a deeper question: what do we actually mean by intelligence? This article explores the foundational definitions of AI, why they differ, and how they shaped the systems we build today.
Why intelligence matters
We call ourselves Homo sapiens—the wise species—because intelligence defines our ability to understand, predict, and shape the world. Artificial intelligence extends that ambition: not just to study intelligence, but to build entities that can act effectively in complex, unfamiliar situations.
AI is already economically significant and scientifically unfinished. Unlike older disciplines where the giants of the past mapped most of the territory, AI remains wide open. The core ideas are still being argued over—and that’s a feature, not a flaw.
Four ways to define AI
Historically, researchers have disagreed along two key axes:
- Should intelligence be measured against humans, or against an abstract standard of correctness?
- Should we focus on internal thinking, or on external behavior?
From these tensions, four classic approaches emerged.
Acting humanly
This view asks whether a machine can behave indistinguishably from a human. The most famous example is the Turing Test, where a system succeeds if a person cannot reliably tell whether they are talking to a human or a machine.
Passing such a test requires more than clever responses: at minimum, natural language processing, knowledge representation, automated reasoning, and machine learning, and in its "total" form involving video and physical objects, perception and robotics as well. Yet most AI researchers eventually moved away from this goal, seeing imitation as less important than understanding the underlying principles.
Thinking humanly
Another approach aims to model how humans actually think. This requires insights from psychology, neuroscience, and cognitive science—using experiments, brain imaging, and computational models to test theories of the mind.
These systems are not judged solely by whether they get the right answer, but by how they arrive at it. While valuable for understanding human cognition, this path is constrained by how little we still know about the brain itself.
Thinking rationally
Long before computers, philosophers sought formal rules for “right thinking.” Logic, probability, and mathematical reasoning offered ways to derive correct conclusions from known facts.
This tradition influenced early AI systems built on symbolic logic. But perfect reasoning requires perfect information—something the real world rarely provides. Uncertainty, incomplete data, and limited computation make pure logical reasoning insufficient on its own.
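To make this concrete, here is a minimal sketch of rule-based inference in Python. The facts and if-then rules are invented for illustration; the loop simply applies any rule whose premises are already known until no new conclusions appear.

```python
# A toy forward-chaining reasoner: starting from known facts, repeatedly
# apply if-then rules until nothing new can be derived.
# The facts and rules are illustrative, not from any particular system.

facts = {"rains", "have_umbrella"}
rules = [
    # (premises, conclusion): if every premise is a known fact, add the conclusion.
    ({"rains"}, "ground_wet"),
    ({"rains", "have_umbrella"}, "stay_dry"),
    ({"ground_wet"}, "wear_boots"),
]

derived_something = True
while derived_something:
    derived_something = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            derived_something = True

print(sorted(facts))
# ['ground_wet', 'have_umbrella', 'rains', 'stay_dry', 'wear_boots']
```

Real logical reasoners work over far richer languages, but the core move is the same: conclusions follow mechanically from what is already known.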
Acting rationally
The most influential definition of AI today focuses on rational agents: systems that perceive their environment and take actions to achieve the best possible outcome, given what they know.
Rationality does not require human-like thought. A reflex can be rational. A statistical model can be rational. What matters is whether the system consistently chooses actions that advance its objectives under uncertainty.
This framing connects AI to control theory, economics, statistics, and operations research. It also gives us something engineers love: a clear specification.
Key idea: Modern AI is best understood as the study and construction of agents that do the right thing, according to a defined objective.
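As a sketch of what that specification can look like, the toy agent below picks whichever action has the highest expected utility under its current beliefs. The action names, probabilities, and utilities are assumptions made up for this example, not a real driving policy.

```python
# Toy rational agent: choose the action with the highest expected utility,
# given a belief (probability distribution over outcomes) for each action.
# All numbers here are invented for illustration.

def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs for one action."""
    return sum(p * u for p, u in outcomes)

def choose_action(action_models):
    """action_models: dict mapping action name -> list of (probability, utility)."""
    return max(action_models, key=lambda a: expected_utility(action_models[a]))

beliefs = {
    "brake":      [(0.99, +1.0), (0.01, -5.0)],   # almost always safe, small cost
    "accelerate": [(0.80, +2.0), (0.20, -50.0)],  # faster, but risks a bad outcome
}

print(choose_action(beliefs))  # -> "brake"
```

The point is not the numbers but the shape of the definition: given beliefs and an objective, "doing the right thing" becomes a computable choice.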
The limits of perfect objectives
Here’s the catch: real-world objectives are rarely fully specifiable.
A self-driving car should be safe—but how safe? It should be efficient—but not reckless. It should obey rules—but also adapt to human behavior. Encoding these tradeoffs perfectly is impossible.
This mismatch between human values and machine objectives is known as the value alignment problem. A highly capable system pursuing the wrong objective can behave in ways that are perfectly logical—and deeply undesirable.
The solution is not smarter optimization, but humble intelligence: systems that recognize uncertainty about their goals, learn from humans, act cautiously, and defer control when needed.
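One way to picture that humility, as a purely illustrative sketch rather than a real alignment method: give the agent several candidate objectives instead of a single fixed goal, and have it defer to a human whenever those candidates disagree about what to do. The objective names and scores below are invented.

```python
# Illustrative sketch of goal uncertainty: the agent entertains several
# candidate objectives. If they disagree about the best action, it defers
# to a human rather than acting. Names and numbers are made up.

candidate_objectives = {
    "minimize_travel_time": {"overtake": 0.9, "hold_lane": 0.4},
    "maximize_safety":      {"overtake": 0.1, "hold_lane": 0.8},
    "obey_rules":           {"overtake": 0.3, "hold_lane": 0.9},
}

def choose_or_defer(objectives):
    # The action each candidate objective considers best.
    preferred = {max(scores, key=scores.get) for scores in objectives.values()}
    if len(preferred) > 1:
        return "defer_to_human"   # the objectives disagree: ask rather than act
    return preferred.pop()        # the objectives agree: act on it

print(choose_or_defer(candidate_objectives))  # -> "defer_to_human"
```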
From intelligence to beneficial intelligence
The traditional “rational agent” model assumes we can fully specify what we want. In reality, we can’t. As AI systems become more capable and autonomous, this assumption breaks down.
The long-term challenge of AI is not just intelligence, but beneficence: building systems that are provably aligned with human intentions, even when those intentions are incomplete, implicit, or evolving.
This shift—from optimizing fixed objectives to cooperating with human values—may define the next era of AI research.
Bottom line
Artificial intelligence is not one thing. It is a family of ideas about intelligence, action, reasoning, and responsibility. The most powerful systems today descend from a simple principle: rational action under uncertainty.
But the future of AI will depend less on raw capability and more on judgment—on how machines decide what to optimize, when to act, and when not to.
Understanding these foundations isn’t academic nostalgia. It’s how we decide what kind of intelligence we want to build.
