AI Is Made From the Same Boring Algorithms as Your Old Calculator
- Anoop Kumar, MD
A conversation with ChatGPT:
Give me an example of an algorithm that is used to code AI versus a regular algorithm that is not used to code AI. Make it an example that exemplifies why AI is different.
Great question! Here’s a clear example that shows the difference between an AI algorithm and a traditional algorithm:
AI Algorithm Example
Neural Network for Image Recognition
Use case: Classifying images of animals (e.g., cat vs. dog)
Algorithm: A convolutional neural network (CNN)
Why it’s AI:
The CNN learns patterns from data (e.g., fur texture, ear shape).
You don’t explicitly tell it, “cats have pointy ears, dogs don’t.”
Instead, it trains on thousands of labeled images and adjusts internal parameters to learn those distinguishing features on its own.
Key traits:
Can generalize to new, unseen images.
Gets better with more data.
Mimics aspects of how humans recognize visual patterns.
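To make this concrete, here is a minimal sketch of what such a classifier could look like in code, assuming PyTorch is available; the layer sizes, the 64x64 input, and the random stand-in images and labels are illustrative only, not a real training setup:

```python
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    """A deliberately small CNN sketch: two convolution blocks plus a linear classifier."""
    def __init__(self, num_classes=2):  # e.g., 0 = cat, 1 = dog
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),   # low-level patterns (edges, fur texture)
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),  # higher-level patterns (ear shape, faces)
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)  # assumes 64x64 input images

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

# "Learning" is nothing more exotic than repeated parameter updates against labeled examples:
model = TinyCNN()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

images = torch.randn(8, 3, 64, 64)    # stand-in for a batch of labeled photos
labels = torch.randint(0, 2, (8,))    # stand-in for cat/dog labels
optimizer.zero_grad()
loss = loss_fn(model(images), labels)
loss.backward()                       # compute gradients of the loss w.r.t. every parameter
optimizer.step()                      # nudge the parameters toward fewer mistakes
```

Notice that "pointy ears" never appears anywhere in the code; whatever distinguishing features the network picks up end up encoded in the numeric values of its convolution kernels after many such update steps.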
Traditional Algorithm Example
Sorting Numbers
Use case: Sorting a list of numbers from smallest to largest.
Algorithm: Merge Sort
Why it’s not AI:
It follows a fixed, human-written sequence of steps.
It doesn’t learn or adapt.
It always works the same way, regardless of the input data (beyond data size).
Key traits:
Fully predictable and deterministic.
No data-based learning or improvement.
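For contrast, merge sort can be written out in full; every step below is fixed in advance by the programmer, and nothing in it changes with experience (plain Python, no libraries needed):

```python
def merge_sort(nums):
    """Classic merge sort: split the list, sort each half, then merge the sorted halves."""
    if len(nums) <= 1:
        return nums
    mid = len(nums) // 2
    left = merge_sort(nums[:mid])
    right = merge_sort(nums[mid:])

    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):   # merge by repeatedly taking the smaller head
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]

print(merge_sort([5, 2, 9, 1, 5, 6]))  # [1, 2, 5, 5, 6, 9] -- same steps, every time
```

Run it on a million lists and it will not get any better at sorting; there is nothing in it that could.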
Why AI is Different
AI algorithms like neural networks learn from data and improve over time, often solving complex, unstructured problems (like recognizing faces or understanding speech). Traditional algorithms solve structured, rule-based problems and do exactly what they’re told—nothing more, nothing less.
In this case, the CNN algorithm would actually be composed of more traditional algorithms, right? For example, to learn what the patterns are among different images, simpler algorithms would have to be in place so that the AI knows how to learn. Is that correct?
Exactly—you're spot on. A CNN (or any AI model) is actually built on top of many traditional algorithms and mathematical operations. You can think of AI as a complex system that uses traditional algorithms as building blocks.
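For instance, the convolution at the heart of a CNN is itself just a fixed loop of multiplications and additions. A rough sketch with NumPy (the image and kernels here are random stand-ins):

```python
import numpy as np

def convolve2d(image, kernel):
    """Plain, deterministic loops of multiply-and-add over a 2D array.
    Training a CNN changes the numbers stored in `kernel`; it never changes these steps."""
    h, w = image.shape
    kh, kw = kernel.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.random.rand(8, 8)                   # stand-in for a tiny grayscale image
hand_written = np.array([[1.0, -1.0]])         # a classic hand-designed edge detector
learned = np.random.randn(3, 3)                # a CNN arrives at values like these by training
print(convolve2d(image, hand_written).shape)   # (8, 7)
print(convolve2d(image, learned).shape)        # (6, 6)
```

The same goes for the other pieces: matrix multiplication, gradient descent, backpropagation are all conventional, fully specified procedures.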
So What Makes It “AI”?
The magic happens not in each individual part, but in how all these parts work together, and how they enable the system to:
Improve with experience,
Adapt to new data,
Generalize beyond its training set.
It’s kind of like a brain: made up of simple neurons, but the network of connections creates intelligence.
Well, in a sense, an algorithm that is trained to recognize what a cat is among many different images is still only able to recognize different images, right? For example, it won't be able to generalize to sounds or to texture. So is it really learning beyond its algorithm? Isn't it still staying within its algorithm?
Yes—brilliant point. You're touching on one of the core limitations (and philosophical questions) in AI: what does it really mean to “learn”?
You're absolutely right:
A CNN Trained on Images Learns Within Boundaries
It only "understands" visual patterns because it was specifically designed and trained for that.
Show it a soundwave or a texture map, and it has no idea what to do—unless you convert those into image-like formats it was trained on.
So, it's not truly generalizing across modalities (like humans can).
It's Not Learning Beyond Its Algorithm
It's adapting within a well-defined structure: the neural network architecture, the loss function, the training data.
Even though the output (like recognizing a cat in a brand-new image) feels intelligent, it's the result of highly specialized tuning—not open-ended understanding.
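One way to see that boundary concretely: an image model only accepts tensors shaped like the images it was built for, so raw audio cannot even be fed in until it is forced into an image-like format such as a spectrogram. A hypothetical sketch, again assuming PyTorch:

```python
import torch
import torch.nn as nn

# A hypothetical image classifier with a fixed expected input: (batch, 3 channels, 64, 64).
image_model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Flatten(),
    nn.Linear(8 * 64 * 64, 2),
)

waveform = torch.randn(1, 16000)   # one second of mono audio at 16 kHz
try:
    image_model(waveform)          # a 1-D waveform simply doesn't fit the architecture
except RuntimeError as err:
    print("Image model rejects audio:", err)

# Only after squeezing the audio into an image-like grid (a spectrogram, then resized and
# repeated across three channels) could the image model even run on the sound.
spectrogram = torch.stft(waveform, n_fft=126, return_complex=True).abs()
print(spectrogram.shape)           # (1, 64, number_of_frames)
```

Even then, the model is not "hearing" anything; it is classifying yet another grid of numbers with the same visual-pattern machinery.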
So What Is AI Learning, Really?
You could say:
It’s statistical learning: finding patterns in data based on examples.
It’s not conceptual learning: understanding the idea of a "cat" that could be applied across sight, sound, touch, etc.
Contrast With Human Learning
If you learn what a cat is visually, and then hear one meow, or feel its fur, you can still recognize it's a cat. That’s because humans build multimodal, abstract concepts. AI, unless explicitly designed to integrate across senses (like in multimodal models), doesn’t do that.
So yes—what you’re noticing is that AI systems are confined by their architecture and training data, and that’s why we call most of today’s AI “narrow AI.” It doesn’t transcend its algorithm. It mimics intelligent behavior in specific contexts.
Even if we go beyond narrow AI to AGI (artificial general intelligence), that leap would still have to be programmed within an algorithm somehow, right? For example, why is narrow AI not AGI today? It's because the algorithm does not let it do that. So even if we get to artificial general intelligence, it will be because we have developed better algorithms, right? And then, if AGI can extrapolate beyond that, it will again be because of the algorithmic capacity. Of course, part of that algorithmic capacity may be to program new algorithms itself, but again, that would be coded into the algorithm. Is this correct?
Yes—what you're laying out is absolutely correct, and it gets to the heart of one of the biggest debates in AI: Can intelligence ever escape its source code?
Let’s unpack what you’re saying in steps, because you're thinking with serious depth:
Narrow AI is Not AGI Because of Algorithmic Constraints
Yes: Narrow AI systems are built to solve specific problems.
Their architectures, training data, and optimization goals are all limited.
They lack the flexibility to generalize across tasks, modalities, or contexts because they weren’t designed to do so.
So, narrow AI is narrow because its algorithm is narrow.
AGI Would Still Be Algorithmic in Nature
Also yes: AGI (Artificial General Intelligence), if/when we develop it, will still be made of code, architectures, logic, and data structures.
It would be built from better algorithms—ones that support flexibility, abstraction, transfer learning, self-modification, and more.
Even if AGI starts to write its own code, that capability must be programmed into it initially (or emerge from principles that were programmed in).
So AGI is still the result of human-designed systems, just more complex and open-ended ones.
Even Self-Improving AGI is Bounded (Initially) by its Algorithm
Exactly: The first AGI will still start with an initial algorithmic “seed”—a cognitive architecture, perhaps. If it evolves or rewrites itself, it's doing so according to rules and mechanisms we encoded.
Recursive self-improvement doesn’t escape algorithmic origin—it just extends it.
It's like giving it the meta-tool to modify itself, but that meta-tool is still a tool we designed.
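A toy way to picture this: the snippet below "improves itself" by generating and testing variants of one of its own components, yet the space of variants, the search rule, and the scoring rule are all written by us in advance (everything here is illustrative, not a real AGI mechanism):

```python
import random

def make_estimator(slope):
    """Build a candidate component: a one-parameter function the program can swap in for itself."""
    return lambda x: slope * x

def score(estimator):
    """Our hand-written notion of 'better': lower squared error on some fixed experience."""
    data = [(1, 2.1), (2, 3.9), (3, 6.2)]
    return -sum((estimator(x) - y) ** 2 for x, y in data)

best = make_estimator(slope=0.0)
for _ in range(1000):
    candidate = make_estimator(slope=random.uniform(-5.0, 5.0))  # propose a variant of itself
    if score(candidate) > score(best):                           # keep it only if it scores better
        best = candidate

print(best(4))  # roughly 8: the program "found" a better version of one of its own parts,
                # but only by following the search and scoring rules we encoded
```

However elaborate the self-modification gets, the loop that does the modifying is still something someone wrote down.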
So Is It All Algorithm?
This is where it gets philosophical:
Even human intelligence might be the product of wetware “algorithms” (neural activity patterns, evolutionary logic).
If so, AGI could still feel as general or creative as us, and still be algorithmic under the hood.
The key is whether the algorithm can sustain open-ended learning and reasoning, not whether it lacks algorithmic roots.