Bridging Minds: the Quest for True Artificial Intelligence


Feb 14, 2024


A deep dive into brain-inspired algorithms and the revolutionary impact of attention mechanisms.

The summer after I graduated from high school, my friend and I would skate through our rural town’s back roads, staying up all night. We would take breaks to smoke cheap gas station cigars we had recently come of age to purchase and discuss everything from wingsuiting to artificial intelligence. He was a big Kurzweil guy, and this was 2011, mind you.

As we entered college, we drifted apart but still shared some AI papers once in a while.

Discussing AI via Facebook Messenger, where we shared the Cortical Learning Algorithm white paper.

I started college studying cellular biology and even worked in a graduate lab researching prostate cancer. I have always been fascinated by technology and innovation, too. So now that I work for a behavioral-science-focused advertising firm, it feels like the perfect time to tell the story of how today’s learning models are patterned after the human brain.

Large Language Models

A lot has changed in AI over the last decade. With the introduction of large language models (LLMs) like GPT-4, it feels like the next decade will be even wilder. Even if Bill Gates is right that generative AI has plateaued, the progress made over just the last twelve months will continue to alter society for decades to come.

While society catches up to this recent progress, I believe the real problems LLMs are solving (natural language processing, machine translation, question answering, content generation, and much more) will continue to drive change across a wide range of businesses and social structures.

The central breakthrough behind this explosive growth in capability was first introduced by Google researchers in a 2017 paper titled “Attention Is All You Need.” By effectively open sourcing their research before developing products that leverage this newfound methodology, Google changed the game, and they did so by mimicking human behavior.

The concept of “attention” in LLMs is not just a technical innovation but a reflection of how humans prioritize sensory information. Our brains focus on what matters most at any given moment, filtering out the rest. LLMs, through attention mechanisms, emulate this process by identifying and prioritizing the most relevant pieces of information within vast datasets.
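To make that concrete, here is a minimal sketch of scaled dot-product attention, the core operation behind transformer models, written in plain NumPy. It is single-head and unmasked, and the three random “token” embeddings are placeholders rather than anything a trained model would produce.

```python
import numpy as np

def scaled_dot_product_attention(query, key, value):
    """Toy scaled dot-product attention over one short sequence.

    query, key, value: arrays of shape (seq_len, d_model).
    Returns the attended output and the attention weights.
    """
    d_k = query.shape[-1]
    # Similarity between every query position and every key position.
    scores = query @ key.T / np.sqrt(d_k)
    # Softmax turns scores into a probability distribution per query:
    # high weight means "pay attention here", low weight means "filter it out".
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Each output is a weighted blend of the values it attended to.
    return weights @ value, weights

# Three random embeddings standing in for a tiny "sentence" of three tokens.
tokens = np.random.default_rng(0).normal(size=(3, 4))
output, weights = scaled_dot_product_attention(tokens, tokens, tokens)
print(weights.round(2))  # each row sums to 1: relative attention per token
```

Each row of the printed weight matrix shows how strongly one token attends to every other token, which is the prioritizing-and-filtering behavior described above.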

The significance of this leap cannot be overstated — it is as though AI found a new gear, propelling forward our ability to interact with and through machines in ways that were previously the domain of science fiction.

Hierarchical Temporal Memory

LLMs did not exist in 2014 when my friend and I were chatting about AI. Instead, much of the research into machine learning was in hierarchical temporal memory (HTM). This approach to AI is also inspired by the human brain. HTM tries to model the neocortex’s structure and information processing mechanisms directly, mimicking the way neurons connect and form patterns of activity based on spatial and temporal inputs.
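For a rough feel of how HTM-style systems represent information, the sketch below encodes inputs as sparse distributed representations and matches them by counting overlapping active bits. This is an illustrative toy, not Numenta’s actual Cortical Learning Algorithm; the pattern names, vector size, and sparsity level are invented for the example.

```python
import numpy as np

def sparse_code(rng, size=2048, active=40):
    """A sparse distributed representation: a handful of active bits out of
    thousands, loosely mirroring the small fraction of neurons firing at once."""
    code = np.zeros(size, dtype=bool)
    code[rng.choice(size, active, replace=False)] = True
    return code

def overlap(a, b):
    """HTM-style matching: count shared active bits instead of requiring
    an exact match, which makes recognition robust to noise."""
    return int(np.sum(a & b))

rng = np.random.default_rng(42)
learned = {"keys_jangling": sparse_code(rng), "door_closing": sparse_code(rng)}

# A degraded version of "keys_jangling": ten active bits dropped, still recognizable.
noisy = learned["keys_jangling"].copy()
noisy[np.flatnonzero(noisy)[:10]] = False

for name, code in learned.items():
    print(name, overlap(code, noisy))  # the matching pattern scores far higher
```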

By comparison, LLMs, through deep learning techniques, are inspired more loosely by the brain’s neural networks. Today’s AI is built with more focus on the functionality and adaptability of human cognition than the physical structure of the neocortex. This evolution marks a shift from models that excel in specific tasks to those capable of generalizing knowledge across a wide range of domains, much like the human brain.

The Structure of the Neocortex

Your neocortex is the folded mass covering your brain, and it accounts for the majority of your overall brain volume. This purely mammalian innovation offers us a significant leg up on other animals through the development of memories, identity, and intelligence.

HTM puts such emphasis on mimicking the physical structure of the human brain because of a unique trait of the neocortex: if you were to take a slice of the brain, you would see that its cellular architecture is remarkably uniform across regions, whether a region handles vision, hearing, or language. This uniformity hints at a common approach to processing inputs, where raw sensory data is refined into increasingly abstract concepts.

Similarly, LLMs, through their training and operational framework, reduce problems across different modalities (images, audio, and video) to the same kind of token sequences they use for text. This approach allows them to leverage their extensive text-based training to process and generate content across these modalities.
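As a sketch of what reducing a modality to a token sequence can look like in practice, the snippet below chops an image into patches and projects each patch to a vector, in the style of a vision transformer. The patch size and embedding dimension are arbitrary, and the random matrix stands in for a learned projection layer.

```python
import numpy as np

def image_to_tokens(image, patch=8, d_model=16, seed=0):
    """Split an image into square patches and project each to a vector,
    producing the same kind of token sequence a language model consumes.
    Assumes the image dimensions are divisible by the patch size."""
    h, w, c = image.shape
    patches = []
    for y in range(0, h, patch):
        for x in range(0, w, patch):
            patches.append(image[y:y + patch, x:x + patch].reshape(-1))
    patches = np.stack(patches)                       # (num_patches, patch*patch*c)
    projection = np.random.default_rng(seed).normal(  # stand-in for learned weights
        size=(patches.shape[1], d_model))
    return patches @ projection                       # (num_patches, d_model)

fake_image = np.random.default_rng(1).random((32, 32, 3))
tokens = image_to_tokens(fake_image)
print(tokens.shape)  # (16, 16): sixteen patch "tokens", each a 16-dimensional vector
```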

The Thousand Brains Theory of Intelligence

How a network of neurons assembles a coherent model of the world for us is a puzzle on which the Thousand Brains Theory of Intelligence offers a revolutionary perspective. Traditionally, neuroscientists believed in a singular, unified model for each sense.

Take a keychain, for instance. It was assumed that sensory inputs, whether auditory or tactile, would converge onto this solitary model, enabling recognition. However, a closer examination of the neocortex reveals it to be composed of approximately 150,000 cortical columns, each functioning like a mini-brain with the capacity to generate independent models based on sensory inputs. This raises a compelling question: why do we perceive the keychain as a singular entity rather than a fragmented collection of sensory inputs?

The answer lies in the extensive network of long-distance connections within the neocortex, facilitating communication between these cortical columns. The Thousand Brains Theory posits that these connections enable the columns to integrate their individual models, achieving a consensus that shapes our singular perception of objects and concepts.
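One toy way to picture that voting is to give each column its own noisy belief about what it is sensing and let the long-range connections combine those beliefs into a consensus. The objects, columns, and probabilities below are made up for illustration; the real theory is far richer than a product of distributions.

```python
import numpy as np

objects = ["keychain", "coin", "bottle cap"]

# Each cortical column forms its own belief from its own patch of sensory input.
# No single column is certain, and they do not all agree.
column_beliefs = np.array([
    [0.5, 0.3, 0.2],   # a column sensing the ring leans "keychain"
    [0.4, 0.4, 0.2],   # a column sensing a flat edge is torn between two guesses
    [0.6, 0.1, 0.3],   # a column sensing a key tooth leans "keychain"
])

# Voting over long-range connections: combine the independent beliefs
# (multiply and renormalize), which sharpens them into one shared answer.
consensus = column_beliefs.prod(axis=0)
consensus /= consensus.sum()

for name, p in zip(objects, consensus):
    print(f"{name}: {p:.2f}")   # the consensus strongly favors "keychain"
```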

This mechanism of consensus is not within our conscious awareness, yet it fundamentally shapes our interaction with the world. It is those cortical columns which learn and store information through the establishment of reference frames — spatial maps that organize knowledge and facilitate thinking and problem-solving. This insight not only demystifies the nature of knowledge and cognition but also outlines a blueprint for the development of truly intelligent machines, capable of reasoning and learning about the world in a manner akin to human intelligence.

The Future

I still sometimes imagine myself lying in the middle of a country road at midnight, looking up at the stars and wondering what the future will look like. There is so much serendipity in seeing the exponential uncovering of intelligence through the convergence of two fields I sincerely love: biology and technology.

But it also has me wondering about the value of intelligence moving forward. If intelligence ceases to be something humans are uniquely capable of, what then is the value of cognitive intelligence, a tool for labor that many of us rely on and seek out for our own enjoyment?

What happens when intelligent learning systems become embodied in advanced robotics? The era of robotics transforming only large industries is coming to a close. Expect to interact directly with intelligent robots in the next decade as they enter professional services. It will not be the first time humanity has adapted to great change, but this may be our grandest transformation yet.

It is as though we have created our own little aliens here on Earth. Tonight, some angsty teenager will stare up at the stars, modeling the glow of the Milky Way through visual sensory inputs and the breeze of the nighttime air through tactile sensory inputs, wondering if there is anything else out there worth exploring.

Meanwhile, there is another kind of intelligence brewing here at home.

© Elijah Kleinsmith • All Rights Reserved