The Octopus and the Algorithm

Jul 18, 2024

This audio recording was generated with AI and is a clone of the author's voice.

Biology

In the depths of the ocean, an alien intelligence thrives. With eight arms, three hearts, and a distributed nervous system unlike anything found in mammals, the octopus challenges our understanding of cognition and consciousness. As we grapple with questions of artificial intelligence and the nature of the mind, these cephalopods offer a fascinating case study in alternative forms of intelligence.

"Two experiments in the evolution of large nervous systems landed on similar ways of seeing," writes Peter Godfrey-Smith in his book Other Worlds. Yet when we compare octopus brains to our own, "all bets—or rather, all mappings—are off." There is no direct correspondence between their neural structures and ours. In fact, most of an octopus's neurons aren't even located in its brain, but in its arms. For octopuses, this means a radically different way of processing information and interacting with the world.

Consider the octopus arm. It's not just a limb, but a semi-autonomous agent. The central brain exerts some control, but much of the fine-tuning happens locally. It's as if each arm has a mind of its own, collaborating with the central intelligence to navigate and manipulate the environment.
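
If it helps to think of this in software terms, here is a loose sketch of that division of labor: a central controller broadcasts a coarse goal, while each arm closes the gap locally. This is an illustrative analogy with invented names, not a model of actual cephalopod neurobiology.

```python
from dataclasses import dataclass, field

@dataclass
class Arm:
    """A semi-autonomous arm that refines a coarse goal using local feedback."""
    arm_id: int
    position: float = 0.0

    def act(self, goal: float) -> None:
        # Local fine-tuning: close most of the gap toward the goal
        # without consulting the central brain at every step.
        self.position += 0.8 * (goal - self.position)

@dataclass
class CentralBrain:
    """Issues coarse, shared goals and leaves fine control to the arms."""
    arms: list = field(default_factory=lambda: [Arm(i) for i in range(8)])

    def reach_toward(self, target: float) -> None:
        for arm in self.arms:
            arm.act(target)  # each arm adapts locally toward the shared target

octopus = CentralBrain()
octopus.reach_toward(1.0)
print([round(arm.position, 2) for arm in octopus.arms])  # every arm at 0.8
```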

This perspective aligns with emerging ideas in embodied cognition. Our experiences aren't just products of our brains but arise from the dynamic interplay between our nervous systems, bodies, and environments. The octopus, with its fluid form and distributed intelligence, takes this principle to the extreme.

AI, meanwhile, lacks a body entirely.

Intelligence

Large language models (LLMs), the underlying technology we colloquially refer to as “AI” today, also lack intent and agency. Yann LeCun, Chief AI Scientist at Meta, writes, “Agency (and planning) can't be a wart on top of Auto-Regressive LLMs. It must be an intrinsic property of the architecture.” He believes LLMs will never get us to general intelligence.
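
It helps to remember what an autoregressive LLM actually does at inference time: it predicts the next token, appends it, and repeats. The toy loop below sketches that idea; predict_next_token is a hypothetical stand-in for the neural network itself.

```python
def predict_next_token(context: list) -> str:
    """Hypothetical stand-in for a neural network's next-token prediction."""
    # A real LLM samples from a learned probability distribution over
    # its vocabulary, conditioned on the full context so far.
    lookup = {"The": "octopus", "octopus": "adapts", "adapts": "<end>"}
    return lookup.get(context[-1], "<end>")

def generate(prompt: list, max_tokens: int = 10) -> list:
    tokens = list(prompt)
    for _ in range(max_tokens):
        next_token = predict_next_token(tokens)
        if next_token == "<end>":
            break
        tokens.append(next_token)  # the output becomes part of the next input
    return tokens

print(generate(["The"]))  # ['The', 'octopus', 'adapts']
```

Nothing in that loop plans or pursues a goal; it only extends a sequence. That is the architectural limitation LeCun is pointing at.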

Intelligence, LeCun argues, isn't just about centralized problem-solving or language processing. It's about effective action in the world, adapting to complex environments, and perhaps, experiencing those environments in ways that are different from our own.

Humans are remarkable at adaptation, a hallmark not only of intelligence but of creativity. My favorite anecdote on this point is the development of tactile vision substitution systems for the blind. These ingenious devices translate visual information from a camera into tactile sensations on the skin. What's fascinating is how the brain adapts to this new input. With practice, users don't just feel random vibrations or touches; they begin to perceive objects in space. A dog walking by isn't experienced as a pattern on their back, but as movement in the external world.

Interestingly, this perceptual shift only occurs when users can actively control the camera, highlighting the crucial interplay between action and sensation in shaping our subjective experience. This adaptability showcases the brain's plasticity and how our understanding of the world is intimately tied to our interactions with it.
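
For the technically curious, the signal path such a device implements can be sketched in a few lines: downsample each camera frame into a coarse grid of vibration intensities. This is a deliberately simplified illustration, not a description of any real product.

```python
import numpy as np

def frame_to_tactile(frame: np.ndarray, grid: tuple = (4, 4)) -> np.ndarray:
    """Downsample a grayscale camera frame (values 0 to 255) into a coarse
    grid of vibration intensities between 0.0 (still) and 1.0 (strongest)."""
    h, w = frame.shape
    gh, gw = grid
    # Average the pixel brightness inside each grid cell.
    cells = frame[: h - h % gh, : w - w % gw].reshape(gh, h // gh, gw, w // gw)
    return cells.mean(axis=(1, 3)) / 255.0

# A bright object in the upper-left of an otherwise dark 64x64 frame:
frame = np.zeros((64, 64))
frame[4:20, 4:20] = 255
print(frame_to_tactile(frame).round(2))  # strongest vibration in one corner
```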

Utility

It is that very interaction with the world, LeCun argues, that keeps LLMs from reaching human-level intelligence. He posits that if AI were even close to human-level intelligence, we would have “systems that could teach themselves to drive a car in 20 hours of practice, like any 17 year-old.”

MIT Technology Review summarizes this theory: “LeCun thinks that animal brains run a kind of simulation of the world, which he calls a world model. Learned in infancy, it’s the way animals (including humans) make good guesses about what’s going on around them.”
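
In code terms, a world model is roughly a learned transition function: given a state and a candidate action, predict the next state, and use those predictions to evaluate actions before committing to one. Here is a deliberately tiny sketch; the toy dynamics and function names are my own, not LeCun's.

```python
def world_model(state: float, action: float) -> float:
    """Predict the next state from the current state and an action.
    Hand-written toy dynamics here; an animal or agent would learn
    this mapping from experience."""
    return state + action

def plan(state: float, goal: float, actions: list) -> float:
    # "Imagine" each action with the world model and pick the one
    # predicted to land closest to the goal, without acting yet.
    return min(actions, key=lambda a: abs(world_model(state, a) - goal))

print(plan(state=0.0, goal=3.0, actions=[-1.0, 1.0, 2.5]))  # 2.5
```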

LLMs can also be good predictors, but is textual data alone enough to represent an effective “simulation of the world”? Or is today’s AI inherently limited? And if so, what purpose does this new, alien form of intelligence serve in the world today?

Creativity

This time last year, the University of Montana published research that “suggests artificial intelligence can match the top 1% of human thinkers on a standard test for creativity.” Specifically, the AI application scored “in the top percentile for fluency – the ability to generate a large volume of ideas – and for originality – the ability to come up with new ideas. The AI slipped a bit – to the 97th percentile – for flexibility, the ability to generate different types and categories of ideas.”

Dr. Erik Guzik, an assistant clinical professor at UM’s College of Business, led the study. The team used the long-established and respected Torrance Tests of Creative Thinking to assess 2,700 college students against ChatGPT. The official news release on their website noted Dr. Guzik was “surprised at how well it did generating original ideas,” calling that “a hallmark of human imagination,” and continuing, “The test evaluators are given lists of common responses for a prompt – ones that are almost expected to be submitted. However, the AI landed in the top percentile for coming up with fresh responses.”

How is it that Meta’s Chief AI Scientist and the UM study could come to such seemingly conflicting conclusions about today’s AI? It is true that LLMs excel at generating ideas in high volume, some of which are creatively compelling. It is also true that LLMs are barely able to take action in the real world, greatly limiting their ability to express creativity in the way humans do. AI is an incredible sidekick for working through strategic and creative tasks, but until you prompt it, AI does nothing.

The world’s largest AI labs, like OpenAI and Anthropic, are working on agentic capabilities for LLMs. If you have ever worked with an AI agent, you know just how far these systems have to go before they are reliably useful. In my experience, today’s AI is best used as a “cognitive layer” in workflows. Whether you are asking a chat-style AI interface to help brainstorm creative concepts or having a multimodal AI interface convert research topics into short podcasts, you are asking AI to do some “thinking” for you. It is a technology of an entirely new breed, quickly becoming an essential part of the creative toolkit.
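
As a concrete example of that “cognitive layer” pattern, a brainstorming step in a workflow can be a single chat-completion call. The sketch below uses the OpenAI Python SDK; the model name and prompts are placeholders, and any chat-capable model could fill the same role.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def brainstorm(topic: str, n_ideas: int = 5) -> str:
    """One 'cognitive layer' step: ask a chat model for ideas.
    Until this function is called, the model does nothing."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; use whichever model you have access to
        messages=[
            {"role": "system", "content": "You are a creative strategist."},
            {"role": "user", "content": f"List {n_ideas} campaign concepts for: {topic}"},
        ],
    )
    return response.choices[0].message.content

print(brainstorm("a regional coffee roaster's fall product launch"))
```

Swapping in a different provider or a multimodal model changes the call, not the pattern: the AI remains one deliberate step inside a human-directed workflow.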

In the meantime, other industries are still catching up with this AI technology explosion. According to a joint report published by Forrester and the American Association of Advertising Agencies (4A’s), agencies are significantly ahead in adopting generative AI (genAI) compared to other sectors like general business and consumer markets. The report states, “A remarkable 61% of respondents reported that their agency is already using genAI, with a further 30% saying that their agency is currently exploring genAI use cases.”

At Signal Theory, we have gone a step further by introducing our own AI system, which we call AL after one of our original co-founders, Al Higdon. This gives us an avenue to deploy AI tools internally in a way that keeps humanity at the core of everything we do. Even as regulators struggle to keep up with tech advancements, we are well positioned to help marketing teams enter the future and leverage AI wisely, not only through a deep understanding of how these systems work, but also by keeping sensitive and confidential client data safe and secure.

Trust

My hope is that it is obvious why feeding proprietary data into a system of alien intelligence should be done with the utmost care. If I were sitting in our client’s seat, I would want to know how my agencies manage the approval and use of AI technologies within their organizations. I don’t want a model built for learning and retaining information to be exposed to my future product release campaign, or worse, my customer data. This was another major driver for developing and adopting AL.

Whether you are adopting a new AI tool or simply navigating an Excel file of campaign metrics, we as advertising agencies make a promise to clients that we will protect their data. That promise comes to life through encrypted data management, user training, and IT policy. What changes when you start talking about AI tools is their ability to learn from the data they are given. AL is an intelligence of a different kind, developed to learn only contextually and not across conversations. This is just one of many safeguards built into the system.
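
To make that design choice concrete, here is a minimal sketch of a context-only session: conversation history lives in memory for the duration of a single exchange and is discarded afterward, and the model's weights are never updated. This illustrates the principle, not AL's actual implementation; the model interface is hypothetical.

```python
class ContextOnlySession:
    """In-context 'learning' only: the model sees this conversation's
    history while it lasts, and nothing persists once it is closed."""

    def __init__(self, model) -> None:
        self.model = model  # frozen weights; never updated by conversations
        self.history = []   # lives only for the duration of this session

    def ask(self, user_message: str) -> str:
        self.history.append({"role": "user", "content": user_message})
        # Hypothetical interface: the model conditions on this session's
        # history alone, not on any other client's conversations.
        reply = self.model.complete(self.history)
        self.history.append({"role": "assistant", "content": reply})
        return reply

    def close(self) -> None:
        self.history.clear()  # conversation data is not retained or reused
```

Real deployments layer encryption, access controls, and policy on top, but the core idea is the same: context in, nothing retained.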

As we embrace AI's potential in creative industries, we must remain mindful of its current limitations and ethical implications. The gap between AI's current capabilities and general intelligence reminds us of the persistent uniqueness of human cognition. Human creativity and vision can transform the mundane into the extraordinary. It’s about seeing potential and beauty where others might see only simplicity. It’s about our interaction with the world. When we use our imagination, we can reframe and elevate the ordinary, turning it into something meaningful and significant. We as humans get to make those choices.

The future of creativity lies not in AI alone, but in the synergy between human ingenuity and artificial intelligence. By prioritizing responsible AI use, data security, and client trust, we can harness AI’s strengths while mitigating its risks. Moving forward, our focus should be on developing AI as a complement to human creativity rather than a replacement for it.

As we navigate this evolving landscape, our goal should be to leverage AI to enhance human potential, creating a future where technology and humanity work hand in hand to solve complex problems and unlock new expressions of thinking and creativity.

© Elijah Kleinsmith • All Rights Reserved