
This must be how medical professionals felt in 2020. Increasingly, I am hearing myths and misconceptions about AI stated as though they were common knowledge. At a recent conference I attended, one speaker invited to talk about their AI system acknowledged the “significant energy impact of AI” – a concern that, quite frankly, seems misplaced coming from a business operating a large physical building in a metropolitan area. After all, in the United States, buildings account for 40% of total energy usage, including 75% of all electricity use.
It’s not just energy use – there are plenty of misconceptions around topics like artists’ rights and the detection of AI-generated content. So let’s get into it: here is my attempt at debunking the three AI myths and misconceptions I hear spread most often.
Energy Use
This just in: modern life requires energy. Putting aside comparisons of ChatGPT with other common technologies for a moment, ask yourself: how many modern conveniences would you be willing to sacrifice for the energy saved?

It turns out there is no such thing as wealthy, low-energy countries. Stated simply: prosperity and progress are energy-intensive. Every advancement—from smartphones to refrigerated food to instant information—comes with a power bill.
Many of us remember “the three Rs” from grade school: reduce, reuse, recycle. So I can empathize with those who suggest that generating Studio Ghibli-style art of our cats isn’t the best way to reduce energy consumption. At the same time, the adoption of new technology often comes with this kind of play. As Richard Feynman said, “The trouble with computers is you play with them. [...] If you’ve ever worked with computers, you understand the disease—the delight in being able to see how much you can do.”
And frankly, I don’t think people should feel bad for participating in this joy. A comprehensive report from the IEA estimates that data centers, which are used for far more than just AI, will account for only 1.5% of global electricity consumption next year, even accounting for AI-related growth. AI has virtually nothing to do with our current climate challenges.
Going forward, the concern for many is that data centers will need more and more electricity to meet exponentially growing demand. But data centers, which help power everything from gaming platforms to Netflix (neither of which we shame our peers for using), also outpace other industries in adopting renewable energy. IBM sources 74% of its electricity from renewables, Amazon uses 85% renewable energy across its cloud business, and Google and Microsoft are both targeting 100% by 2030. Many data center operators are applying the same efficiency technologies they use internally to benefit their local communities, upgrading infrastructure in schools, hospitals, and other public facilities. Conflated claims about power and water utilization miss the reality of data centers’ relatively small share of our current climate challenges, and the even smaller share AI represents within total global energy use.


My hope is that we can continue to invest in renewable energy such that energy use is no longer a barrier to technological progress of any kind. My advice to anyone thinking about this issue is to focus on how you would regulate it and what kind of impact that governance could have on our competitiveness on the world stage. And finally, I would love to see an end to the shaming around AI use. Our climate challenges are best faced in unity, and with the understanding that most common folks have very little sway over our total global energy use.
Artists’ Rights
This one is less myth and more logical inconsistency, though there is plenty of misconception here too. First, large language models (LLMs), the technology underpinning today’s AI, do not have a “database” they reference for queries, whether those queries are text- or image-based. Rather, LLMs learn from material during a “pre-training” phase; once “fully baked,” they respond purely from the “weights” developed during that pre-training. They have no real-time access to their training material. Much like the way we learn, it all comes down to pattern recognition, associations, and predictions. This paradigm-shifting reality often warps people’s ability to understand and think about AI accurately.
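To make the “no database” point concrete, here is a minimal sketch using the open-source Hugging Face transformers library and the small GPT-2 model (both my choices for illustration, not any particular production system). Generation is just a repeated forward pass through frozen weights; no training documents are consulted:

```python
# Minimal sketch: LLM inference is a pure function of frozen weights.
# GPT-2 via Hugging Face transformers is an illustrative choice only.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()  # weights are fixed; nothing is looked up or updated here

prompt = "The trouble with computers is"
inputs = tokenizer(prompt, return_tensors="pt")

# Each generation step maps the tokens seen so far to a probability
# distribution over the next token, using only the learned weights.
output = model.generate(**inputs, max_new_tokens=20, do_sample=False)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

The weights file is everything the model carries away from pre-training; the original books, articles, and images are not on board and cannot be queried.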
If you have ever watched a movie, read a book, or driven down the highway and glanced at a billboard, then congratulations: you’ve trained on copyrighted material. Given that, it is at least logically consistent to say, “I’m against any form of silicon/non-human intelligence,” which is a completely anti-AI position. What is far less consistent is saying, “AI shouldn’t be able to train on copyrighted material, but I’m not completely anti-AI.” How can we expect to develop intelligence without exposure to the world? Music is an art form that has long lived at the intersection of copyright, originality, and technology. This isn’t the first time art at large has been under “existential threat,” but I think we’ll all be just fine.
This inconsistency, I believe, comes from the unexpected rise of AI as a non-embodied entity. In sci-fi novels, AI often has some kind of physical presence in the real world. It’s much easier to accept an embodied AI participating in culture than a purely digital form of intelligence; our brains naturally associate the latter with classical computing and form our judgments from that worldview. I think the US, like Japan, needs to adopt a more lenient approach to AI training; the alternative is banning the technology completely and falling behind the rest of the world. There’s a certain historical imperative to this technology, and if you accept its inevitability, then you should want information to be free.
Detecting AI-Generated Content
I was going to bite off a topic like AI’s ability to be creative or innovative, but at the risk of pissing off even more artists than I already have, I’ll target educational institutions instead. Detecting AI-generated content is inconsistent and unreliable at best, and likely impossible altogether.

AI detectors fail precisely because AI-generated content mirrors human processes: it synthesizes learned patterns and recombines ideas creatively, rather than merely copying stored text. As discussed earlier, there is no “database.” Just as there is no definitive signature for human originality, there’s no foolproof method to distinguish AI’s nuanced output. This is exactly why OpenAI, the world’s most influential AI research lab, shut down its detector.
There are also far more reliable and novel approaches to content provenance. Last year, I helped create a guide for agencies looking to learn more about the impact this will have on the advertising industry; but of course, I’m most interested in the technology itself. Simply put, it works by cryptographically signing original assets, like photography, on-device. That information is then permanently bound to a ledger that travels wherever the digital asset goes (e.g., to a publisher). The result is verification of real content, such as breaking-news photography from across the world.
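To illustrate the core primitive, here is a minimal sketch using Ed25519 signatures from Python’s cryptography package (my illustrative choice; real provenance standards bind far richer metadata than raw bytes). The device signs the asset at capture, and anyone downstream can verify it hasn’t been altered:

```python
# Minimal sketch of the signing primitive behind content provenance.
# Ed25519 via the 'cryptography' package is an illustrative choice;
# actual provenance systems attach much more metadata than this.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# On-device: the camera holds a private key and signs the photo's bytes.
device_key = Ed25519PrivateKey.generate()
photo_bytes = b"...raw image data as captured..."  # placeholder asset
signature = device_key.sign(photo_bytes)

# Downstream (e.g., a publisher): verification with the device's public
# key proves the bytes are exactly what the device produced at capture.
public_key = device_key.public_key()
try:
    public_key.verify(signature, photo_bytes)
    print("Asset verified: unmodified since capture.")
except InvalidSignature:
    print("Asset rejected: altered after signing.")
```

Change even a single byte of the asset and verification fails, which is what makes binding the signature to a ledger meaningful: the provenance record travels with the file and can be re-checked at every hop.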
Imagination
As we navigate an AI-driven future, clarity is crucial. Misconceptions can skew both perception and policy decisions. By embracing the nuanced reality of AI’s impact, we can have the informed discussions necessary to responsibly harness its potential. The world tomorrow will not look like it does today, and I believe it’s important to toss preconceived notions, keep an open mind, and embrace an imagination for what could be.