Why AI’s Grand Promises Keep Falling Flat (and The SEO “Canary in the Coal Mine”)

You’ve likely read, or at least heard of, Matt Shumer’s viral essay, “Something Big is Happening.” It’s the latest piece of “weaponized hype” to set the internet on fire, amassing more than 80 million views on X and landing Shumer on CBS Mornings and CNBC to explain that AI has now crossed the threshold of “judgment” and “taste.”

Sure, it’s a compelling, scary, and ultimately hollow narrative that suggests we’re months away from the total displacement of human labor. As we’ll see below, many of AI’s movers and shakers have spent recent weeks peddling this fearmongering, which, for anyone who uses AI daily, sounds like a load of bullshit.

A more realistic fear would be something like using AI for nuclear war strategy – but let’s save that for another day…

The Relentless Drumbeat of AI Hype

We’re currently watching a masterclass in corporate overhype. Microsoft AI CEO Mustafa Suleyman recently predicted that within 12 to 18 months, “most if not all professional tasks” for lawyers, accountants, and marketers will be automated. Elon Musk doubled down at Davos, suggesting AI will be smarter than any individual human by next year and “smarter than all of humanity collectively” by 2031.

Adding fuel to this fire is Andrew Yang. The entrepreneur and former presidential candidate recently issued a grim warning about the disembowelment of white-collar work. Yang urges people to take the issue very seriously, predicting that millions of office workers will be laid off in the next 18 months. According to him, companies will be rewarded if they cut headcount and punished if they don’t, potentially reducing U.S. office employment by 20% to 50% in the coming years.

In my opinion, the pattern is clear: This is a valuation-protection scheme, not just a technological revolution.

Apparently, big tech is now in the business of fear-mongering its way to a better stock price. According to a Reuters/Ipsos poll conducted in August 2025, nearly half of Americans believe that AI is bad for humanity. The concerns are visceral: 71% fear permanent job loss, 77% worry about AI-fueled political chaos, and 61% worry about AI’s staggering electricity consumption. Rather than witnessing “innovation,” we’re seeing a manufactured confidence crisis.

Look, there’s no denying that AI is rewriting the rulebook. It will bring us some amazing breakthroughs, but let’s face it, it will also stir up a lot of trouble. On top of all that, you can bet there are people behind the scenes who know how to work the system – this media hype is being turned into a massive payday for many.

The “Efficiency” Grift 

What’s truly driving the current wave of press manipulation? Let’s call it what it is: a coordinated grift to keep stakeholders mesmerized and P/E ratios in the stratosphere. AI provides a convenient, sci-fi-flavored cover for mass layoffs. It’s much easier for a CEO to tell the board, “We are lean because of AI-driven efficiency,” than to admit, “We over-hired during the boom and our middle management is redundant.” The pink slips aren’t coming because a robot is doing a better job; they’re coming because companies must trim fat while appearing “innovative” to Wall Street.

Jeffrey Funk of the Walter Bradley Center also notes in Mind Matters that the tech sector is effectively terrified. There’s a $35 trillion AI bubble supported by “circular financing” – stocks like Nvidia and Microsoft investing in OpenAI, which then uses the cash to buy chips and cloud services from its own investors.

The market is catching on. Microsoft’s stock plunged 12% in early 2026 after the company revealed that 45% of its $625 billion cloud backlog relates to OpenAI, which is expected to lose $14 billion this year and $115 billion by 2029. Bondholders, meanwhile, are suing Oracle over its “blindside” $50 billion debt binge for OpenAI data centers. To fund massive data centers, Elon Musk even merged xAI with SpaceX at a $1.25 trillion valuation. This isn’t a productivity revolution; it’s a desperate race against time to stay solvent.

Voices of Reason

In a room full of sycophants, I’ve tried to find the people who don’t talk to the world like we’re all really this stupid. In my research, I stumbled across Gary Marcus, the indispensable skeptic. A cognitive scientist and NYU professor emeritus, Marcus has spent years debunking the hype that drives Shumer’s alarmism. As he consistently points out, Large Language Models (LLMs) are essentially pattern matchers; they lack real reasoning and understanding.

According to Marcus, in AI, the “last 1% takes 99% of the work.” At an MIT IDE seminar, Artificial General Intelligence: Why Aren’t We There Yet?, he pushed back against the narrative of “triumph.” His point: an 80% accuracy rate is fine for Netflix recommendations but catastrophic for medical diagnoses or autonomous vehicles.

“All-purpose, all-powerful AI systems have been promised for six decades,” Marcus argues, “but thus far still have not arrived.” Machine learning, he points out, is still far from human capabilities regarding inference and decision-making.

A Canary in the Coal Mine?

Working in the SEO industry, I’ve had a front-row seat to AI’s so-called chaos for the past three years. When people started saying AI would replace our jobs first, we became the marketing industry’s “canary in the coal mine” – the test case for whether anyone would survive.

As such, I want to examine AI hype through the lens of SEO and illustrate how, over the last three years, grand predictions have failed to materialize into revolutions. Rather than replacing humans, these predictions have served as a recurring “lame hype story” designed to enrich early adopters and opportunists.

Year One: The AI Content Apocalypse That Never Was

A collective panic gripped the SEO industry three years ago. The narrative, pushed by aggressive AI startups and venture capitalists, was simple: “The content writer is extinct.” As a result, human-led editorial teams were no longer necessary. Instead, we were told that GPT-4 could generate infinite libraries of perfectly optimized articles for pennies with just a click. The promise wasn’t just efficiency, but total automation. Unless you fired your writing staff and integrated your CMS with an API, you were a dinosaur waiting for a massive asteroid to strike.

The reality? We entered a “slop” epidemic. Rather than a productivity revolution, we’ve seen a flood of unrefined AI output. With content production ramped up to the max, the limitations of these models became painfully apparent. As noted by experts at Level Agency and Screpy, raw AI content struggles to grasp deep contextual understanding, nuanced industry jargon, and emotional resonance.

In addition, technical flaws created a liability for brands: “hallucinations” that invented facts, and answers recycled from outdated training data. These models don’t “know” facts; they predict the next likely word in a sequence. As a result, the internet became a giant, self-referential echo chamber full of generic, surface-level content.

Google’s counter-strike: The rise of “information gain.”

While the hype men predicted AI would “game” the algorithm, Google was well ahead of everyone else, making sure its Helpful Content System prioritizes an “Information Gain Score” to weed out crappy content. Google is clear: AI isn’t penalized for being AI, but content that adds no unique value is absolutely purged.

If AI-generated articles merely rehash the same five “tips” on page one, they are hidden from search results due to low Information Gain scores. In its 2026 guidelines, Google explicitly rewards ‘Experience’ (the fourth ‘E’ in E-E-A-T), which raw AI cannot satisfy since it has never touched a product, visited a location, or addressed a crisis.

The indispensable human element.

The “Content Apocalypse” narrative failed because it ignored the fundamental reason people use search: they want to hear from someone they trust. As Seek Momentum and Gracker.ai have argued, AI cannot replace human experts’ strategic empathy and unique reasoning. Unlike automated text generators, human writers provide judgment, proprietary data, and contradictory opinions that challenge the status quo – the things that build backlinks and authority for their brands.

Now that the data is in, we can see that pure AI content teams haven’t replaced the industry. They’ve simply become the “bottom feeders” of the internet. In 2026, successful SEO strategies combine AI as a high-powered research assistant with humans handling strategy, fact-checking, and original information gain. It turns out the “dinosaurs” with real expertise survived the asteroid strike.

Year Two: The Mirage of “AI Visibility” and “GEO”

With the “content apocalypse” fading to a dull ache, the grifters needed a new hook. In mid-2024, the narrative shifted from “AI will write your content” to “AI will hide your content.” This gave rise to a secondary market selling “AI Visibility” and “Generative Engine Optimization” (GEO). Suddenly, CMOs were told that traditional search rankings were dead and that they should pay $500 per month for dashboards tracking their “share of voice” within ChatGPT and Perplexity.

The technical impossibility of “tracking” AI.

What is the fundamental problem with these visibility tools? They sell a false sense of security based on technical impossibilities. Think of traditional Google Search as a relatively stable database of results: although rankings fluctuate, tools can scrape the index and show you who’s sitting in Position #1 at a given time.

Because AI models generate responses based on probabilistic patterns and often “ground” their answers in real-time search results, the output is inherently fluid. Ask an AI “Who is the best SEO consultant?” ten times and you might get ten different answers. The AI doesn’t pull from a list; it synthesizes a unique response every time based on its training data and the query context. There is no static leaderboard; in the world of AI, “Position #1” doesn’t really exist – relevance does.
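To make that concrete, here’s a toy sketch (plain Python, no real LLM API – the candidate names and probabilities are invented for illustration) of why repeated identical prompts produce different “winners”: the model samples from a probability distribution rather than reading off a fixed leaderboard.

```python
import random

# Hypothetical candidates and weights -- invented for illustration.
# A real LLM's output distribution is shaped by training data, grounding,
# and sampling temperature, not by a ranked list it looks up.
CANDIDATES = ["Consultant A", "Consultant B", "Consultant C", "Consultant D"]
WEIGHTS = [0.4, 0.3, 0.2, 0.1]

def ask_toy_llm(rng: random.Random) -> str:
    """Simulate one answer to 'Who is the best SEO consultant?'"""
    return rng.choices(CANDIDATES, weights=WEIGHTS, k=1)[0]

rng = random.Random(42)  # seeded so the demo is repeatable
answers = [ask_toy_llm(rng) for _ in range(10)]
print(answers)
print("distinct answers across 10 identical prompts:", len(set(answers)))
```

Ten identical “queries” yield several different answers, which is exactly why a screenshot of one chatbot response is not a ranking.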

Furthermore, platforms such as OpenAI and Anthropic do not provide public access to real-time user query logs. In other words, the tools that claim to monitor your brand mentions are peering into a black box. They don’t know what real people are typing into the chat window, and they can’t see the personalized context, conversation history, or geographical nuances that shape a response. Whenever a tool claims that your “AI Visibility” went up by 12%, it is measuring a data point that does not exist.

The synthetic data trap.

To work around this lack of real-world data, most AI visibility tools rely on synthetic data. Using their own scripts, they feed “simulated prompts” – hypothetical questions their developers think people might ask – into the AI’s API and count how frequently brands are mentioned. This isn’t tracking; it’s high-tech guessing.

This methodology creates a feedback loop of irrelevant information: if the tool’s developers use industry jargon in their prompts, the “visibility” they report will be skewed. According to recent research from SparkToro, you would need to run the same prompt over 100 times just to get a statistically significant baseline for AI recommendations. Most tools run a prompt once or twice, then generate a ranking for, say, a board meeting PDF.
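The statistics here are easy to check yourself. If we assume brand mentions across runs behave roughly like independent coin flips (a simplification), the normal-approximation margin of error on an observed mention rate shrinks only with the square root of the number of runs:

```python
import math

def mention_rate_margin(p_hat: float, n: int, z: float = 1.96) -> float:
    """95% margin of error for a mention rate observed across n prompt runs
    (normal approximation to the binomial -- a deliberate simplification)."""
    return z * math.sqrt(p_hat * (1 - p_hat) / n)

# A brand mentioned in half of the runs:
for n in (2, 10, 100):
    print(f"n={n:>3} runs -> 50% +/- {mention_rate_margin(0.5, n):.0%}")
```

With two runs, a “50% visibility score” comes with an error bar of roughly ±69 points – statistically meaningless. Only at around a hundred runs does it tighten to about ±10 points, which is the point the SparkToro research makes.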

GEO: Old wine in a digital bottle.

Next, we have “GEO,” the rebranded cousin of SEO. Some proponents claim the strategy requires a whole new mindset, talking about “semantic fusion” and “citation-worthy authority” as if they’ve discovered fire. In reality, adding structured data, using clear headings, and building high-authority backlinks are the same fundamental principles most SEO professionals already preach.

In recent years, everyone in marketing has been chasing AI SEO tricks that don’t work. The problem? In their effort to “game” the system, marketers over-polish their websites, sometimes following advice reminiscent of “black hat” SEO tactics from 2004. Even if you spend hours tweaking hidden text, the AI will still point people to the same Wikipedia pages and Reddit threads it trusts. You can’t trick an AI into recommending you with a few clever keywords; it’s looking at the bigger picture of what the world says about you.

For many agencies, GEO is simply a rebranding that justifies higher fees. As cognitive scientist Gary Marcus suggests, it’s fundamentally difficult to “optimize” for a system that doesn’t actually understand the content it’s summarizing. In the eyes of some, GEO is just more hype in an already forming bubble.

However, it’s not all smoke and mirrors. While much of the advice sounds like recycled SEO advice, like “use headers” and “be authoritative,” Large Language Models (LLMs) process data differently.

Unlike traditional crawlers, AI models prioritize semantic density and how well your content matches concepts in their training data. As part of their synthesis, they aren’t just looking for a keyword match; they need a reliable source. Getting your brand mentioned, over and over again, alongside the information a model is looking for helps the AI associate you with those prompts and discussions.

The bottom line? When GEO is marketed as a magic bullet, it often feels like a scam. Basically, it’s just high-level SEO with a few additional tweaks to ensure an AI can digest your facts.

Year Three: ChatGPT Didn’t Kill Google

A year ago, the narrative that ChatGPT would destroy Google reached a fever pitch. The alarmism was everywhere: Why would anyone click a blue link again if a chatbot can answer all of their questions? Investors panicked and predicted a “search-less” future, as if traditional search were a dinosaur watching the asteroid hit.

The great scale mismatch.

The reality of 2026 has been a cold shower for those who believed Google was dead. For all the noise, there is an enormous disparity in usage: Google still handles over 13.7 billion searches per day as of February 2026. Despite ChatGPT’s impressive 2.5 billion daily prompts, the “Traffic Paradox” persists: Google sends nearly 200 times as much traffic to the open web as ChatGPT does.

Even with all the talk about AI disruption, the math just doesn’t add up. If you want to check the weather, look up a stock price, or find the nearest hardware store, you don’t need a conversation buddy; you just need a traditional index that’s quick and easy to use.

Fundamental differences in utility.

ChatGPT hasn’t killed Google because the two tools are fundamentally different. ChatGPT is an advanced language model, a “reasoning engine,” whereas Google is a real-time search engine with a comprehensive, live web index. ChatGPT excels at debugging code and writing sonnets, but it struggles with hallucinations and, at times, relies on stale training data.

While Google thrives on the “now,” standalone chatbots can often be a day late and a dollar short when a major news event or product launches. This is a fundamental divide: AI excels at generative tasks (writing, brainstorming, coding), but Google remains the gold standard for retrieving information (finding the truth).

However, the line between the two is blurring rapidly. By integrating its own generative AI, Gemini, directly into the search experience, Google has effectively bridged the gap, baking its rivals’ best features right into the search bar.

The result? Users get the creative power of an LLM and the real-time accuracy of the world’s best search engine. Despite the early skepticism, Google’s plan is clearly working. With immediate, AI-powered answers alongside traditional links, the company has turned search into a more versatile assistant and proven that “search” and “answers” belong together.

The “AI mode” default that wasn’t.

About eight months ago, a Google product lead speculated that “AI Mode” (a fully generative search experience) would become the default. The SEO world went into a tailspin, fearing the erasure of organic links. The company quickly walked back those claims: a generative-only interface is computationally expensive, and some 90% of users simply don’t want it.

Instead, we’ve seen a pivot to a “blended” model. Search remains robust because it provides something chatbots do not: verification. With AI-generated “slop” flooding the internet, users are increasingly seeking out original sources. The blue link isn’t a relic; it’s the receipt.

The Real Impact (or Lack Thereof)

Is AI useless in SEO? No. There is, however, a difference between a new shiny tool and an overhaul of the entire structure. We have moved from the fantasy of autonomous SEO to a much more pragmatic reality in 2026.

Incremental tools, not revolutionary overhauls.

Artificial intelligence has not rewritten the laws of search; it has automated them. Today, AI’s impact is far more industrial than artistic. We’re seeing massive efficiency gains in “grunt work” – tasks that used to take days are now completed in minutes. With AI, you can identify keyword clusters, develop content outlines, and run technical audits that flag broken links before search engines find them. It’s an amazing tool for finding content gaps and analyzing SERP trends, but it remains an optimization tool: it accelerates strategy execution, it doesn’t replace it.
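As a concrete example of that “grunt work,” the first half of a broken-link audit – pulling every href out of a page – is trivial to automate with nothing but the standard library. This is an illustrative sketch, not any particular tool’s implementation; a real audit would follow up each extracted URL with an HTTP request and flag the ones that don’t return a 200.

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collect the href of every <a> tag on a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def extract_links(html: str) -> list:
    """Return every candidate URL on the page, in document order."""
    parser = LinkExtractor()
    parser.feed(html)
    return parser.links

# Hypothetical page snippet for demonstration:
page = '<p><a href="/pricing">Pricing</a> and <a href="https://example.com/old-post">an old post</a></p>'
print(extract_links(page))
```

The boring part (harvesting and checking thousands of links) is exactly what machines should do; deciding which broken pages are worth rebuilding is still a human call.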

AI as a co-pilot, not an autonomous driver.

The fatal flaw of the “automated SEO” narrative is its assumption that machines can understand human intent without human involvement. As we’ve learned, AI works best as a co-pilot, providing data-backed insights while experts take the wheel. A bot may write a coherent 800-word article, but it lacks the lived experience and emotional intelligence needed to build trust.

Search engines have adapted too; current algorithms detect “semantic noise,” grammatically perfect content with no information gain. If AI-only content lacks human oversight to inject unique perspectives or real-world case studies, it is largely doomed to end up on page ten. For real success, the machines need to crunch the data, while humans have to decide what is most important to the business and brand.

Beyond the Bubble

AI in SEO has been a three-year performance of over-hyped predictions aimed at inflating valuations and corporate profits. From the “Content Apocalypse” to “GEO,” every “revolutionary” wave has collided with the same wall: quality, relevance, and human expertise.

Gary Marcus’s “AI bubble” warnings have proven prescient. There is a canyon between what a CEO says on stage and what a tool does in the field. In 2026, the key to sustainable success isn’t chasing the latest viral essay or the investor hype cycle. Rather than optimizing for bots, focus on building a brand people trust.

Real people doing real jobs aren’t going anywhere; they’re just getting better at them with a little help from their AI friends.