The Hard Truth Big Tech Doesn't Want You to Hear About Artificial Intelligence.

Intellistake Technologies Corp
Editorial
10 min read
An op-ed by Gregory Cowles, Chief Strategy Officer and Co-founder of Intellistake
Disseminated on behalf of Intellistake Technologies Corp.
I've been thinking about this a lot lately. Maybe it's because I've spent over a decade in the trenches of digital currencies, and the last five years watching AI explode into mainstream consciousness. Or perhaps it's because my brain got wired a certain way during my many years working day-to-day in engineering, where I learned that every system has inefficiencies hiding in plain sight.

That background taught me to see things differently, to spot problems that others might overlook. These days, companies bring me in to help them navigate complex technology strategies and bring them to market, but I can't shake the systematic thinking that comes from years of troubleshooting critical systems. And here's what keeps me up at night: we're building the future of AI on a foundation that has some serious structural problems.

What's interesting is that while everyone's focused on Grok's or ChatGPT's latest features, there's a whole other development track happening, one that's securing multi-million-dollar hardware partnerships and approaching AI development from a fundamentally different angle, and most people don't even know it exists.

I'm going to walk you through why this matters. This is probably longer and more involved than your typical tech/business blog, but once you see the full picture of what's actually happening and who's really positioning themselves for long-term control, it changes everything you think you know about AI's future.

Let me explain what I mean…

The Strategic Thinker's Dilemma

There's this peculiar thing that happens when you spend your career looking for better ways to solve problems. You start seeing inefficiencies everywhere. I see a traffic light, and I'm mentally redesigning the timing algorithm. I use a banking app, and I'm sketching out how blockchain could eliminate three unnecessary intermediaries. It's like having X-ray vision for operational waste—sometimes enlightening, often frustrating.

This mindset is probably why companies bring me in to help them navigate complex technology landscapes and translate them for a broader audience. They need someone who can step back from the noise and find the signal, someone who can spot the patterns that others miss.

The biggest innovations never come from those grand, moonshot ideas that get all the TechCrunch headlines. They come from someone looking at an everyday annoyance and thinking, "There has to be a better way."
The Post-it note wasn't born from a grand vision to revolutionize office communication… it came from a guy who needed a bookmark that wouldn't fall out of his hymnbook. Velcro came from a guy annoyed by burrs sticking to his dog. The computer mouse emerged from someone frustrated with command-line interfaces…

You get the picture. And that's exactly how I approach blockchain and AI: not as separate, revolutionary technologies, but as complementary solutions to very real, very annoying problems every one of us deals with every single day.

The Efficiency Equation: Why These Technologies Belong Together

Think about blockchain for a moment. Strip away all the crypto speculation and NFT nonsense, and what do you have? A system that provides two things we desperately need: efficiency and immutability. It's like having a filing cabinet that never lies and never forgets, maintained by a thousand different librarians who all double-check each other's work.

Now add AI to that equation. Suddenly you have not just an incorruptible record-keeping system, but one that can learn, adapt, and optimize itself. It's like that filing cabinet could reorganize itself based on what you actually needed, predict what documents you'd want next, and even generate new insights from the patterns it observed.

But here's where things get interesting, and where my systematic thinking starts waving some very large red flags.

The way we're currently deploying AI creates some serious vulnerabilities that most people aren't even considering…

The Centralization Problem: Who's Really in Control?

“If you're building a bridge, should the strength and design be optimized for the people who need to cross it, or for the construction company's profit margins?”
I've been fortunate enough to work with AI and digital currency portfolios worth over $2.5 billion, which means I've seen how the sausage gets made in both centralized and decentralized systems. The difference is stark, and frankly, it should concern anyone who cares about the direction technology is taking us.

When big centralized tech companies build AI, they're not building it for humanity. They're building it for shareholders. That's not a moral judgment—it's just basic economics. Every decision, every feature, every capability gets filtered through the lens of "How does this generate revenue?" or "How does this create competitive advantage?"

From a business leadership perspective, that's great. But think about it this way: if you're building a bridge, should the strength and design be optimized for the people who need to cross it, or for the construction company's profit margins?

Or imagine if all the world's libraries were owned by a single corporation that could decide which books you're allowed to read, which research you can access, and which ideas get buried in the basement. It sounds like a familiar historical dictatorship, or a sci-fi flick we've all seen before. The incentives matter. They matter more than most people realize.

With decentralized AI built on blockchain, we flip that equation entirely. Democratic ownership means democratic governance. Instead of a boardroom in Silicon Valley deciding how AI develops, we have actual users—all of us—having a say in the direction of the technology that's reshaping our world.

This isn't just idealistic thinking. It's practical risk management. Because what we're building right now has some seriously fragile points.

The Single Point of Failure Problem

Here's a thought experiment that should terrify anyone in business: What happens if ChatGPT (or Claude, Grok, etc.) just... stops working tomorrow?

I'm serious. Take a moment and think about it. How many businesses would grind to a halt? How many students couldn't complete their assignments? How many writers would stare at blank screens? We've become so dependent on these centralized AI systems that a single point of failure could cascade through the entire global economy.

This isn't hypothetical thinking; this is basic systems design. Any engineer who's worked on critical infrastructure will tell you that single points of failure are the enemy of robust systems. I learned this lesson managing energy networks, where a single transformer failure could black out entire regions. It's like building a city with only one road leading in and out. Or imagine if all the world's electricity came from a single power plant, all global communication ran through one server, or all financial transactions required approval from one bank. You get the picture. When that system fails, not if, but when, everything connected to it falls like dominoes.

Decentralized systems are designed to eliminate this risk. It's like the difference between having one massive power plant serving an entire city versus having solar panels on every roof, wind turbines in every backyard, and micro-grids that can operate independently. When one goes down, the lights stay on.
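
That resilience claim can be made concrete with a back-of-the-envelope reliability calculation. The numbers below are purely illustrative assumptions, not measurements from any real network:

```python
# Back-of-the-envelope availability comparison (illustrative numbers only).
# A centralized service is down whenever its single provider is down; a
# decentralized network with redundant nodes is down only when every node
# fails at once (assuming independent failures).

def availability_single(node_uptime: float) -> float:
    """Availability of a system with exactly one provider."""
    return node_uptime

def availability_redundant(node_uptime: float, nodes: int) -> float:
    """Availability when any one of `nodes` independent replicas suffices."""
    return 1 - (1 - node_uptime) ** nodes

uptime = 0.99  # assume each node is up 99% of the time
print(f"1 node:  {availability_single(uptime):.4%} available")
print(f"5 nodes: {availability_redundant(uptime, 5):.8%} available")
```

With five independent 99%-uptime nodes, the chance that all of them are down at the same moment is 0.01 to the fifth power, roughly one in ten billion. That's the whole argument for redundancy in one line of arithmetic.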

The LLM Misconception: Missing the Forest for the Trees

Now, let me address something that's been bugging me for months. Most people think AI is just ChatGPT and its cousins. That's like thinking the internet is just email, or that transportation is just bicycles. Large Language Models (LLMs) are impressive, sure, but they're also Artificial Narrow Intelligence: essentially very sophisticated pattern-matching systems that excel at one specific task, predicting the next word in a sequence. It's like Clippy, the Microsoft Office paperclip from the late '90s, but with a high school diploma.
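
To make "predicting the next word" concrete, here's a toy sketch of the core idea behind language modeling: count which word tends to follow which, then emit the most likely continuation. Real LLMs do this with neural networks and billions of parameters rather than a lookup table; this bigram example is purely illustrative.

```python
from collections import Counter, defaultdict

# Toy bigram "language model": the core trick behind LLMs, reduced to counting.
corpus = "the cat sat on the mat the cat ate the fish".split()

# For each word, count the words observed immediately after it.
following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1

def predict_next(word: str) -> str:
    """Return the word most frequently seen after `word` in the corpus."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" follows "the" twice, more than any other word
```

Scale the counting up by many orders of magnitude, swap the table for a neural network, and you have the essence of what an LLM does, which is precisely why it's narrow intelligence rather than general intelligence.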

But here's what most people don't realize: LLMs represent maybe 5% of what AI actually encompasses. The real AI landscape is vast and diverse, like a sprawling ecosystem where LLMs are just one species among hundreds.

Take computer vision, for instance. We've got AI systems that can diagnose skin cancer from photos more accurately than dermatologists, identify crop diseases from satellite imagery, and guide autonomous vehicles through complex traffic scenarios. These systems process visual information in ways that are fundamentally different from how LLMs process text.

Then there's reinforcement learning—AI that learns through trial and error, like a digital version of how we learned to ride bikes. These algorithms have mastered games like Go and StarCraft, but more importantly, they're optimizing energy grids, managing supply chains, and even discovering new drug compounds by essentially "playing" with molecular structures until they find combinations that work.
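
That trial-and-error loop can be sketched in a few lines. The payoff numbers and the epsilon-greedy strategy below are a textbook toy (a multi-armed bandit), not any specific production system:

```python
import random

# Trial-and-error learning in miniature: an epsilon-greedy multi-armed bandit.
# Try actions, keep a running average of each action's reward, favor what worked.
random.seed(0)

true_payoffs = [0.2, 0.5, 0.8]   # hidden reward probabilities (toy setup)
estimates = [0.0, 0.0, 0.0]      # the learner's running reward estimates
counts = [0, 0, 0]

for _ in range(2000):
    if random.random() < 0.1:                 # explore 10% of the time
        action = random.randrange(3)
    else:                                     # otherwise exploit the best estimate
        action = estimates.index(max(estimates))
    reward = 1 if random.random() < true_payoffs[action] else 0
    counts[action] += 1
    # Incremental average: pull the estimate toward the observed reward.
    estimates[action] += (reward - estimates[action]) / counts[action]

print("best action learned:", estimates.index(max(estimates)))
```

Nobody tells the learner which action pays best; it discovers that by trying things and keeping score. The systems optimizing energy grids and drug candidates run on vastly more sophisticated versions of the same loop.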

Neural networks come in dozens of varieties: convolutional networks that recognize images, recurrent networks that understand sequences, generative adversarial networks (GANs) that create entirely new content by having two AI systems compete against each other. It's like having a toolbox where each tool is designed for specific problems.

But perhaps most intriguingly, we're seeing the emergence of neuro-symbolic AI—systems that combine the pattern recognition of neural networks with the logical reasoning of traditional symbolic AI. Think of it as giving AI both intuition and rationality. These systems can not only recognize a cat in a photo but also reason about why that matters in a given context.

There's also neuromorphic computing, which mimics the actual structure of brain neurons, and quantum machine learning, which leverages quantum mechanics to process information in ways that classical computers simply can't match. We're talking about AI that could theoretically solve certain problems exponentially faster than anything we have today.

The point is, when people say "AI," they're usually referring to one narrow slice of a much larger pie. And that narrow slice, LLMs, isn't going to magically evolve into Artificial General Intelligence any more than a really good calculator is going to become conscious. AGI will likely emerge from the intersection and integration of multiple AI approaches, not from making ChatGPT bigger and bigger.

The AGI Question

Here's where it gets really interesting—and where my strategic thinking diverges from conventional wisdom. When Artificial General Intelligence (AGI) arrives—that's AI that matches or exceeds human intelligence across all cognitive tasks, not just narrow specializations—it won't be developed by Google or OpenAI or Microsoft.

I know that sounds counterintuitive. These companies have the resources, the talent, the data. But they also have the constraints. AGI isn't going to emerge from a system designed to maximize quarterly earnings reports. It's going to come from the intersection of multiple AI technologies, working together in ways that no single corporate lab can orchestrate.

Think of it like the internet itself. No single company "invented" the internet. It emerged from the intersection of multiple technologies, protocols, and innovations, many of them developed by different groups with different motivations. The result was something far more powerful than any single entity could have created.

That's what I see happening with decentralized AI development. Researchers sharing findings on immutable blockchains. AI models trained collaboratively across distributed networks. Innovation happening not in corporate silos, but in the open, transparent, democratic way that's always produced our most transformative breakthroughs.

The Path Forward: Building What Actually Works

Look, I'm not some starry-eyed idealist who thinks decentralization solves every problem. I've been in this space long enough to see plenty of projects crash and burn because they prioritized ideology over practical solutions.

But I am someone who's spent years looking at complex systems and asking, "What could go wrong?" And when I look at our current AI trajectory, I see a lot of things that could go very wrong indeed.

The combination of blockchain and AI isn't just additive—it's multiplicative. We're not just talking about better AI or better blockchain. We're talking about a fundamentally different approach to how technology serves humanity. One where the people using the technology have a say in how it develops. One where a single point of failure can't bring down critical systems that millions depend on.

Most importantly, we're talking about resilience. About building systems that can't be shut down by a corporate decision, can't be controlled by a handful of executives, and can't be weaponized against the very people they're supposed to serve.

Early Signals: The Decentralized AI Movement is Already Here

The future of decentralized AI isn't just theoretical—it's already taking shape. Consider what's happening with projects like SingularityNET, which has been quietly building decentralized AI where anyone can access AI services without going through Big Tech gatekeepers. Their research and development efforts, now part of the broader Artificial SuperIntelligence Alliance, represent exactly the kind of collaborative, open approach that I believe will define the next phase of AI's evolution.

The ASI Alliance itself—powered by the FET token—is fascinating because it has created a practical framework for decentralized AI governance. Instead of AI development happening behind closed doors, we're seeing transparent, community-driven decision-making about how AI technologies should evolve.

What's particularly interesting is how these decentralized projects are securing partnerships with the same hardware providers that Big Tech relies on. SingularityNET recently announced a $53 million investment in AI infrastructure, incorporating cutting-edge GPUs and CPUs from Nvidia, AMD, and Tenstorrent, along with advanced AI servers from ASUS and GIGABYTE [1]. Fetch.ai has initiated a $100 million investment campaign specifically for Nvidia H200, H100, and A100 GPUs [2]. This shows that decentralized AI isn't just competing on ideology—it's competing on the same technological playing field as centralized players.

Jim Keller, CEO of Tenstorrent, noted that "Tenstorrent's heterogeneous compute featuring our AI accelerator technology are the perfect fit to help them accomplish this goal" when discussing the ASI Alliance's AGI development efforts. When established hardware companies are publicly endorsing decentralized AI initiatives, it signals that this isn't just a fringe movement—it's a legitimate technological path forward.

Other projects are tackling different pieces of the puzzle. Bittensor (TAO) is building decentralized machine learning networks where AI models compete and collaborate in open markets. Render Network (RNDR) is democratizing access to the computational power that AI development requires. Each represents a different approach to the same fundamental problem: how do we prevent AI from becoming too centralized, too controlled, too divorced from the people it's supposed to serve?

These aren't the only solutions, and they certainly won't be the last. But they're early signals of a much larger trend, one where the future of AI (not just LLMs!) is being decided by users and broader society, not corporate boards.

The future of AI is decentralized. Not because it's ideologically pure, but because it's better strategy. It's more robust, more democratic, and ultimately more aligned with human flourishing than anything a centralized system could deliver.

And in my experience, when you build something that actually works better for the people using it, adoption follows. That's not idealism—that's just good business sense.

Sources:

[1] https://cointelegraph.com/news/singularitynet-invest-53-million-ai-infrastructure-modular-supercomputer
[2] https://cryptoslate.com/fetch-ai-invests-100-million-in-ai-blockchain-tech-introduces-rewards-for-token-holders/
This report contains "forward-looking information" concerning anticipated developments and events related to the Company that may occur in the future. Forward looking information contained in this report includes, but is not limited to, all statements in respect of market overview herein and any implication the resulting issuer’s growth and development will follow general trends in the market, the operations and business segments of the Company, and timely receipt of all necessary approvals.

In certain cases, forward-looking information can be identified by the use of words such as "expects", "intends", "anticipates" or variations of such words and phrases or state that certain actions, events or results "may", "would", or "might" suggesting future outcomes, or other expectations, assumptions, intentions or statements about future events or performance. Forward-looking information contained in this report is based on certain assumptions regarding, among other things, the Company will continue to have access to financing until it achieves profitability; the timely receipt of regulatory approvals; the ability to attract qualified personnel; the success of market initiatives and the ability to grow brand awareness; the ability to distribute Company’s services; and the ability to successfully deploy the new business strategy. While the Company considers these assumptions to be reasonable, they may be incorrect.

Forward looking information involves known and unknown risks, uncertainties and other factors which may cause the actual results to be materially different from any future results expressed by the forward-looking information. Such factors include risks related to general business, economic and social uncertainties; the sufficiency of our cash to meet liquidity needs; legislative, environmental and other judicial, regulatory, political and competitive developments; the inherent risks involved in the cryptocurrency and general securities markets; the Company may not be able to profitably liquidate its current digital currency inventory, or at all; a decline in digital currency prices may have a significant negative impact on the Company’s operations; the volatility of digital currency prices; the inherent uncertainty of cost estimates and the potential for unexpected costs and expenses, currency fluctuations; regulatory restrictions, liability, competition, loss of key employees and other related risks and uncertainties; delay or failure to receive regulatory approvals; failure to attract qualified personnel, labour disputes; and the additional risks identified in the "Risk Factors" section of the Company’s filings with applicable Canadian securities regulators.

Although the Company has attempted to identify factors that could cause actual results to differ materially from those described in forward-looking information, there may be other factors that cause results not to be as anticipated. Readers should not place undue reliance on forward-looking information. The forward-looking information is made as of the date of this report. Except as required by applicable securities laws, the Company does not undertake any obligation to publicly update forward-looking information.