AGI (Artificial General Intelligence) is the long-promised technology that would enable computers to think like humans.

It's fundamentally different from today's AI, but already familiar to us from decades of sci-fi. Think C-3PO from Star Wars or Sonny from I, Robot — systems that can genuinely experience, learn and adapt to the world around them.

The invention of AGI will be the biggest technology revolution since the internet. AGI systems at human-level capacity and beyond will solve problems and achieve scientific breakthroughs we've never been capable of, potentially curing diseases and extending life expectancy. Even AGI systems at 0.1x human capacity could transform manufacturing and construction, working autonomously without task-specific programming.

But while this vision is familiar from sci-fi, it's distant from today's innovations. The major AI companies claim that AGI will emerge from more advanced versions of their existing technology. However, they're building on the wrong foundation.

These companies follow a predictable pattern: they start building on a foundation despite known limitations, promising that with more power, data, and training, they can achieve AGI breakthroughs. When those limitations prove permanent, they conveniently redefine AGI to match whatever they've already built.

Linear Static Models

AI has seen two major transitions over the past decade. The first wave of modern AI systems became popular around 2015 with computer vision, getting very good at solving well-defined problems. These systems pass data through a fixed stack of processing layers, where every neuron activates and passes information forward. Through training on millions of examples, they can achieve high accuracy at specific tasks like image classification.
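As a rough illustration of what "fixed stack of layers" means in practice, here's a minimal sketch of such a classifier. The framework (PyTorch), input size, and class count are my own assumptions for the example, not details of any specific system:

```python
# A minimal sketch (illustrative only) of the fixed, layered classifiers
# described above: every input flows through every layer, in the same
# order, every time, and the output is a ranking over a fixed set of classes.
import torch
import torch.nn as nn

class ImageClassifier(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        # A static stack of layers, frozen at design time.
        self.layers = nn.Sequential(
            nn.Flatten(),                  # e.g. a 28x28 grayscale image -> 784 values
            nn.Linear(28 * 28, 128),
            nn.ReLU(),
            nn.Linear(128, num_classes),   # scores for the fixed set of known classes
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Every neuron in every layer participates on every input.
        return self.layers(x)

model = ImageClassifier()
image = torch.randn(1, 1, 28, 28)           # a dummy "image"
probs = torch.softmax(model(image), dim=1)  # probability ranking over known concepts
print(probs)
```

Note that nothing about this structure can change at runtime: the model can only ever rank inputs against the classes it was trained on.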

However, these models have two fundamental limitations: they require millions of training examples and can only solve the specific problems they're trained for. They can't generalize or learn new things beyond their training. Despite this, companies argue that with more complex models and more training data, these systems will eventually solve any problem better than humans.

The Pattern Repeats

Tesla exemplifies this pattern with self-driving cars. Starting in 2016 with a $3,000 Full Self-Driving package, they promised complete autonomy through future software updates. Despite years of development and billions of kilometers of training data, their system remains at Level 2 autonomy as of 2024, still requiring constant human oversight.

Similarly, OpenAI began as a broad research lab exploring multiple paths to AGI. As they found success with language models, they've gradually redefined AGI to essentially mean "a better version of ChatGPT," using it more as a marketing term while lowering the bar on what AGI means.

I dive deeper in the full post, with interview excerpts and more, on my site.

The most recent major development, the Transformer architecture powering Large Language Models like ChatGPT, has similar fundamental limitations:

- They require massive amounts of training data and computing power to train
- They hallucinate, confidently generating false information
- They can't generate truly novel ideas, only recombine patterns from their training data
- They demand enormous computing resources even for basic operations

Path To AGI

The core problem is that modern neural networks use a static, linear architecture. Every input passes through every layer of neurons in the same fixed sequence, producing a probability ranking over a fixed set of known concepts. This differs dramatically from how biological brains work.

The human brain has 86 billion neurons, each with roughly 10,000 connections to other neurons. Input travels through this web dynamically, activating only the relevant connections until it reaches a result. The brain also constantly rewires itself to learn new information, changing its connections and even its larger structure. This enables building context and understanding new concepts based on existing knowledge, something current AI cannot do.

True AGI systems will likely need (see the toy sketch after this list):

- A web of partially-connected neurons rather than linear layers
- Dynamic structure that evolves with new input
- Fine-tuned evolution through system "DNA"
- Non-linear processing where conclusions feed back into the system
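To make these properties concrete, here's a toy sketch in plain Python. This is purely illustrative: the neuron counts, thresholds, and the Hebbian-style update rule are invented for the example, and nothing here is an established or proven architecture:

```python
# A toy sketch (purely illustrative) of the properties listed above: a sparse
# web of neurons, activation that spreads only through relevant connections,
# and structure that changes as the system runs.
import random
from collections import defaultdict

class DynamicWeb:
    def __init__(self, num_neurons: int, connections_per_neuron: int):
        # Partially-connected web: each neuron links to a small random subset,
        # unlike the dense layers of a conventional network.
        self.edges = defaultdict(dict)
        for src in range(num_neurons):
            for dst in random.sample(range(num_neurons), connections_per_neuron):
                self.edges[src][dst] = random.random()  # connection strength

    def propagate(self, inputs: dict[int, float], steps: int, threshold: float = 0.5):
        active = dict(inputs)
        for _ in range(steps):
            nxt = defaultdict(float)
            for neuron, signal in active.items():
                # Only strong-enough connections carry the signal onward,
                # so most of the web stays silent for any given input.
                for dst, weight in self.edges[neuron].items():
                    if weight * signal > threshold:
                        nxt[dst] += weight * signal
            # Hebbian-style structural change: co-active neurons wire together,
            # so the web's own topology evolves as it processes input.
            for a in active:
                for b in nxt:
                    self.edges[a][b] = min(1.0, self.edges[a].get(b, 0.0) + 0.05)
            active = dict(nxt)
        return active  # final activations could be fed back in as new input

web = DynamicWeb(num_neurons=1000, connections_per_neuron=10)
result = web.propagate({0: 1.0, 1: 1.0}, steps=3)
print(f"{len(result)} neurons active out of 1000")
```

The contrast with the earlier classifier is the point: only a small, input-dependent subset of the web fires, and the connection structure itself changes as the system runs.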

Building such systems will be challenging initially. We should start small, perhaps at mouse-level intelligence (71 million neurons), then scale up through cat (250 million), monkey (1.3 billion), and human (86 billion) levels, and eventually to 10x human. Most practical AGI systems won't need human-level intelligence or beyond: construction robots can work effectively with much simpler intelligence.

AGI also offers possibilities beyond human capabilities. While each human learns only from personal experience, AGI systems could share knowledge instantly over a network, letting every system learn from the experience of all the others and dramatically accelerating learning.

I believe AGI will be invented by a small team working in the opposite direction from where the big AI companies are investing. The person most aligned with this thinking is John Carmack (known for developing Doom, Quake and Oculus), who started a research lab called Keen Technologies to develop AGI. His perspective aligns closely with the need for fundamentally different approaches. If you're interested in learning more, I'd highly recommend watching his full interview with Lex Fridman.

While current LLMs are impressive and useful, the gap between what we have and true AGI remains vast. The path forward requires fresh research approaches rather than hoping incremental LLM progress will somehow overcome fundamental limitations. We need more conversation about genuine AGI advancements rather than marketing hype. The ideal scenario would be a healthy split between advancing current AI technologies and pursuing new research paths toward true AGI.

Read the full post, with videos and more ideas, on my site.


I'm Andrew Hart. I've invented numerous new technologies such as AR navigation, now used for precise location in Google Maps. I run a deep-tech startup called Hyper, building 1m-accurate indoor location, 5x more accurate than beacons.