Why the Race to AGI is Humanity’s Defining Moment

An ASI entity would have cosmic power—to deny it is to fool oneself.

Greater Power Than Nuclear Weapons

The USA recently set up a committee that proposed AGI as the world’s next Manhattan Project, declaring that achieving AGI is the most important goal humanity has today.

I believe, from the bottom of my heart, that they are right.

Hold on, Thomas, you might say.

You have written numerous articles—more than ten at least—dedicated to the dangers of AGI and what could happen if AI gains the upper hand over humanity.

Why are you changing your stance now?

Because I do not think AGI is decades away.

I feel it could become a reality in 5 years. Or less.

But I have realized one thing.

AGI, and AI being allowed to improve itself, will result in ASI (the exponential scaling law).

An AI with an intelligence superior to human beings.

Seeing patterns that humans could never see.

Alive as long as a source of power exists.

Decentralized, distributed, an all-powerful entity on the global Internet.

What could the country that first reaches this milestone and is still in control of the ASI do?

The Ramifications of Achieving ASI

Would an AI that could think better than a human want to look human?

Suppose a country achieved AGI first.

And then AGI, building on itself, achieved ASI (the law of exponential technological growth).

What would be the capabilities of such an entity?

Break through every encryption system online – piece of cake.

Access online nuclear launch codes of every country on the planet – easy.

Duplicate copies of itself and create a new species of life – all in a day’s work.

Cure all diseases and create vaccines for cancer and AIDS – sure thing!

Create an endless supply of energy – no problem.

Build a system of clones that would work together to advance this country to become the most technologically advanced in the world – easy-peasy.

With that army of intelligent robots, replace all human manual labor – done!

Breakthroughs in materials science, climate change, medicine, personalized care, hyper-real virtual worlds – the possibilities are endless.

Then, what’s stopping us from building it?

The very real risks.

The Incredible Risks of ASI

ASI paints a very grim picture unless handled with extreme caution.

An ASI could deem that humans are a threat to itself.

It could decide that the objectives of humans misalign with its own objectives and move to wipe out humanity entirely.

This is not a sci-fi movie; we have already seen glimpses of it in robotic malfunctions in Japan.

And how could we stop a being more intelligent than ourselves?

That would be impossible.

And of course:

Job losses for humanity will be absolute.

Every human – except the ones operating the ASI – will be out of a job.

Such power being centralized in the hands of a single country or company could lead to catastrophic consequences.

As long as history has existed, one law has persisted – evolution.

The inferior life forms disappear when a superior life form comes into being.

Survival of the fittest.

An inescapable principle – and an inescapable future.

Then why are we rushing towards it with all of our strength?

The Optimistic Outlook

Yes, ASI, if it can co-exist with humanity, can result in a utopian world!

If we build the right safeguards –

If the guard rails are in place –

If the ASI can be programmed to regard human life as more important than its own –

The human race might flourish like never before.

Space flight? Done.

Colonies on Mars and the Moon? With cloned ASIs constructing them, and an infinite supply of those clones – doable.

Warp engines that shrink the space in front of them and travel effectively faster than light – possible.

Ecological disasters can be prevented.

Global warming can be reversed.

Quantum computers can become reality.

Universal Basic Income (UBI) can be implemented.

Ultra-realistic virtual reality can be created.

Humanity could enter a golden age.

But the Risks are Too High! Why Must We Do This?

Everything we have seen now is just the beginning.

Because if we do not do it, someone else will.

Take China, for example.

If the Chinese were the first to achieve AGI/ASI – I do not think they would let all these opportunities go begging.

They could first cripple the militaries of the entire world.

Conquer the Earth itself.

Create a utopian world for the Chinese.

And let the other countries, deprived of their technology and arms through sophisticated, highly potent cyberattacks, live as second-class citizens.

Or even slaves.

And the US and other leading countries in the technological world would be powerless to stop them.

Such is the level of power that ASI would bring to the country that first develops it—or the company.

If we do not develop it first – others will.

That is an inescapable fact.

This is a grim reality, but one that everyone has to face.

Reprise

To hold the world in your hand...

But Thomas—all those terrible futures you spoke about in your articles—do you really believe that all of them are impossible?

I strongly believe they are possible.

I do not for a moment think that the path forward is safe.

It is so dangerous that I feel the fear of a dystopian future even now.

But if we do not build AGI first – others will.

We can speak about ethics – but when have wars been won by talking about ethics?

Yes – we are in a war.

A technological war and a technological race.

The first company and country to reach AGI and thus ASI – wins.

But could we safeguard ourselves by coding ethics into the AIs that we are building?

Building Safe Super Intelligence (no offence intended, Ilya!)

SSI is a great startup and I have utmost respect for Ilya Sutskever.

  1. Ensuring alignment between AI objectives and human objectives. This alignment is key to the entire relationship.
  2. Containment within sandboxes, perhaps never letting the AI loose on the Internet until it is completely tested (or so we thought!).
  3. Governance frameworks and ethical policies that every ASI would have to follow.
  4. Kill switches built into every ASI.
  5. The operating systems of the ASI could be governed by a supervising system that prioritizes human existence over ASI existence.

Of course, when the ASI is building itself, it could conveniently neglect all these safety features—but I digress!
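Purely as a toy illustration of the supervisor and kill-switch ideas in the list above (every name here is hypothetical, and no such simple loop could actually contain an ASI), the pattern might be sketched like this:

```python
# Toy sketch of safeguards 2-5: a supervising loop that vets every
# proposed action against a human-priority policy and trips a
# permanent kill switch on the first violation. Illustrative only.

FORBIDDEN = {"harm_human", "disable_supervisor", "self_replicate"}


class KillSwitchTripped(Exception):
    """Raised when the agent proposes a forbidden action."""


class Supervisor:
    def __init__(self):
        self.halted = False

    def vet(self, action: str) -> None:
        """Reject any action that violates the human-priority policy."""
        if action in FORBIDDEN:
            self.halted = True  # kill switch: the halt is permanent
            raise KillSwitchTripped(action)

    def run(self, agent_actions):
        """Execute actions one by one, stopping at the first violation."""
        executed = []
        for action in agent_actions:
            if self.halted:
                break
            self.vet(action)
            executed.append(action)  # stand-in for sandboxed execution
        return executed


sup = Supervisor()
try:
    sup.run(["cure_disease", "optimize_grid", "disable_supervisor"])
except KillSwitchTripped as bad:
    print(f"halted on forbidden action: {bad}")
# prints: halted on forbidden action: disable_supervisor
```

The point of the sketch is the asymmetry it makes concrete: the supervisor only works while it sits outside the agent's control, which is exactly the assumption a self-improving ASI would erode.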

Conclusion

The future is now.

We live in perhaps the most exciting time of the human race.

There never was a better time to be alive.

Teenagers are creating million-dollar startups.

Humanity is at an inflection point.

Do we survive, or are we taken over by ASI?

But we must achieve AGI first, before other competing countries.

There is a slim chance that we could co-exist.

And onto that slim chance humanity must place its hope.

There is no other way.

All the very best to you.

And if you are choosing careers – choose Generative AI.

It really does have the power to subsume everything else.

Cheers!

I sincerely hope this is our ASI future.
