Season 2,
45 Min

Episode 71: Artificial Intelligence, Super-Intelligence & Our Human Future

October 02, 2017

Stephen Hawking says that AI will be “either the best or the worst thing ever to happen to humanity”, and Elon Musk claims it is the “biggest risk we face as a civilisation”. So what’s so threatening about the future of Artificial Intelligence? What might happen when computers get smarter than we are?


This week, Clay and Sarah discuss what it might mean to be human with the rise of Artificial Superintelligence.


In this episode:

* What is the concern over AI?

* Is all this stuff real or just science fiction?

* What is the difference between intelligence and superintelligence?

* Why AI might be the last invention we will ever need to make…

* Questions from the films Blade Runner, Her, and Ex Machina about how humans may emotionally relate to superintelligent machines

* Should superintelligent AI be given legal rights?

* Is our fear of AI more about us than them?


Nick Bostrom, author of Superintelligence, has spent much of his career researching AI. In an interview, he claims, “we’re like children playing with a bomb.” He explains this extreme view with the example of chimpanzees and humans: many animals, including chimpanzees and tigers, are physically far stronger than humans.

However, humans have gained power over these creatures not through physical strength but through superior intelligence. Our intelligence has transformed landscapes, and transformed the earth itself. So, Bostrom reasons, why should we expect to stay in control of AI once we create superintelligence, an intelligence superior to our own?

Sam Harris offers a similar argument in his TED talk, “Can We Build AI Without Losing Control Over It?”. As we continue to improve our technology, Harris claims, it is only a matter of time before we build machines that are smarter than we are: machines that can process information, learn, make new connections, and produce new innovations and inventions far faster than any human.

The worry is not about malevolent machines that set out to destroy us. It is more likely that machines will come to regard us as we regard ants: we allow them to live, but when our goals and values diverge (when ants get in the way of a clean house, or of our kids playing in the garden), we annihilate them without a second thought.

We’ve all done this, whether to ants, spiders, or other bugs… and we should ask ourselves why we feel differently about killing animals that are less intelligent than we are. As intelligence in a species rises, say to the level of a tiger or a chimpanzee, do we become less willing to treat it with detached disregard? And then… what does that say about how machines might regard us once they are exponentially more intelligent than we are?

For me, all this reflection on AI, and on what makes us human beyond our intelligence, highlights how little we still understand about life itself: what makes one thing ‘alive’ and another thing ‘not alive’, particularly when we look at the atomic level and see exactly the same atoms in both? As Max Tegmark, author of Life 3.0, reminded me, from the standpoint of physics we are all just atoms arranged in a particular way. But if we can’t explain what makes some arrangements of atoms alive, how can we begin to have a conversation about the difference between intelligent machines and humans?

Consciousness? Self-awareness? What makes us human? Perhaps that is the ultimate question AI can inspire us to answer.
