
PROFESSOR STEPHEN HAWKING, Prophet of Doom: AI, The Matrix, Killer Robots & Alien Overlords…

Human beings aren’t just getting greedier, but stupider. And it’s happening just in time for us to experience the AI takeover, the alien invasion and the end of the earth, among other grim spectacles awaiting us.

That’s according to Professor Stephen Hawking: and, really, it isn’t a particularly shocking statement. I’m not entirely sure when it started to happen – but we appear to be heading fast towards the comedy-dystopian future envisioned in the cult movie Idiocracy.

For anyone who’s never seen Idiocracy, it depicts a future in which mankind is so stupid that when a time-displaced person from the present day gets stuck there, he immediately becomes the most intelligent person on the planet – a saviour figure who has to become the source of all decisions and guidance for a human society that has lost its intellectual capacity.

Professor Hawking may have been somewhat tongue-in-cheek with his observations – he did have a wry sense of humour, as evidenced by his cult appearances in Futurama, The Simpsons, The Big Bang Theory and Star Trek: The Next Generation. But there’s also no doubt that he was making a serious point.

In an interview last year with Larry King on the terrific Larry King Now talk show (which airs on RT in the UK), Professor Hawking identified increasing greed and stupidity as the biggest threats to humanity’s survival, arguing that human beings are becoming stupider and greedier by the day.

And that this is going to push humanity towards extinction-level crises earlier than once predicted.

Hawking had in recent years – particularly the last year – become something of a prophet of doom, to the extent that he was even starting to be lightly mocked for his frequent warnings of catastrophe.

Recently, Hawking proposed that human beings may have as little as 100 years left on the planet.

His view of the human race’s situation seems to have become so grim that he was advocating escape from planet Earth – that mankind should begin moving into space as soon as possible, so that some part of humanity might survive even if Earth-based civilisation itself doesn’t.

He warned that if humans don’t grow into an inter-planetary race soon and settle on other worlds, our species could die out within the next century.

Last year, Professor Hawking was very forthright in his warnings about the development of artificial intelligence and robots, cautioning that AI will quickly reach a point at which it becomes a new form of life, entirely capable of outgrowing and outperforming human beings – with the likelihood that it might one day seek to replace us entirely. “I fear that AI may replace humans altogether,” he said. “If people design computer viruses, someone will design AI that improves and replicates itself.”


Whether the professor was envisioning a scenario like in The Matrix movie isn’t clear – though that kind of eventuality is certainly one way to interpret his commentary. In 2014, he warned that AI could prove to be our “worst mistake in history”.


As in The Matrix movie mythology, the idea is that advanced AI could completely outsmart us before we’ve even figured out what’s going on. “One can imagine such technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand…”

Moving away from Matrix territory and more into the territory of the Terminator mythology, Hawking also put his name to an open letter from the Future of Life Institute, calling for a prohibition on the development of autonomous weapons that are “beyond meaningful human control”. The fear, shared by numerous scientists and experts, is that we’re not far away at all from the deployment of autonomous systems (militarised AI or robotic warfare) in battlefield scenarios – a development described by some as the ‘Third Revolution in Warfare’, the first two being gunpowder and nuclear weapons.

In the last couple of years, Hawking also drew a lot of attention for his warnings about the dangers of contact with extra-terrestrial intelligences, cautioning that we – as a species – should be wary of seeking out alien races, who could easily be hostile. More than that, he argued that intelligent or advanced alien civilisations would not think much of us, as we would be a primitive people to them. “If aliens visit us, I think the outcome would be much as when Columbus landed in America,” he said, during the Into the Universe series on the Discovery Channel, “which didn’t turn out well for the Native Americans.”


He said elsewhere, “If you look at history, contact between humans and less intelligent organisms has often been disastrous from their point of view, and encounters between civilisations with advanced versus primitive technologies have gone badly for the less advanced.”


Some of Professor Hawking’s recent statements and warnings had not been well received by various commentators and researchers, and in some cases he was accused of fearmongering, aping clichéd science-fiction ideas, or even attention-seeking. However, it could just as easily be argued that he was providing a counter-balance – after all, Stephen Hawking was hardly a Luddite or an anti-scientific mind, so his warnings were not motivated by the same things that motivate most of the anti-science trends that currently proliferate online, in conspiracy-based commentary in particular.

However, while it’s impossible to know whether he was right or wrong in his specific predictions (we won’t know for a while yet), some of his general warnings are hard to argue with.

That we need to be highly circumspect and cautious about advancing Artificial Intelligence should be obvious. I would suggest that the combination of increasingly dumbed-down human societies and increasingly advanced AI makes it seem almost inevitable that humans – flawed, emotional, greedy, fat, temperamental – will eventually cede more and more agency and responsibility to the more efficient intelligence of AI.

If that ends up being the trajectory, then we could conceivably end up with our fate resting entirely in the hands of AI. From that point on, The Matrix scenario becomes much more likely.

It isn’t much of a leap – just think about how dependent we already are on the Internet, computers and mobile phones. And then think how dependent we might one day be on much more advanced and all-encompassing AI.

In terms of our own survival as the dominant species of the planet, the key would have to be finding – and carefully maintaining – some kind of agreed-upon equilibrium between human agency and AI, one in which we never reach the point of being entirely dependent on AI.

I think that’s the gist of what Hawking was warning about: the problem is, if our total dependency on the Internet (in the space of a mere 15 years or so) is anything to go by, that’s precisely what is destined to go wrong – we WILL end up entirely dependent on AI.

On Hawking’s view on the dangers of alien contact, I have always thought that it’s incredibly dangerous to court potential extra-terrestrial powers or races – because any ET power capable of interstellar travel is, by necessity, technologically superior to us and would view us as primitive.

And, as Hawking points out himself, if the history of human societies and interactions is anything to go by, advanced societies have a habit of abusing less-advanced societies. His Native-American analogy seems apt: I would also suggest that a truly advanced ET power might view us the way the British Empire viewed India.


In terms of establishing contact between mankind and some as-yet-unknown alien civilisation, we have no way of knowing in advance whether that civilisation would be benevolent or malevolent.


In terms of Professor Hawking advocating our migration from the planet and our becoming a space-faring civilisation, that seems like it must inevitably be a later stage in our evolution (whether or not it has anything to do with the planet becoming untenable for us). But even here, there are causes for concern.

It might be tempting to immediately envision it as some idyllic Star Trek situation where a peaceful, unified human civilisation ventures forth into space: but, again, given our track record, it might just as easily end up looking more like the movie Elysium – in which the wealthy elites live in luxury out in space, while the vast mass of lower-class humanity is left to slug it out and fight for scraps in abject conditions on the Earth.


It is arguably a rather grim picture Professor Hawking was painting of our future: but, in fairness, he was simply sounding the warning bells and trying to highlight potential pitfalls ahead. It is also worth noting that Professor Hawking – as brilliant as he was – wasn’t omniscient and was capable of being wrong: and of happily admitting having been wrong.

The much-hyped ‘Black Hole War’ between Hawking and Leonard Susskind – over whether information is destroyed in black holes – was won, according to most observers, by Susskind. And so Hawking altered his view of black holes, conceding that information is not, after all, lost in them.

I never finished reading A Brief History of Time when I was a teenager (I found the writings of Asimov and Carl Sagan much more accessible): I’ll need to attempt it again as a grown-up. I’m certain it is his contribution to scientific thinking and ideas that Professor Hawking will most likely be remembered for – and not his over-hyped warnings of imminent doom.

Still, those warnings should be taken seriously.


A longer version of this article can be found here at THE BURNING BLOGGER OF BEDLAM.



