The Existential Risk of Artificial Intelligence in 2023

The Existential Risk of AI: Safeguarding Lives and Society

In an increasingly digitized world, the rapid advancement of artificial intelligence (AI) has captured the attention of technologists, scholars, and industry leaders. One such figure, Eric Schmidt, the former Chief Executive Officer (CEO) of Google, has recently voiced concerns about the existential risk AI may pose.

Photo: CNBC

This article delves into the profound implications of AI and emphasizes the critical need for responsible AI development.

But first:

What is an Existential Risk, Exactly?

"Existential Risk is a Risk Event whose severity is so extreme that it threatens the very existence of humanity or at least its ability to organize the complex civilisation structures currently observed. While the likelihood of such risks might be exceedingly small compared to risks commonly identified and managed, the possibility of dramatic impact warrants principled investigation and (where possible) mitigating measures.

The risk itself is an unusual one. Unlike the hydrogen bomb, which also poses an existential risk but whose players are known, with AI the number of players in the game and their intentions are unknown.

When the CEOs of the most prominent AI firms are talking Doomsday...

it means either 1) they don't know what they are doing, or 2) major liability is heading their way.

The danger lies in the technology growing bigger than anyone can manage, yet no management or regulation exists, and those in the know may not reveal what they discover. With any new technology, hiding discoveries creates an advantage, but it also breeds deception. AI is the prisoner's dilemma on steroids, with an evil robot at the table.
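To make that incentive problem concrete, here is a minimal sketch of the prisoner's dilemma dynamic. The payoff numbers are illustrative assumptions of mine, not values from any real lab: no matter what the other lab does, hiding discoveries pays better, so everyone hides, and everyone ends up worse off than if they had cooperated.

```python
# Illustrative prisoner's dilemma for an AI race; the payoff numbers are
# hypothetical, chosen only to show the standard defection dynamic.
PAYOFFS = {  # (our_choice, their_choice) -> (our_payoff, their_payoff)
    ("share", "share"): (3, 3),  # mutual openness: the safest outcome
    ("share", "hide"):  (0, 5),  # the open lab gets exploited
    ("hide",  "share"): (5, 0),
    ("hide",  "hide"):  (1, 1),  # mutual secrecy: everyone worse off
}

def best_response(their_choice):
    """Return the choice that maximizes our payoff, given theirs."""
    return max(("share", "hide"),
               key=lambda c: PAYOFFS[(c, their_choice)][0])

# Hiding dominates: it beats sharing whatever the other lab does, so both
# labs hide and land on (1, 1) instead of the cooperative (3, 3).
print(best_response("share"))  # -> hide
print(best_response("hide"))   # -> hide
```

That is the whole trap: each lab's individually rational move produces the collectively worst outcome.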

Understanding the Nature of Current AI

Artificial intelligence refers to the development of computer systems that can perform tasks that typically require human intelligence. These tasks range from simple automation to complex decision-making. The objective behind AI is to enable machines to learn, reason, and adapt, mimicking human cognitive abilities to enhance efficiency and productivity across domains. Here's the biggest problem: according to Google's own engineers, perhaps no one understands the current nature of AI. It's like a black box. We don't need to know how it works to use it, and that's what they've built.

DeepMind, founded in 2010, is a black box, and few are aware that it is already beating humans at every challenge put to it. It navigated the streets of Paris with no maps or GPS, for example. Its agents learned to walk on their own, and it can produce images with no training. That is scary. What people think of as AI (ChatGPT) is nothing compared to what DeepMind can do.

The Cautionary Words of Former Google Employees

In a thought-provoking statement, the former CEO of Google joins a growing list of people emphasizing that AI poses an existential risk, one that could endanger lives and irrevocably disrupt an essential societal balance.

This warning underscores the importance of addressing the ethical and safety aspects of AI, prompting a deeper examination of the risks associated with its uncontrolled and unregulated growth. The reason for alarm: he is not the first thought leader in this field to make such an admonition.

Geoffrey Hinton, one of the pioneers of deep learning and a former Google employee, left the company saying his warnings would carry more credibility if he were not a Google employee. What the public is receiving, he says, is a flood of misinformation, and neither the public nor even the experts developing the technology have a good idea of what makes it work or how intelligent it already is.

Photo: AP

The 'Godfather of AI' Says We Have No Idea How Bad It May Get

Analyzing the Existential Risk

1. Unintended Consequences: As AI systems become more sophisticated and autonomous, the risk of unintended consequences grows. Machines lacking comprehensive ethical frameworks may inadvertently make decisions with severe repercussions for human lives and well-being. Until a disaster occurs, little will be done, say opponents of regulation.

This is the biggest one. Even the most well-intentioned researchers have little idea what the effects will be, and they cannot fully grasp repercussions that are impossible to anticipate this early in the game. The best intentions are worthless if the technology is stolen and misused.

2. Security Breaches: With the increasing reliance on AI in critical sectors such as healthcare, transportation, and finance, vulnerabilities in AI systems can lead to security breaches and cyber-attacks. Malicious actors could exploit these vulnerabilities, compromising personal data and infrastructure, or even weaponizing AI technology. Compounding obscure, immature technologies, especially ones with this much potential for harm, is, well, risky.

  • Consider Neuralink. It's a brain-computer interface (BCI).

This new technology is not regulated in any meaningful way. These two technologies are arising together, both with no clear objective and no regulation. Both are wait-and-see projects.

  • What if you get hacked? What if AI hacks you? What if an invading army, a terrorist, or your own government hacks you? What if an EMP detonates and fries your brain?

If AI is an existential crisis, the combination of AI and Neuralink is absolutely befuddling. Bioethics is reaching an apex: the uncertainty across all of these critical areas raises ethical concerns that, for now, can only be philosophized about. The potential for gain makes the risks invisible. Risk and reward are separated only by outcomes that haven't yet materialized.

These two major areas of technology will collide, and the potential risk grows with both.

As one security expert revealed, to no one's surprise, the security for these new BCI chips must be SOLID. (Since lowercase letters wouldn't mean as much, the expert was quoted in all caps.) Security expert or not, to say merely that it must be SOLID proves there's a lot even the experts just pretend to understand.

Photo: Neuralink

3. Job Displacement: The advancement of AI-driven automation has raised concerns about job displacement. As AI systems continue to evolve, certain industries may experience significant disruptions, leading to unemployment and socioeconomic challenges. But this was happening without AI, and there has been time to adjust, so hearing it cited repeatedly as a reason gets old. No one is going to regulate AI chiefly out of concern about unemployment. So far, only Wendy's (the fast-food chain) has committed to going fully robot-employee.

4. Bias and Discrimination: AI algorithms are trained on vast amounts of data, and if the data itself contains biases, the AI systems will perpetuate and can even amplify them. This can result in discriminatory outcomes, such as biased hiring practices or prejudiced decision-making in criminal justice systems, systematically entrenching social strata and division.
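To see the mechanism rather than just the claim, here is a minimal "bias in, bias out" sketch using synthetic, hypothetical hiring data (nothing here comes from a real system): candidates in both groups are equally skilled, but the historical labels approve one group half as often, and any model fit to those labels reproduces the skew.

```python
# A minimal "bias in, bias out" sketch with synthetic, hypothetical data:
# the "model" is just an estimate of P(hire | group) from historical labels.
import random

random.seed(42)

def historical_label(group, skill):
    # Same skill bar for everyone, but group B was approved half as often:
    # the bias lives in the labels, not in the learning algorithm.
    rate = 0.8 if group == "A" else 0.4
    return int(skill >= 5 and random.random() < rate)

train = []
for _ in range(1000):
    group = random.choice("AB")
    skill = random.randint(1, 10)
    train.append((group, skill, historical_label(group, skill)))

# Any model fit to these labels inherits the same skew; here we read the
# learned behavior straight off the per-group label distribution.
for g in "AB":
    labels = [y for gg, _, y in train if gg == g]
    print(f"group {g}: learned hire rate = {sum(labels)/len(labels):.2f}")
```

The point of the toy example: the algorithm itself is neutral; the discrimination arrives through the training data and is then laundered as "objective" model output.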

Promoting Responsible AI Development

Interdisciplinary Collaboration: Promoting collaboration among technologists, policymakers, ethicists, and social scientists fosters a holistic understanding of AI's implications and facilitates the development of comprehensive frameworks and regulations that protect society's best interests. In fact, the only new takeaway from Schmidt's announcement is the warning to work with China on AI rather than against it. If cooperation and collaboration aren't international, or aren't kept open-source (OpenAI is pulling back on its open-source offerings out of fear of open-source competition), it may already be too late. Elon Musk, for example, can't stop saying it's already too late, but that isn't stopping him either.

The warnings voiced by the former Google CEO highlight the need for responsible AI development to safeguard lives and protect society. As AI technology advances, it is imperative that we prioritize transparency, ethical guidelines, continuous monitoring, and interdisciplinary collaboration. By doing so, we can harness the potential of AI while mitigating the existential risks it poses. If, however, responsible development and deployment are relegated to "catch security holes as they arise" or "wait for meaningful harm," as some have suggested, then we are in mass denial: counting down to a crisis in the making, amused by watered-down chatbots while evil people do evil daily.

I hope we're prepared by then.
