AI Is the Future of Cybersecurity, for Better and for Worse

In the near future, as artificial intelligence (AI) systems become more capable, we will begin to see more automated and increasingly sophisticated social engineering attacks. The rise of AI-enabled cyberattacks is expected to cause an explosion of network penetrations, personal data thefts, and an epidemic-level spread of intelligent computer viruses. Ironically, our best hope for defending against AI-enabled hacking is to use AI ourselves. But this is very likely to lead to an AI arms race, the consequences of which may be very troubling in the long term, especially as big government actors join the cyber wars.

My research is at the intersection of AI and cybersecurity. In particular, I am researching how we can protect AI systems from bad actors, as well as how we can protect people from failed or malevolent AI. This work falls within the larger framework of AI safety: the attempt to create AI that is exceedingly capable but also safe and beneficial.

A lot has been written about problems that might arise with the arrival of “true AI,” either as a direct impact of such inventions or because of a programmer’s error. However, intentional malice in design and AI hacking have not been addressed to a sufficient degree in the scientific literature. It’s fair to say that when it comes to dangers from a purposefully unethical intelligence, anything is possible. According to Nick Bostrom’s orthogonality thesis, an AI system can potentially have any combination of intelligence and goals. Such goals can be introduced through the initial design or through hacking, or added later in the case of off-the-shelf software: “just add your own goals.” Consequently, depending on whose bidding the system is doing (governments, corporations, sociopaths, dictators, military industrial complexes, terrorists, and so on), it may attempt to inflict damage that is unprecedented in the history of humankind, or that is perhaps inspired by previous events.
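
To make the orthogonality point concrete, here is a toy sketch in Python (every name and goal below is invented for illustration): the search loop that supplies the system’s capability never changes, while the goal it pursues is just a pluggable parameter, which is what makes “just add your own goals” more than a figure of speech.

    import random

    def hill_climb(objective, start, steps=10_000, step_size=0.1):
        # Maximize an arbitrary objective over a real-valued state.
        # The loop is completely goal-agnostic.
        best = start
        best_score = objective(best)
        for _ in range(steps):
            candidate = best + random.uniform(-step_size, step_size)
            score = objective(candidate)
            if score > best_score:  # keep any improvement, whatever the goal
                best, best_score = candidate, score
        return best, best_score

    # The same search machinery serves entirely different goals:
    benign_goal = lambda x: -(x - 3.0) ** 2     # "seek x near 3"
    hostile_goal = lambda x: -(x + 42.0) ** 2   # "just add your own goals"

    print(hill_climb(benign_goal, start=0.0))   # converges toward 3
    print(hill_climb(hostile_goal, start=0.0))  # converges toward -42

Any improvement to the search loop makes it better at whichever goal happens to be plugged in, benign or otherwise.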

Even today, AI can be used to defend and to attack cyber infrastructure, as well as to increase the attack surface that hackers can target: the number of ways into a system. In the future, as AIs increase in capability, I anticipate that they will first reach and then overtake humans in all domains of performance, as we have already seen with games like chess and Go and are now seeing with important human tasks such as investing and driving. It’s important for business leaders to understand how that future situation will differ from our current concerns and what to do about it.

If one of today’s cybersecurity systems fails, the damage can be unpleasant, but it is tolerable in most cases: Someone loses money or privacy. But for human-level AI (or above), the consequences could be catastrophic. A single failure of a superintelligent AI (SAI) system could cause an existential risk event, one with the potential to damage human well-being on a global scale. The risks are real, as evidenced by the fact that some of the world’s greatest minds in technology and physics, including Stephen Hawking, Bill Gates, and Elon Musk, have expressed concerns about the potential for AI to evolve to a point where humans could no longer control it.
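
A back-of-the-envelope calculation, using purely illustrative numbers, shows why “rarely fails” offers little comfort at this scale: any fixed per-period probability of failure compounds toward near-certainty over time.

    def cumulative_failure(p_per_year, years):
        # P(at least one failure within `years`) = 1 - (1 - p)^years
        return 1 - (1 - p_per_year) ** years

    for years in (1, 10, 50, 100):
        print(years, round(cumulative_failure(0.01, years), 3))
    # -> 0.01, 0.096, 0.395, 0.634

At an illustrative 1%-per-year failure rate, the chance of at least one failure within a century is roughly 63%; only driving the per-period probability to zero changes the conclusion, which is why the binary framing below is the right one.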

When one of today’s cybersecurity systems fails, you typically get another chance to get it right, or at least to do better next time. But with an SAI safety system, failure or success is a binary situation: Either you have a safe, controlled SAI or you don’t. The goal of cybersecurity in general is to reduce the number of successful attacks on a system; the goal of SAI safety, in contrast, is to make sure no attacks succeed in bypassing the safety mechanisms in place.

The rise of brain-computer interfaces, in particular, will create a dream target for human and AI-enabled hackers. And brain-computer interfaces are not so futuristic; they’re already being used in medical devices and gaming, for example. If successful, attacks on brain-computer interfaces would compromise not only critical information such as social security numbers or bank account numbers but also our deepest dreams, preferences, and secrets. Such attacks have the potential to create unprecedented new dangers for personal privacy, free speech, equal opportunity, and any number of human rights.

Business leaders are advised to familiarize themselves with the cutting edge of AI safety and security research, which at the moment is sadly similar to the state of cybersecurity in the 1990s and to the current lack of security for the internet of things. Armed with more knowledge, leaders can rationally consider how adding AI to their product or service will enhance user experiences, while weighing the costs of potentially subjecting users to additional data breaches and other dangers. Hiring a dedicated AI safety expert may be an important next step, as most cybersecurity experts are not trained to anticipate or prevent attacks against intelligent systems. I am hopeful that ongoing research will bring additional solutions for safely incorporating AI into the marketplace.


