
Artificial Intelligence and The Top 6 Business Risks

Who is liable for the damages if an Artificial Intelligence (AI) system screws up? 

Implementing artificial intelligence can greatly benefit companies. If something goes wrong, however, the responsibility usually falls on the executives who championed the technology's adoption, and the consequences are expected to be shouldered by those in leadership positions.

“Many risks associated with adopting and using AI in business are widely known, as they have been extensively covered in both academic research and consulting reports. These risks range from bias and discrimination risks to operational and IT risks, from business disruptions to job eliminations,” wrote London School of Economics researchers Terence Tse and Sardor Karimov.

“Intelligent, but naive. This would be a fair description of the AI we see today,” Boris Cipot, Senior Security Engineer at Synopsys Software Integrity Group, told The Cyber Express. 

“There is no question of the power that today’s technology can bring to the table. ChatGPT is a great example of this. This particular AI tool has impressed many users and observers alike,” he pointed out.  

Even developers were impressed by how quickly this AI tool can explain algorithms or even suggest solutions to code presented to it.

The text it provides is easy to read and well formulated. Moreover, it explains certain scientific problems and terms in an easily digestible manner, making it fit for purpose in a tutoring or home-schooling environment.

However, there have been cases where users asked it to create code that could be used for questionable, possibly malicious, purposes. Accounts that made such requests have since been blocked, as an admin team behind the technology monitors for misuse.

Businesses need to be aware that this technology is still young, noted Cipot.  

“Even if AI has been present in many forms through research and movies, we must concede that AI technology as it exists today isn’t running at its full potential due to a lack of practical implementations. It is clear that AI can learn quickly, but in order to do so it needs use cases.” 

AI is a rapidly developing technology that has the potential to revolutionize the way businesses operate. However, with any new technology, there are also risks that need to be considered. The Cyber Express explains six key risks that businesses may face when implementing AI technology. 

Security risks 

As AI systems become increasingly sophisticated and handle ever-growing volumes of data, they also become increasingly targeted. This can lead to the loss or theft of sensitive information, which can have serious consequences for a business.

“Early in this battle, malware and antivirus vendors used to use hashes of files so they could successfully identify malicious code. Attackers built custom binaries for each attack or polymorphic viruses to evade hashes of known bad files,” Zane Bond, Head of Products at Keeper Security, told The Cyber Express. 

“After many years, defenders and those building the tools available to them, started identifying malicious behaviours instead of malicious files, which are much more difficult to mask,” he added. 
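Bond’s hash-matching point can be made concrete. The Python sketch below flags a file only when its SHA-256 digest exactly matches an entry in a hypothetical blocklist; because changing even a single byte of a binary yields a completely different digest, custom-built or polymorphic malware slips past this kind of check, which is why defenders moved toward behavioural detection.

```python
import hashlib
from pathlib import Path

# Hypothetical blocklist: SHA-256 digests of known-malicious files.
KNOWN_BAD_HASHES = {
    "0123456789abcdef" * 4,  # placeholder value, not a real sample
}

def sha256_of_file(path: Path) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as handle:
        for chunk in iter(lambda: handle.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def is_known_malicious(path: Path) -> bool:
    """Flag a file only on an exact match against the blocklist.

    A single flipped byte in a polymorphic sample changes the digest
    entirely, so any modified variant sails through this check.
    """
    return sha256_of_file(path) in KNOWN_BAD_HASHES
```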

One of the main security risks associated with AI is the potential for hacking and data breaches. As AI systems become more integrated into business operations, they also become a more attractive target for cyber criminals.  

This is because AI systems often store and process large amounts of sensitive data, such as personal information or financial transactions. A successful hack of an AI system can lead to the loss or theft of this data, which can have serious consequences for a business, including financial loss and reputational damage. 

“I’m not sure they become increasingly vulnerable to hacks. I think it’s better to say as vulnerable to attacks, just more often targeted. There’s a difference. And it’s not like defenders will simply stand still and take the attacks. As things are more targeted, defenders will respond,” Roger Grimes, Data Driven Defense Evangelist at KnowBe4, told The Cyber Express. 

Another security risk associated with AI is the potential for unauthorized access to the system. As AI systems become more sophisticated, they also become more complex, making it more difficult to secure them.  

“In the absence of robust privacy regulations (US) or adequate, timely enforcement of existing laws (EU), businesses have a tendency to collect as much data as they possibly can. The long-standing slogan in the privacy world, ‘Don’t collect it if you can’t protect it,’ stays valid and true,” Merve Hickok, Chair & Research Director at the Center for AI and Digital Policy, told The Cyber Express.

This can lead to vulnerabilities that can be exploited by cyber criminals to gain access to the system. Once they have access, they can steal data, disrupt operations, or even take control of the system. 

“AI systems tend to connect previously disparate datasets. This means that data breaches can result in exposure of more granular data and can create even more serious harms,” said Hickok.

To counter the risk, businesses should first think of data minimization, she explained.

“Collect only what is legal, and what you need. Businesses should also conduct impact analysis to determine what harms are possible if the data is breached at some point, and take necessary steps to mitigate risks.”

In addition to these risks, AI systems are also vulnerable to malicious attacks, such as denial of service attacks or malware infections. These attacks can disrupt operations, cause system failures, and lead to data loss. 

“The better question to ask is does AI bring about brand-new methods of attack? And will those new types of attacks result in prolific damage?” said KnowBe4’s Grimes. 

He compares the situation to that of cloud systems, which have all the vulnerabilities of non-cloud systems, plus the possibility of new attacks that can only happen in the cloud.

For a while, there were concerns about various forms of cloud-based attacks and each time one occurred, it received significant attention.  

However, over a decade into a heavily cloud-dependent era, it has become clear that the majority of successful and damaging attacks on cloud infrastructure are due to traditional methods such as social engineering, unpatched software, misconfigurations, and overly permissive permissions, he notes.  

In contrast, attacks that are specific to the cloud have caused minimal harm.  

“So, the best question is if the new types of attacks that are only possible against AI infrastructures will result in prolific damage over the long-term or is worrying about and mitigating traditional attack types more concerning,” he added. 

Robust security measures, such as firewalls, intrusion detection systems, and encryption to protect systems from unauthorized access and hacking, are among the most common preventive steps researchers suggest.

Conducting regular risk assessments to identify vulnerabilities, and regularly updating software and hardware to address the issues found, makes these measures more effective.

Equally important is training employees on cybersecurity best practices and creating a culture of security throughout the organization. This includes implementing policies and procedures to ensure that sensitive data is protected and that employees are aware of the risks associated with AI systems.
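As an illustration of the encryption recommendation above, the following is a minimal sketch using the Fernet interface from the widely used Python cryptography library. The record is invented for the example and the key handling is deliberately simplified; in practice the key would live in a secrets manager or KMS, never next to the data it protects.

```python
from cryptography.fernet import Fernet

# Illustrative only: generate a key in place. In production the key would
# come from a secrets manager or KMS, not be stored alongside the data.
key = Fernet.generate_key()
cipher = Fernet(key)

record = b'{"customer_id": 1234, "card_last4": "4242"}'  # hypothetical record

token = cipher.encrypt(record)    # ciphertext that is safe to store at rest
restored = cipher.decrypt(token)  # recovering it requires the same key
assert restored == record
```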

“The more realistic threat from these artificial intelligence tools is the opportunity for bad actors with limited resources or technical knowledge to attempt more of these attacks at scale,” said Keeper Security’s Bond. 

“Not only can the tools help bad actors create content such as a believable phishing email or malicious code for a ransomware attack, but they can do so quickly and easily. The least-defended organizations will be more vulnerable as the volume of attacks will likely continue to increase.” 

Bias and discrimination 

AI systems are only as good as the data they are trained on, and if the data contains biases, the AI system will also be biased. This can lead to discriminatory decisions and outcomes, which can have serious legal and reputational consequences for a business.  

For example, biased AI systems can perpetuate and even amplify existing societal biases such as racism, sexism, and ageism, leading to unfair treatment of certain groups of people. 

One of the main sources of bias in AI systems is the data that is used to train them. Data sets that are used to train AI systems are often created by humans and can contain biases that reflect the prejudices and stereotypes of the people who created the data. 

For example, a data set used to train an AI system to recognize faces may contain more images of people with lighter skin tones than of people with darker skin tones, which can lead to the system having difficulty recognizing people with darker skin. 

“The average share of employees on AI teams at respondents’ organizations who identify as women is just 27 percent; the share is similar among the average proportion of racial or ethnic minorities: 25 percent,” said a McKinsey survey on the AI tech-talent landscape, published on January 20, 2023. 

“Diverse and inclusive perspectives are especially critical in AI to prevent issues of bias in datasets and models, and distrust in outcomes.” 

Another source of bias in AI systems is the algorithms that are used to train them. Algorithms can be designed in such a way that they perpetuate biases that are present in the data.  

For example, an algorithm used to predict which applicants are likely to default on a loan may be based on historical data that shows that certain groups of people are more likely to default than others. This can lead to the algorithm unfairly denying loan applications from people who belong to these groups. 
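The loan example can be sketched in a few lines. The data below is synthetic and the coefficients are invented purely for illustration; the point is that a model trained on historically biased default labels reproduces that bias, scoring one group as riskier even at identical incomes.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000

# Synthetic history: "group" is a protected attribute, and past default
# labels were recorded under a process that penalised group 1 at equal income.
income = rng.normal(50, 15, n)
group = rng.integers(0, 2, n)
p_default = 0.15 + 0.20 * group - 0.002 * (income - 50)
historical_default = (rng.random(n) < p_default).astype(int)

X = np.column_stack([income, group])
model = LogisticRegression(max_iter=1000).fit(X, historical_default)

# At identical income, the trained model now assigns group 1 a higher
# default risk, reproducing the bias baked into the training labels.
same_income = np.array([[50, 0], [50, 1]])
print(model.predict_proba(same_income)[:, 1])
```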

“Several early AI examples showed how they were accidentally biased by their creators or were intentionally biased by the community. I think bias is something we should have been more concerned about in our traditional, non-AI, systems, but were not,” said KnowBe4’s Grimes.

Researchers and management consultants encourage businesses to ensure that their data is as diverse and representative as possible and that the algorithms that are used to train the systems are designed to minimize bias.  

According to Hickok, organizations need to take responsibility and accountability for the quality of their data, models, and governance methods.

“If, as a business, you are not developing and governing your AI systems responsibly, then you are short-sighted. You prefer short-term profit and hype over long-term sustainability and resiliency of your business,” said Hickok.  

“For some people, this fast churn of business, say rise quick, fall quick, can be acceptable. However, for those who truly care about brand growth, loyalty and interest of customers, and society in general, then you need to think of the impact you are creating.”  

According to them, regular assessment of the performance of AI systems helps identify and address any biases that may be present.

Another important step is to provide transparency in the decision-making process of AI systems. This includes providing clear explanations for how decisions are made by the system and making it easy for people to understand the reasoning behind the system’s decisions. This can help to build trust in the system and reduce the risk of discrimination. 
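For simple, linear models, one way to surface that reasoning is to report how much each input feature contributed to a given decision, as in the hedged sketch below. The feature names and data are hypothetical, and this is a minimal illustration rather than a full explainability framework.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical credit-decision features (already scaled for simplicity).
feature_names = ["income", "debt_ratio", "years_employed"]

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 3))
y = (X @ np.array([1.0, -1.5, 0.8]) + rng.normal(scale=0.5, size=1000) > 0).astype(int)

model = LogisticRegression().fit(X, y)

def explain(applicant: np.ndarray) -> dict:
    """Per-feature contribution to the decision score (coefficient * value)."""
    return dict(zip(feature_names, model.coef_[0] * applicant))

applicant = np.array([0.4, 1.2, -0.3])
print(model.predict(applicant.reshape(1, -1)))  # the decision itself
print(explain(applicant))  # which features pushed the score up or down
```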

Job displacement 

Microsoft said in mid-January that it would lay off 10,000 employees, or less than 5% of its global workforce. The US-based company has 18,000 employees on its payroll in India alone. The lay-off plan was announced days after reports that Microsoft is in talks to invest $10 billion in ChatGPT owner OpenAI as part of a funding round that would value the firm at $29 billion.

AI has the potential to automate tasks and jobs, which could lead to job losses and workforce displacement. While this can bring cost savings for businesses, it can also lead to social and economic disruption, particularly in areas where a large proportion of the workforce is at risk of being replaced by AI.  

AI will surely result in job loss and even the extinction of entire industries, but that is not a reason to stop it, according to KnowBe4’s Grimes. 

“What we need to do is train and re-train people so they are prepared for the new types of jobs. We don’t do a great job of that, but we need to. There would be less personal and societal pain if we did a better job.” 

Most of the wildly varying predictions about how many jobs will be lost to AI systems stem from mistaking process automation for AI systems, Hickok pointed out.

“I do not see AI as a serious threat to jobs per se. However it is definitely changing the way we do our work, shaping our behavior as we interact with these systems,” she said.

“There is definitely an impact on white and pink collar jobs, but I think the impact in those jobs will be more about using outcomes of AI systems for decision-making or delivery of work, rather than AI completely replacing the human currently doing the job.”

Dependence on AI vendors

As companies increasingly rely on AI systems, they also become more dependent on the vendors that provide them.  

One of the main risks associated with AI is the potential for dependence on a single vendor. As companies increasingly rely on AI systems to drive their operations, they also become more dependent on the vendors that provide those systems. This dependence can lead to a number of risks, including: 

Vendor lock-in: As companies become more dependent on a specific vendor’s AI system, it becomes increasingly difficult for them to switch to a different vendor. This can lead to a loss of bargaining power and result in higher prices or less favorable terms from the vendor. 

Lack of flexibility: Companies that rely heavily on a single vendor’s AI system may be less able to adapt to changes in the market or in their own operations. This can limit their ability to respond to new opportunities or challenges. 

Risk of vendor failure: Companies that rely heavily on a single vendor’s AI system may be at risk if that vendor goes out of business or is unable to continue providing the system. This can lead to disruptions in operations and even to the failure of the company. 

“This isn’t an AI problem, it’s a responsibility problem. The responsibility for security cannot be delegated along with the responsibility for doing when AI is put in charge of a task,” Jamie Boote, Associate Principal Consultant at the Synopsys Software Integrity Group, told The Cyber Express.

“Delegating to AI will shift ownership from experienced employees who have experienced more edge cases than would have been trained into an AI. The risk in this situation would be akin to giving responsibility to a human employee too inexperienced to make the right call or account for risks,” he added.

To mitigate these risks, businesses must diversify their AI vendors and ensure that they have the ability to switch to a different vendor if necessary. This can be achieved by: 

Using multiple vendors: Instead of relying on a single vendor, businesses can use multiple vendors to provide different parts of their AI system. This can reduce the risk of vendor lock-in and provide a level of redundancy in case one vendor is unable to continue providing the system (see the sketch after this list).

Building in-house capabilities: Companies can also invest in building their own AI capabilities in-house. This can provide them with more flexibility and control over their AI systems and reduce their dependence on vendors. 

Having a backup plan: Businesses should also have a backup plan in case their vendor is unable to continue providing the system. This could include having multiple vendors as a backup or having the capability to switch to a different system quickly. 
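One way to keep those options open is to hide every vendor behind a common interface and fail over between providers, as in the sketch below. The provider classes are hypothetical placeholders rather than real vendor SDKs; each real adapter would wrap one vendor's API behind the same method.

```python
from typing import Protocol


class CompletionProvider(Protocol):
    """Minimal interface every AI vendor adapter must implement."""
    def complete(self, prompt: str) -> str: ...


class PrimaryVendorClient:
    """Hypothetical adapter wrapping vendor A's API."""
    def complete(self, prompt: str) -> str:
        raise NotImplementedError("call vendor A's API here")


class BackupVendorClient:
    """Hypothetical adapter wrapping vendor B's API."""
    def complete(self, prompt: str) -> str:
        raise NotImplementedError("call vendor B's API here")


def complete_with_failover(prompt: str, providers: list) -> str:
    """Try each configured provider in order, so no single vendor is a hard dependency."""
    last_error = None
    for provider in providers:
        try:
            return provider.complete(prompt)
        except Exception as exc:  # in practice, catch vendor-specific errors
            last_error = exc
    raise RuntimeError("all configured AI vendors failed") from last_error
```

Because the rest of the application depends only on the shared interface, swapping or adding a vendor means writing one new adapter rather than rewriting the system.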

Lack of transparency 

AI systems have traditionally been termed “black boxes” because it is difficult or impossible for businesses to understand how the system arrived at a particular decision.

As AI systems become more sophisticated, they also become more complex and opaque, making it difficult to understand how they arrive at certain decisions. This lack of transparency can lead to a number of risks for businesses.  

One of the main risks associated with AI is the lack of transparency in the decision-making process. With many AI systems, particularly those using deep learning techniques, it is difficult or impossible for humans to understand how the system arrived at a particular decision.

Wrong decisions exacerbate the situation, notes the LSE report by researchers Terence Tse and Sardor Karimov.

“Many organizations, especially public ones, have no other choice than to get the technology right the first time. Any purchase decision must be turned into a future success. This is because failure to get the technology to deliver what was promised is likely to be construed as managerial incompetence or worse, a misuse of public funds,” said the report.

Under public scrutiny, any outcome short of the anticipated results being fully met would potentially lead to reputational if not legal damages, they note. Financial services companies, for instance, can also face the same decision-making risk.  

“Take the example of a bank that is contemplating using AI to cut the costs related to due diligence and meet ever-tightening compliance regulations. It can be imagined that if the AI technology does not work properly, the bank may end up facing costly financial consequences in the form of penalties, compensation, and remedial actions,” the report said. 

The black-box situation already exists in the present systems and services, KnowBe4’s Grimes points out. 

“Do you know Google’s search algorithms? Do you know how YouTube recommends new videos? No. Unless you wrote or reviewed the code or algorithm involved in the software, service, or device you are using, you’re already living with lots of black boxes,” he said.

“It’s interesting that AI is bringing some of the most important issues, like transparency and bias, to the forefront when they are the exact same issues inherent in everything we use every day.” 

Regulation and legal risks 

With the rapid development of AI, laws and regulations related to the technology are still evolving. Regulation and legal risks associated with AI refer to the potential liabilities and legal consequences that businesses may face when implementing AI technology. These risks can arise from a variety of sources, including: 

Compliance with laws and regulations: As AI becomes more prevalent, governments and regulators are starting to create laws and regulations that govern the use of the technology. Businesses may be subject to new regulations or laws related to data privacy, data security, and bias in AI systems. Failure to comply with these laws and regulations can result in legal and financial penalties. 

Liability for harms caused by AI systems: Businesses may be held liable for harms caused by their AI systems. For example, if an AI system makes a mistake that results in financial loss or harm to an individual, the business may be held liable. 

Intellectual property disputes: Businesses may also face legal disputes related to intellectual property when developing and using AI systems. For example, disputes may arise over the ownership of the data used to train AI systems or over the ownership of the AI system itself. 

Human rights violations: AI systems have the potential to violate human rights, such as freedom of expression and privacy. Businesses may face legal consequences if their AI systems are found to be in violation of human rights laws. 

Ethical issues: As AI systems become more sophisticated and integrated into our daily lives, ethical issues also arise, for example around autonomous weapons, automated decision-making systems, and the use of surveillance technology.

“We cannot guarantee the system will not discriminate against certain groups of people. However, the decision to use a black-box system is a human decision. We should not use such systems where there is risk to fundamental rights and rule of law, where we might undermine human dignity and autonomy,” said Hickok.

“Therefore there should be controls in place for certain use cases, especially for AI systems used by public agencies, which only allow for explainable systems.”

The root of all these risks lies in the fact that we are yet to teach this technology what we want, in a regulated fashion, noted Boris Cipot of Synopsys Software Integrity Group. 

“ChatGPT would be a great tool for a pen tester who needs a base script for a certain task, which he can then build on and make his job faster. The job this pen tester chooses to undertake, however, could be hacking or breaking into systems. How will ChatGPT be able to decide who it should help?” he said. 

“This lack of experience – the naive side of AI – still needs to develop in order to be, not just powerful, but safe.” 


