
AI success for business depends on ethics

By Dr Ian Peters MBE, Director of the Institute of Business Ethics, which has published guidance on the ethical use of artificial intelligence.

“Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks.”

That quote from Professor Stephen Hawking neatly encapsulates the discussion and debate over Artificial Intelligence. It can be a force for good and for technological advancement but comes with inherent risks that we don’t and can’t fully understand yet.

Such discussions often gravitate towards the potential risks. It was Alan Turing who said we should expect the machines to take control, and more recently Elon Musk who warned: “The risk of something seriously dangerous happening is in the five-year timeframe. Ten years at most.” So, inevitably, debate focuses on the long-term consequences of developing AI. Can we harness its potential, or are we paving the way for our own destruction?

It is a vital discussion, one that will potentially frame all our lives in the years to come. But in looking to the horizon, there is the risk we ignore the here and now. Businesses across the country and around the world are already employing AI tools, or considering how they can be incorporated into ways of working to drive cost-effectiveness, deliver efficiencies and boost productivity.

Its use raises important ethical issues – so how can companies adopting or considering the adoption of AI ensure it is deployed responsibly, appropriately and ethically?

It’s an issue the Institute of Business Ethics takes extremely seriously, recognising the transformative impact AI could have on the workplace – and on the workforce. Working together with businesses, the Institute has developed five principles that should underpin any company’s adoption of AI. These focus on: Purpose, Accountability, Transparency, Fairness and Safety.

The use of AI should always be clearly defined and aligned with an organisation’s values and purpose. There must be accountability: humans should always be in charge, and companies should designate who is responsible for oversight and control. The use of artificial intelligence must be transparent. Public trust in AI tools is essential, and any organisation should be able to explain when and how AI is used to aid a decision, especially one that affects people.

Fairness is key. AI has incredible potential, but its use should not undermine the rights of individuals, whether staff or customers. This means businesses should continually audit AI’s use to identify new biases and ensure diversity and inclusion are built into it through training, data input and deployment.

Finally, it goes without saying that safety should be paramount; companies must protect AI systems against data breaches – by undertaking pre-deployment testing, identifying risks and continually monitoring how AI is used and its outputs.

By following these core principles, companies can ensure AI is used ethically, protecting and reassuring both staff and customers. There are straightforward and important steps businesses should take to embed these principles.

Any company employing artificial intelligence should have a designated AI lead, appointed by the chief executive and with direct access to the board of directors. Having someone in that position of responsibility with the ability to rapidly elevate concerns helps companies mitigate reputational, legal and regulatory risks.

Whilst there should be a dedicated AI lead, businesses should also create an ethics committee specifically to oversee the use of artificial intelligence. Ensuring ethical practice cannot and should not be the responsibility of an individual alone; it is a shared burden. An AI ethics committee brings together all the requisite experience, spanning legal, ethical, engineering, and business strategy and development expertise, and public representatives should also be considered, to help mitigate risk and ensure responsible practice. It also helps embed the reality that the use of AI must be an all-company issue.

Companies should consider implementing an AI ethics framework, relating it to the business’s values and code of ethics. This can be accompanied by ethics toolkits that guide responsible implementation and maintain regular reviews of working practice, risk assessment and auditing.

Communication is key. If the use of AI is to be an all-company issue, then articulating how it will be incorporated into working practices, and what that means not only for employees but for supply chains, is pivotal. That requires comprehensive training, and assurance that partners are following similar principles.

Finally, driven by the AI lead and the ethics committee, the use of artificial intelligence must comply with legal and regulatory requirements.

We will see more and more companies adopt AI tools in the coming months and years. The drive for innovation should be welcomed but cannot be pursued blindly. Alongside the adoption of artificial intelligence, we may also see companies fail if they do not introduce these tools in a responsible and ethical way, explaining their use, benefits and risks to customers and staff, and democratising the process.

But by adopting sensible principles and practices, businesses can ensure they experience the benefits of artificial intelligence while mitigating the risk of “something seriously dangerous happening”, as Elon Musk predicted.



This post first appeared on Technologydispatch.
