
Generative AI Ethics

By David Sweenor, Towards Data Science

With all the hubbub surrounding generative artificial intelligence (AI), there are an increasing number of unanswered questions about how to implement this transformative technology responsibly. This blog will review the European Union (EU) AI ethics guidelines and discuss key considerations for implementing an AI ethics framework when large language models (LLMs) are used.

On 8th April 2019, the European Union released a framework for the ethical and responsible use of artificial intelligence (AI). The report defines three guiding principles for building trustworthy AI:

● Lawful: complying with all applicable laws and regulations
● Ethical: adhering to ethical principles and values
● Robust: sound from both a technical and a social perspective

For multinational corporations, this raises an interesting question of how they should apply this framework across geopolitical boundaries, since what is considered lawful and ethical in one region of the world may not be in another. Many companies take the most stringent regulations and apply them unilaterally across all geographies. However, a “one-size-fits-most” approach may not be appropriate or acceptable.

The EU’s framework can be seen below in Figure 1.1.

Figure 1.1: AI Ethics Framework from the European Union

From these three foundational principles, four ethical principles and seven key requirements follow. The ethical principles include:

● Respect for human autonomy
● Prevention of harm
● Fairness
● Explicability

This leads us to the seven requirements:

● Human agency and oversight
● Technical robustness and safety
● Privacy and data governance
● Transparency
● Diversity, non-discrimination and fairness
● Societal and environmental well-being
● Accountability

While these principles seem intuitive on the surface, there is “substantive divergence in relation to how these principles are interpreted; why they are deemed important; what issue, domain or actors they pertain to; and how they should be implemented.”[7]

Now that we understand the EU AI ethics guidelines, let’s delve into unique considerations for LLMs. In a previous blog titled GenAIOps: Evolving the MLOps Framework, I outlined three key capabilities of generative AI and LLMs, which include:

● Content Generation: Generative AI can generate content of human-like quality, including text, audio, images/video, and even software code. Note that generated content may not be factually accurate; the onus is on the end user to make sure the generated content is true and not misleading, and developers need to make sure that generated code is free of bugs and viruses.

● Content Summarization and Personalization: The ability to sift through large corpora of documents and quickly summarize the content is a strength of generative AI. In addition to quickly creating summaries of documents, emails, and Slack messages, generative AI can personalize these summaries for specific individuals or personas.

● Content Discovery and Q&A: Many organizations have a significant amount of content and data scattered across different data silos. Many data and analytics vendors are using LLMs and generative AI to automatically discover and connect disparate sources. End users can then query this data in plain language to comprehend key points and drill down for more detail.

Given these various capabilities, what factors do we need to consider when creating an AI ethics framework?

Human agency and oversight

Since generative AI can essentially produce content autonomously, there’s a risk that human involvement and oversight may be reduced. If you think about it, how much email spam do you receive daily? Marketing teams create these emails, load them into a marketing automation system, and push the “Go” button. These run on auto-pilot and are often forgotten, running in perpetuity.

Given that generative AI can produce text, images, audio, video, and software code at breakneck speed, what steps can we put in place to make sure there is a human-in-the-loop, especially in critical applications? If we’re automating healthcare advice, legal advice, and other more “sensitive” types of content, organizations need to think critically about how they can keep their agency and oversight over these systems. Companies need to put safeguards in place to ensure that the decisions being made align with human values and intentions.
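To make the idea concrete, here is a minimal Python sketch of one such safeguard, not a prescription: the generate_draft() function is a hypothetical stand-in for whatever LLM call you actually use, and the gate simply refuses to publish content on sensitive topics until a named human reviewer signs off.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Topics that must never be published without human sign-off.
SENSITIVE_TOPICS = {"healthcare", "legal", "financial"}

@dataclass
class Draft:
    topic: str
    text: str
    approved_by: str | None = None  # identity of the human reviewer, if any
    created_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

def generate_draft(topic: str) -> Draft:
    # Hypothetical stand-in for your actual LLM call.
    return Draft(topic=topic, text=f"<model-generated copy about {topic}>")

def publish(draft: Draft) -> None:
    # The gate: sensitive content fails closed until a human approves it.
    if draft.topic in SENSITIVE_TOPICS and draft.approved_by is None:
        raise PermissionError(
            f"Draft on '{draft.topic}' requires human review before publishing."
        )
    print(f"Published ({draft.approved_by or 'auto'}): {draft.text}")

draft = generate_draft("healthcare")
try:
    publish(draft)  # blocked: no human has reviewed it yet
except PermissionError as err:
    print(err)

draft.approved_by = "reviewer@example.com"  # a human verified the content
publish(draft)  # now allowed
```

The specifics will vary with your stack, but the design choice is the important part: the system fails closed rather than open, so forgetting the review step cannot silently ship unvetted content.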
Technical robustness and safety

It is well known that generative AI models can create content that is unexpected or even harmful. Companies need to rigorously test and validate their generative AI models to make sure they are reliable and safe. If the generated content is erroneous, there needs to be a mechanism in place to handle and correct that output. The internet is full of horrible and divisive content, and some companies have hired content moderators to review suspicious content, but this seems like an impossible task. Just recently, it was reported that this work can be quite a detriment to one’s mental health (AP News — Facebook content moderators in Kenya call the work ‘torture.’ Their lawsuit may ripple worldwide).

Privacy and data governance

Generative AI models were trained on data gathered from across the internet, and many of the LLM makers do not disclose the fine details of what data was used to train their models. The models could have been trained on sensitive or private data that should not be publicly available. Just look at Samsung, which inadvertently leaked proprietary data (TechCrunch — Samsung bans use of generative AI tools like ChatGPT after April internal data leak). What if generative AI generates outputs that include or resemble real, private data? According to Bloomberg Law, OpenAI was recently served a defamation lawsuit over a ChatGPT hallucination.

Companies need a detailed understanding of the sources of data used to train generative AI models. As you fine-tune and adapt your models using your own data, it is within your power to remove or anonymize that data. However, you could still be at risk if the foundation model provider used data that was inappropriate for model training. If this is the case, who is liable?

Transparency

By their nature, “black-box” models are hard to interpret. In fact, many of these LLMs have billions of parameters, so I would suggest that they are not interpretable. Companies should strive for transparency and create documentation on how the model works, its limitations, its risks, and the data used to train it. Again, this is easier said than done.

Diversity, non-discrimination and fairness

Related to the above, if not properly trained and accounted for, generative AI can produce biased or discriminatory output. Companies can do their best to ensure that data is diverse and representative, but this is a tall order given that many of the LLM providers do not disclose what data was used for training. In addition to taking all possible precautions to understand the training data, its risks, and its limitations, companies need to put in place a monitoring system to detect harmful content, along with a mechanism to flag it, prevent its distribution, and correct it as necessary.
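What might such a monitoring hook look like? The sketch below is illustrative only: the keyword blocklist and scoring heuristic are crude placeholders for a real moderation classifier or vendor moderation endpoint, but the flag-log-withhold pattern is the part that matters.

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("genai.monitor")

# Placeholder screen: a production system would call a trained moderation
# classifier or a vendor moderation endpoint instead of a keyword list.
BLOCKLIST = {"slur_example", "self_harm_instructions"}

def moderation_score(text: str) -> float:
    # Toy heuristic: fraction of blocklisted terms that appear in the text.
    hits = sum(term in text.lower() for term in BLOCKLIST)
    return hits / len(BLOCKLIST)

def review_output(text: str, threshold: float = 0.0) -> str | None:
    """Withhold, flag, and log model output that fails the screen."""
    score = moderation_score(text)
    if score > threshold:
        # Flagged output is logged for human correction, never distributed.
        log.warning("Blocked generation (score=%.2f): %r", score, text[:80])
        return None
    return text

print(review_output("Here is a summary of your quarterly report."))  # passes
print(review_output("... slur_example ..."))                         # blocked -> None
```

Whatever screen you use, blocked output should be routed to a human queue for correction rather than silently discarded, so the monitoring system also generates the evidence needed to improve the model.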
Societal and environmental well-being

For companies with ESG initiatives, it is important to note that training LLMs consumes significant amounts of compute — meaning they use quite a bit of electricity. As you begin to deploy generative AI capabilities, organizations need to be mindful of the environmental footprint and seek ways to reduce it. Several researchers are looking at ways to reduce model size and accelerate the training process. As this evolves, companies should at least account for the environmental impact in their annual reports.

Accountability

This will be an active area of litigation for several years to come. Who is accountable if generative AI produces harmful or misleading content? Who is legally responsible? Several lawsuits are pending in the U.S. court system that will set the stage for future litigation. In addition to harmful content, what if your LLM produces a derivative work? Was your LLM trained on copyrighted or legally protected material? If it produces a data derivative, how will the courts address this? As companies implement generative AI capabilities, there should be controls and feedback mechanisms in place so that a course of action can be taken to remedy the situation.
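One concrete form such a control can take is an audit trail: record every generation with its prompt and model version, and give users a way to attach feedback, so harmful output can be traced back and remedied. Below is a minimal sketch under obvious simplifying assumptions (an in-memory list standing in for durable storage such as a database).

```python
import json
import uuid
from datetime import datetime, timezone

AUDIT_LOG: list[dict] = []  # stand-in for durable storage (database, object store)

def record_generation(prompt: str, output: str, model_version: str) -> str:
    """Log a generation so it can later be traced and remediated."""
    entry = {
        "id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "prompt": prompt,
        "output": output,
        "feedback": [],  # user reports attach here
    }
    AUDIT_LOG.append(entry)
    return entry["id"]

def report_issue(generation_id: str, reason: str) -> None:
    """Feedback mechanism: flag a specific generation for remediation."""
    for entry in AUDIT_LOG:
        if entry["id"] == generation_id:
            entry["feedback"].append(reason)
            return
    raise KeyError(f"Unknown generation id: {generation_id}")

gen_id = record_generation(
    prompt="Summarize our refund policy",
    output="<model output>",
    model_version="example-llm-v1.3",  # hypothetical version label
)
report_issue(gen_id, "Output misstates the statutory refund window.")
print(json.dumps(AUDIT_LOG, indent=2))
```

An audit trail like this does not settle the legal questions above, but it gives a company the provenance it would need to answer them: which model, which prompt, which output, and who flagged it.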
Generative AI holds immense promise in revolutionizing how things get done, but its rapid evolution brings forth a myriad of ethical dilemmas. As companies venture into the realm of generative AI, it’s paramount to navigate its implementation with a deep understanding of established ethical guidelines. By doing so, organizations can harness the transformative power of AI while upholding ethical standards and safeguarding against potential pitfalls and harms.

References

[1] European Commission. 2021. “Ethics Guidelines for Trustworthy AI | Shaping Europe’s Digital Future.” Digital-Strategy.ec.europa.eu. March 8, 2021. https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai.
[2] European Commission. 2021. “Ethics Guidelines for Trustworthy AI | Shaping Europe’s Digital Future.” Digital-Strategy.ec.europa.eu. March 8, 2021. https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai.
[3] European Commission. 2021. “Ethics Guidelines for Trustworthy AI | Shaping Europe’s Digital Future.” Digital-Strategy.ec.europa.eu. March 8, 2021. https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai.
[4] European Commission. 2021. “Ethics Guidelines for Trustworthy AI | Shaping Europe’s Digital Future.” Digital-Strategy.ec.europa.eu. March 8, 2021. https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai.
[5] Verma, Sahil, and Julia Rubin. 2018. “Fairness Definitions Explained.” Proceedings of the International Workshop on Software Fairness — FairWare ’18. https://doi.org/10.1145/3194770.3194776.
[6] European Commission. 2021. “Ethics Guidelines for Trustworthy AI | Shaping Europe’s Digital Future.” Digital-Strategy.ec.europa.eu. March 8, 2021. https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai.
[7] Jobin, Anna, Marcello Ienca, and Effy Vayena. 2019. “The Global Landscape of AI Ethics Guidelines.” Nature Machine Intelligence 1 (9): 389–99. https://doi.org/10.1038/s42256-019-0088-2.

About the author: David Sweenor, founder of TinyTechGuides, is an international speaker and acclaimed author with several patents. He is a specialist in AI, ML, and data science.


