
Artificially intelligent sociopaths: The dark reality of malicious AI

Arvind Sanjeev, UX Collective

Rana Ayyub, an Indian investigative journalist, is referred to by some as "The Most Abused Woman in India." In April 2018, she reported on a horrific crime in the country, one whose accused were supported by the leading political party. Soon after, her social media feed filled with reposts of a porn video that used AI to swap her face into it. A fan page of the leader of that same political party shared it. As part of her efforts to take the video down, she approached law enforcement and tried to prove on social media that the video was fake, but she was instead met with criticism, judgment and ridicule. The incident changed her life: an outspoken, opinionated journalist became a self-censored, anxious reporter. In a developing country like India, where society is riven by misogyny, victim blaming and religion-powered politics, malicious AI can exacerbate these problems at exponential scale, ripping apart the very fabric of society like a cultural nuclear bomb.

Powerful language models that generate and spread fake news, biased hiring algorithms, racist law-making tools and the like can tear society apart from the inside. Through this survey, I aim to provide a critical lens that students, designers and AI developers can use while designing AI-powered systems.

Current AIs do not understand human values

Imagine you have a house-cleaning robot that you instruct to keep your house as clean as possible. This seems simple enough. However, the AI might interpret the goal in ways you didn't intend. For example, it might decide that the best way to keep your house clean is to never let anyone in, including you, because humans constantly shed skin cells and track dirt in on their shoes. AI systems that do not understand human values, ethical principles or even common sense will have a hard time matching their actions to human intentions.

This is popularly called "the alignment problem." One example we see today is YouTube, whose recommendation algorithm played a significant role in radicalizing the Christchurch shooter. YouTube has a long history of tuning its reward functions to maximize engagement. Early on, the platform rewarded clicks: the more clicks your videos got, the more you earned, which led to clickbait titles and thumbnails with little actual content behind them. In 2012, YouTube switched its reward function to watch time instead, which led the recommendation algorithm to surface disturbing content you can't take your eyes away from. In just a few clicks, you can land in echo chambers of conspiracy theories, alt-right videos and the like. The fundamental business models behind the top social media platforms are to blame for algorithmic radicalization; the toy sketch at the end of this section shows how simply changing the reward changes what gets amplified.

LLMs (large language models) in their current state are "bullshit machines," as Gary Marcus, a professor of psychology and neural science at NYU, puts it. They are glorified parrots: aesthetic instruments rather than epistemological ones. Yet many people confuse mimicry with sentience. LLMs regurgitate text from the internet without genuinely understanding how the world works. The current generation of AIs are sociopaths in the sense that they are not built to understand human values, which leads to many unintended consequences.
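To make the reward-function point concrete, here is a minimal, purely hypothetical sketch in Python. The video catalog, click probabilities and watch times are invented numbers, and the ranking logic is a caricature, not YouTube's actual system; it only illustrates how swapping the objective a recommender optimizes changes which content rises to the top.

```python
# Toy illustration (not any platform's real system): how the choice of reward
# function changes what a ranking algorithm amplifies. All numbers are invented.

videos = [
    # (title, click_probability, expected_watch_minutes)
    ("Clickbait thumbnail, thin content", 0.30, 2.0),
    ("Balanced explainer",                0.10, 8.0),
    ("Outrage/conspiracy deep-dive",      0.12, 35.0),
]

def rank(catalog, reward):
    """Sort videos by a given reward function, highest first."""
    return sorted(catalog, key=reward, reverse=True)

# Reward 1: maximize clicks -> clickbait wins.
by_clicks = rank(videos, lambda v: v[1])

# Reward 2: maximize expected watch time (click probability * minutes watched)
# -> the content you "can't look away from" wins.
by_watch_time = rank(videos, lambda v: v[1] * v[2])

print("Optimizing clicks:    ", [v[0] for v in by_clicks])
print("Optimizing watch time:", [v[0] for v in by_watch_time])
```

Under the click reward the clickbait entry ranks first; under the watch-time reward the long, outrage-driven video dominates. That is the alignment problem in miniature: the system faithfully optimizes the objective it was given, not the outcome we actually wanted.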
Examples of malicious AI and resources for mitigating it

I started exploring this topic in 2021 as part of a class on AI that I taught at the Copenhagen Institute of Interaction Design: Co-creating with the machine. Since then, it has taken me down a deep rabbit hole in which I have been steadily collecting stories of malicious AI along with resources on how to mitigate it. The scope of the topic is so enormous that I broke it down into a four-part series. I am also sharing a handbook that boils down all the stories referenced in the series, as well as resources, books and frameworks for mitigation: The handbook of AI's unintended consequences.

Do not blindly trust the machine

"Technology is in no sense an instrument of man's making or in his control" (Martin Heidegger, 1977)

On August 31, 1983, Korean Airlines Flight 007 took off from Alaska, bound for Seoul. As part of their routine, the pilots handed control to the autopilot after receiving the heading from air traffic control. Set correctly, the autopilot was supposed to steer the plane through a series of pre-set waypoints over the Pacific to Seoul. However, the aircraft gradually drifted off course, straying from its assigned route until it reached Soviet airspace. Investigators believed clues such as poor radio reception and increasing time between beacons could have led the pilots to doubt the autopilot, but they continued to trust the system. Shortly after, missiles from a Sukhoi Su-15 took down the aircraft that had entered Soviet airspace (source: New Dark Age, James Bridle).

Automation bias made the flight crew trust the autopilot over their own experience and observations. Information coming from an intelligent system seems clear and direct, making it easy to trust, compared with our hazy cognition that has to work through ambiguity in complex situations. As much of a tech optimist as I am, developing the ability to question AI and see through automation bias is essential. Only when we truly understand these disruptive new tools can we program them with values that point towards a positive future.

As I mentioned at the beginning, Rana's story is just one among many lives affected by malicious AI. As a society, it is crucial that we understand these tools and their strengths and weaknesses, so we can question the new realities that AIs will generate. And as designers, engineers and students, we can take this Hippocratic oath together for designing AI responsibly:

Oath of Digital Non-Harm, by Virginia Eubanks

I swear to fulfill, to the best of my ability, the following covenant:

I will respect all people for their integrity and wisdom, understanding that they are experts in their own lives, and will gladly share with them all the benefits of my knowledge.

I will use my skills and resources to create bridges for human potential, not barriers. I will create tools that remove obstacles between resources and the people who need them.

I will not use my technical knowledge to compound the disadvantage created by historic patterns of racism, classism, able-ism, sexism, homophobia, xenophobia, transphobia, religious intolerance, and other forms of oppression.

I will design with history in mind. To ignore a four-century-long pattern of punishing the poor is to be complicit in the "unintended" but terribly predictable consequences that arise when equity and good intentions are assumed as initial conditions.

I will integrate systems for the needs of people, not data. I will choose system integration as a mechanism to attain human needs, not to facilitate ubiquitous surveillance.
I will not collect data for data's sake, nor keep it just because I can.

When informed consent and design convenience come into conflict, informed consent will always prevail.

I will design no data-based system that overturns an established legal right of the poor.

I will remember that the technologies I design are not aimed at data points, probabilities, or patterns, but at human beings.

Arvind Sanjeev is a design technologist and artist: https://arvindsanjeev.com/


