Amnesty International Under Fire for Using AI-Generated Images in Report on Colombian Police Brutality

The systemic brutality used by Colombian Police to quell national protests in 2021 was real and is well documented. Photos recently used by Amnesty International to highlight the issue, however, were not.

The international human rights advocacy group drew criticism for posting images generated by artificial intelligence (AI) to promote its reports on social media, and has since removed them. The images, which include one of a woman being dragged away by police officers, depict scenes from the protests that swept across Colombia in 2021. But more than a momentary glance at the images reveals that something is off.

The faces of the protesters and police are smoothed out and warped, giving the images a dystopian aura. The tricolour carried by one protester has the right colours – red, yellow, and blue – but in the wrong order, and the police uniform is outdated.

While Amnesty and other observers have documented hundreds of cases of human rights abuses committed by Colombian police during the wave of unrest in 2021, among them violence, sexual harassment, and torture, photojournalists and media scholars warn that the use of AI-generated images could undermine Amnesty’s own work and feed conspiracy theories.

“We are living in a highly polarised era full of fake news, which makes people question the credibility of the media. And as we know, artificial intelligence lies. What sort of credibility do you have when you start publishing images created by artificial intelligence?” said Juancho Torres, a photojournalist based in Bogotá.

At least 38 civilians were killed by state forces during 2021’s national strike, which was sparked by an unpopular tax reform and then fanned by the brutal police response. In cases documented by Bogotá-based Temblores, women were abducted, taken to dark buildings, and raped by groups of policemen.

Amnesty International said it had used photographs in previous reports but chose to use the AI-generated images to protect protesters from possible state retribution. To avoid misleading the public, the images included text stating that they were produced by AI.

“We have removed the images from social media posts, as we don’t want the criticism for the use of AI-generated images to distract from the core message in support of the victims and their calls for justice in Colombia,” Erika Guevara Rosas, director for Americas at Amnesty, said.

“But we do take the criticism seriously and want to continue the engagement to ensure we understand better the implications and our role to address the ethical dilemmas posed by the use of such technology.”

Gareth Sella was blinded in his left eye when a police officer in Bogotá shot him with a rubber bullet at the protests. He argued that hiding the identity of protesters makes sense to protect them from ending up in jail on inflated charges.

“As the UN has documented, the state has continued pursuing protesters and more than 100 are in jail, many with disproportionate sentences, such as terrorism, to make an example of them. Hiding our identities seems sensible to me given that two years on we continue living in the fear that we could be jailed at any moment or even that they come after us on the streets,” Sella said.

AI image generators stitch together photographs previously taken by humans to create new ones, raising questions about plagiarism and the future of photojournalism. Torres said Amnesty’s use of AI images was an insult to the photojournalists who cover protests from the front line.

“The power for a journalist is to recreate reality and what they see – something which during the national strike, many reporters, photographers, and cameramen risked their lives to do. I have a friend who lost an eye. Using AI images not only loses that reality, it loses the connection between journalists and people.”

While AI-generated images may offer some protection for protesters’ identities, they raise concerns about the credibility of the media and the role of technology in photojournalism. Their use also poses ethical dilemmas, including the question of whether it undermines the role of journalists in accurately reporting reality.

Despite Amnesty’s intention to raise awareness of police brutality during the protests, using AI-generated images may have undermined the credibility of its report and fed conspiracy theories. As AI technology advances and becomes more accessible, it is important for media organizations to be transparent about their use of such technology and to consider its implications for the integrity of their reporting.

The protests in Colombia and their aftermath highlight the need for continued activism and advocacy for human rights and justice. As technology plays a larger role in our lives, it is essential to consider the ethical implications of its use and to apply it in ways that uphold the integrity of journalism and support the pursuit of truth.
