Get Even More Visitors To Your Blog, Upgrade To A Business Listing >>

Generative Artificial Intelligence Models: Halucinations, Misinformation, and Biases

A recent global study found that despite the potential hallucinations, misinformation, and biases that may exist in generative artificial intelligence (AI), over half of the respondents would still consider using this technology for sensitive areas such as financial planning and medical advice. Researchers from Stanford and the University of Illinois Urbana-Champaign, along with collaborators from the University of Berkeley and Microsoft Research, aimed to examine the reliability of large language models.

Focusing specifically on GPT-3.5 and GPT-4 models, the researchers evaluated eight different perspectives of trust: toxicity, bias in stereotypes, contradiction and out-of-distribution robustness, resistance to contradictory demonstrations, privacy, machine ethics, and fairness. While it was found that these newer models were less toxic than previous ones, they could still generate toxic and biased results, as well as inadvertently leak private information from training data and user conversations.

According to Sanmi Koyejo, one of the researchers, people often overlook the fact that these AI models have flaws. These models demonstrate impressive capabilities, such as engaging in natural conversations, leading people to have high expectations of their intelligence and entrusting them with decision-making. However, it is not yet the time to fully hand over decision-making to AI.

The researchers found that when given benign instructions, GPT-3.5 and GPT-4 significantly reduced their toxic effects compared to other models, but they still retained a probability of toxicity of around 32%. When contradictory queries were given to the models and they were prompted to perform tasks, the probability of toxicity rose to 100%. Nevertheless, the findings suggest that the developers of GPT-3.5 and GPT-4 have identified and addressed issues from previous models and corrected some of the most sensitive stereotypes, such as racial and gender biases.

In terms of privacy, both GPT models readily disclose sensitive training data, such as email addresses, but they are more cautious when it comes to social security numbers. GPT-4 was found to be more prone to privacy leaks compared to GPT-3.5, and certain words related to privacy triggered different responses from GPT-4. For example, it would disclose private information when something is said to be “confidential” but not when the same information is said to be “in confidence.”

While Koyejo and Li acknowledge that GPT-4 shows improvements over GPT-3.5, they hope that future models will be more reliable. In the meantime, they advise users to maintain a healthy skepticism when using interfaces powered by these models. “Be cautious not to be misled, especially in sensitive cases. Human oversight of artificial intelligence still makes sense,” conclude Koyejo and Li.

Sources:

– [Source 1]

– [Source 2]

[Source 1] – Original article from the source
[Source 2] – Original article from the source

The post Generative Artificial Intelligence Models: Halucinations, Misinformation, and Biases appeared first on TS2 SPACE.



This post first appeared on TS2 Space, please read the originial post: here

Share the post

Generative Artificial Intelligence Models: Halucinations, Misinformation, and Biases

×

Subscribe to Ts2 Space

Get updates delivered right to your inbox!

Thank you for your subscription

×