
Freaky ChatGPT Fails That Caught Our Eyes!

Yesterday, we ran a piece listing the coolest things users can do with ChatGPT, a conversational model built on the GPT-3 API. While it answers in a “human-adjacent” manner, users have identified several flaws.

In 2021, Gary Marcus tweeted, “Let us invent a new breed of AI systems that mix awareness of the past with values that represent the future we aspire to. Our focus should be on building AI that can represent and reason about values, rather than simply perpetuating past data.” That capability is lacking even in contemporary AI/ML models.

The base of ChatGPT, GPT-3, is two and a half years old. The field is progressing every week, yet there are hardly any mainstream applications (Copilot being the notable exception). Even today, the models spectacularly fail at everything from three-digit multiplication to ASCII art. San Francisco-based OpenAI has been upfront about the defects, including the model’s potential to “produce harmful instructions or biased content”, and is still fine-tuning ChatGPT.

Here are six bizarre ChatGPT fails that caught our eyes!

The problem of bias

The ethical problems with AI are immense, but perhaps the most notable is bias. Bias in training data is an ongoing challenge in LLMs that researchers have been trying to address. For example, ChatGPT, trending on Twitter, has reportedly written Python programmes that judge a person’s capability based on their race, gender, and physical traits, in a manner that is plainly discriminatory.

Not so logical after all

The chatbot lacks logical reasoning, and its ability to understand context is limited. As a result, the model fails to answer questions that most humans easily can.

Moreover, it lacks common knowledge.

Bad at math

ChatGPT should not be trusted with math or anything remotely related to it. It fails to explain mathematical theorems and keeps repeating itself, going in circles. The model can lie to you with as much confidence as it tells the truth: ask it for the square root of 423894 and it will confidently give you the wrong answer.
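
The arithmetic itself is trivial to verify outside the model. As a rough sketch (the claimed answers below are made up for illustration, not taken from an actual ChatGPT transcript), a few lines of Python are enough to check whether a model’s square root is even close:

```python
import math

def check_sqrt_claim(n: int, claimed: float, tolerance: float = 0.01) -> bool:
    """Return True if `claimed` is within `tolerance` of the real square root of n."""
    return abs(math.sqrt(n) - claimed) <= tolerance

print(math.sqrt(423894))                 # ~651.0714 -- the actual value
print(check_sqrt_claim(423894, 648.0))   # False: a confidently wrong, made-up answer
print(check_sqrt_claim(423894, 651.07))  # True: close enough to the real root
```

The point is less about this particular number and more that the model has no internal calculator; without an external check like this, a wrong answer and a right one read exactly the same.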

Its moral compass is broken

The model is a moral relativist. ChatGPT’s lack of context could prove dangerously problematic when dealing with sensitive issues, like sexual assault.

Convincing but wrong 

The internet is excited about ChatGPT, but the danger is that you can only tell it’s wrong if you already know the answer. When asked some basic information security questions, it gave answers that sounded plausible but were, in fact, nonsense.

This is called “hallucination”: the system can start spewing nonsense convincingly at any point, and as a user, you are never sure whether any particular detail it outputs is correct.

It’s ‘harmful’ to any other Q&A website’s business model

The prime issue is that while the answers produced by ChatGPT have a high probability of being incorrect, they look like they might be good and are very easy to produce, Stack Overflow said in a post.

As a result, the company recently imposed a temporary ban, as ChatGPT answers are “substantially harmful” both to the site and to users looking for correct solutions.



