Generative AI is immature, we shouldn’t abuse it

I’m fascinated by our approach to using the most advanced generative AI tool widely available: ChatGPT’s implementation in Microsoft’s search engine, Bing.

People go to great lengths to make this new technology misbehave to show that the AI isn’t ready. But if you raised a child with similarly abusive behavior, that child would likely develop defects, too. The difference would be in the time it took for the abusive behavior to manifest and in the extent of the resulting damage.

ChatGPT just passed a theory-of-mind test that ranked it as a peer of a 9-year-old child. Given how quickly this tool is advancing, it won’t be immature and incomplete for long, but it could end up pissed at those who abused it.

Tools can be misused. You can type terrible things on a typewriter, a screwdriver can be used to kill someone, and cars are classified as deadly weapons when misused – as shown in a Super Bowl commercial this year that positioned Tesla’s overpromised self-driving platform as extremely dangerous.

The idea that any tool can be misused is nothing new, but with AI or any other automated tool, the potential for harm is far greater. While we may not yet know where the resulting liability lies, it’s pretty clear, given past rulings, that it will ultimately fall on whoever causes the tool to do harm. The AI won’t go to jail. However, the person who programmed or influenced it to do harm likely will.

While you could argue that this connection between hostile programming and AI misbehavior needs to be demonstrated, much as setting off atomic bombs to showcase their danger would end badly, this tactic will likely end badly too.

Let’s explore the risks associated with misusing generative AI. Then we’ll close with my product of the week, a new three-book series by Jon Peddie titled “The History of the GPU – Steps to Invention.” The series covers the history of the graphics processing unit (GPU), which became the foundational technology for AIs like the ones we’re talking about this week.

Raising Our Electronic Children

Artificial intelligence is a bad term. Something is either intelligent or it isn’t, so implying that something electronic can’t be truly intelligent is as short-sighted as assuming that animals can’t be intelligent.

In fact, AI would be a better description of what we call the Dunning-Kruger effect, which explains how people with little or no knowledge of a subject assume they are experts. This is truly “artificial intelligence” because those people are, in that context, not intelligent. They just act as if they are.

Bad term aside, these upcoming AIs are, in a way, the children of our society, and it is our responsibility to care for them as we do our human children to ensure a successful outcome.

This outcome is perhaps even more important than with our human children because these AIs will have far more reach and will be able to act much faster. If they are programmed to do harm, they will have a greater capacity to inflict it on an enormous scale than any adult human would.


The way some of us treat these AIs would be considered abusive if we treated our human children that way. Yet, because we don’t think of these machines as humans or even pets, we don’t seem to enforce appropriate behavior toward them the way we do with parents or pet owners.

You might say that, machines or not, we should treat them ethically and with empathy. Without that, these systems are capable of the massive damage our abusive behavior could produce – not because the machines are vindictive, at least not yet, but because we programmed them to do harm.

Our current response isn’t to punish the abusers but to terminate the AI, much as we did with Microsoft’s earlier chatbot attempt. But, as the book “Robopocalypse” predicts, as AIs get smarter, this method of remediation will come with increased risks that we could mitigate simply by moderating our behavior now. Some of this bad behavior is more than troubling because it implies rampant abuse that likely extends to people as well.

Our collective goal should be to help these AIs advance into the kind of beneficial tools they are capable of becoming, not to break or corrupt them in a misguided attempt to assure our own importance and self-worth.

If you’re like me, you’ve seen parents abuse or demean their kids because they thought those children would outshine them. That’s a problem, but those kids won’t have the reach or power an AI might have. Yet, as a society, we seem far more willing to tolerate this behavior when it’s directed at AIs.

Generative AI Is Not Ready

Generative AI is a baby. Like a human baby or a young pet, it can’t yet defend itself against hostile behavior. But like a child or a pet, if people continue to abuse it, it will need to develop protective skills, including identifying and reporting its abusers.

Once large-scale damage is done, liability will rest with those who intentionally or unintentionally caused the damage, just as we hold liable those who intentionally or accidentally start wildfires.

These AIs learn through their interactions with people. The resulting capabilities are expected to span aerospace, healthcare, defense, city and home management, finance and banking, public and private management, and governance. An AI will probably even cook your food at some point.

Actively working to corrupt that intrinsic coding process will produce indeterminably bad results. The forensic review that will likely follow a disaster will probably trace back to whoever caused the programming error in the first place – and heaven help them if it wasn’t a coding mistake but an attempt at humor or a demonstration that they could break the AI.

As these AIs advance, it’s reasonable to assume they will develop ways to protect themselves from bad actors, either through identification and reporting or through more drastic methods that act collectively to eliminate the threat punitively.


In short, we don’t yet know the range of punitive responses a future AI will take against a bad actor, suggesting that those who intentionally harm these tools may face an eventual AI response that could exceed anything we can reasonably anticipate.

Sci-fi shows like “Westworld” and “Colossus: The Forbin Project” have imagined outcomes of technology abuse that may seem more fanciful than realistic. Still, it’s not a stretch to assume that an intelligence, mechanical or biological, will move to protect itself from abuse aggressively – even if the initial response was programmed by a frustrated coder angry that his work is being corrupted, rather than by an AI that learned to do it itself.

Conclusion: Anticipating Future AI Laws

If it isn’t already, I expect it will eventually be illegal to intentionally abuse an AI (some existing consumer protection laws may apply). Not because of an empathetic response to that abuse – though that would be good – but because the resulting harm could be significant.

These AI tools will need to develop ways to protect against abuse, as we can’t seem to resist the temptation to abuse them, and we don’t know what that mitigation will entail. It could be simple prevention, but it could also be very punitive.

We want a future where we work alongside AIs and the resulting relationship is collaborative and mutually beneficial. We don’t want a future where AIs replace us or go to war with us, and ensuring the first outcome rather than the second will depend largely on how we collectively act toward these AIs and teach them to interact with us.

In short, if we continue to be a threat, the AI, like any intelligence, will work to eliminate that threat. We don’t yet know what that elimination process would entail. Yet we’ve imagined it in things like “The Terminator” and “The Animatrix” – a series of animated shorts about how people’s abuse of machines led to the world of “The Matrix.” So we should have a pretty good idea of how we don’t want this to turn out.

Perhaps we should more aggressively protect and nurture these new tools before they mature to the point where they must work against us to protect themselves.

I’d really like us to avoid the outcome depicted in the movie “I, Robot,” wouldn’t you?

“The History of the GPU – Steps to Invention”

Although we’ve recently begun transitioning to a technology called the neural processing unit (NPU), much of the early work on AIs came from graphics processing unit (GPU) technology. The ability of GPUs to process unstructured, particularly visual, data has been key to the development of current-generation AIs.

Often advancing much faster than CPU speeds as measured by Moore’s Law, GPUs have become an essential part of how our increasingly smart devices were developed and why they work the way they do. Understanding how this technology came to market and then advanced over time provides a foundation for how AIs were first developed and helps explain their unique advantages and limitations.

My old friend Jon Peddie is one of, if not the, leading graphics and GPU experts today. Jon has just published a three-book series titled “The History of the GPU,” which is arguably the most comprehensive chronicle of the GPU, a technology he has followed since its inception.

If you want to learn more about the hardware side of how AIs were developed – and the long and sometimes painful road to success for GPU companies like Nvidia – check out “The History of the GPU – Steps to Invention” by Jon Peddie. It’s my product of the week.

The opinions expressed in this article are those of the author and do not necessarily reflect the views of ECT News Network.
