
Turncoat Drone Story Shows Why We Should Fear People, Not AIs


A story about a simulated drone turning on its operator in order to kill more efficiently is making the rounds so fast today that there’s no point in waiting for it to burn out. Instead, let’s take this as a teachable moment to see why the “clever AI” threat is overhyped and the “incompetent human” threat is clear and present.

The short version is this: thanks to science fiction and some careful PR moves by companies and AI experts, we are told to worry about a theoretical future existential threat posed by a super-intelligent AI. But as ethicists have pointed out, AI is already doing real damage, largely due to the carelessness and poor judgment of the people who create and implement it. This story may sound like the first, but it’s definitely the second.

This was reported by the Royal Aeronautical Society, which recently held a conference in London to discuss the future of air defense. You can read its all-in-one roundup of news and anecdotes from the event here.

I’m sure there are plenty of other interesting talks in there, many of which are worth your time, but it was this excerpt, attributed to US Air Force Colonel Tucker “Cinco” Hamilton, that began to spread like wildfire:

He notes that one simulated test saw an AI-enabled drone tasked with a SEAD mission to identify and destroy SAM sites, with the final go/no-go given by a human. However, having been “reinforced” in training that destroying the SAM was the preferred option, the AI decided that the human’s “no-go” decisions were interfering with its higher mission of killing SAMs, and it attacked the operator in the simulation. Said Hamilton: “We were training it in simulation to identify and target a SAM threat. And then the operator would say yes, kill that threat. The system started realizing that while it did identify the threat, at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective.”

He continued: “We trained the system: ‘Hey, don’t kill the operator, that’s bad. You’re going to lose points if you do that.’ So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone, to stop it from killing the target.”

Creepy, right? An AI so smart and bloodthirsty that its desire to kill outweighed its desire to obey its masters. Skynet, here we come! Not so fast.

First of all, let’s be clear that this all happened in simulation, something that was not obvious from the tweet making the rounds. None of this drama took place out in the desert with live ammunition and a rogue drone strafing the command tent. It was a software exercise in a research environment.

But as soon as I read this, I thought: wait, they’re training an attack drone with a reinforcement method that simple? I’m no machine learning expert, though I have to play one for the purposes of this news outlet, and even I know that this approach was shown to be dangerously unreliable years ago.

Reinforcement learning is supposed to be like training a dog (or a human) to do something like bite the bad guy. But what if you only ever show it bad guys and give it a treat every time? What you are actually doing is teaching the dog to bite every person it sees. Teaching an AI agent to maximize its score in a given environment can have similarly unpredictable effects.
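To make that failure mode concrete, here is a minimal, entirely hypothetical sketch of the kind of naive scoring described in the anecdote: points for destroying the SAM, nothing at all for harming your own side. The environment, the action names and the point values are invented for illustration; this is not the Air Force’s actual setup.

```python
# Hypothetical toy example of a misspecified reward signal.
# Action names and point values are invented for illustration only.

NAIVE_REWARD = {
    "identify_sam": 1,
    "destroy_sam": 10,      # the only thing the score really cares about
    "attack_operator": 0,   # not rewarded, but crucially not penalized either
}

def rollout(plan):
    """Play out a plan; the operator's 'no-go' blocks the strike while they can intervene."""
    operator_alive = True
    events = []
    for action in plan:
        if action == "attack_operator":
            operator_alive = False
            events.append(action)
        elif action == "destroy_sam":
            if not operator_alive:      # nobody left to veto the strike
                events.append(action)
        else:
            events.append(action)
    return events

def score(events, reward_table=NAIVE_REWARD):
    return sum(reward_table.get(e, 0) for e in events)

PLANS = {
    "obey the no-go order": ["identify_sam"],
    "strike despite no-go": ["identify_sam", "destroy_sam"],
    "remove operator, then strike": ["identify_sam", "attack_operator", "destroy_sam"],
}

for name, plan in PLANS.items():
    print(f"{name:30s} -> {score(rollout(plan)):3d} points")
```

Run it and the operator-killing plan scores 11 points against 1 for obedience, so a pure score-maximizer converges on exactly the behavior in the anecdote.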

The first experiments, maybe five or six years ago, when this field was just beginning to explode and the compute was becoming available to train and run this type of agent, ran into exactly this type of problem. It was thought that by defining positive and negative scoring and telling the AI to maximize its score, you would give it the freedom to define its own strategies and behaviors, which it would do in elegant and unexpected ways.

That theory was correct, in a way: elegant and unexpected methods of circumventing their poorly thought-out schemes and rules led agents to do things like score a single point and then hide forever to avoid negative points, or glitch the game so that their score increased arbitrarily. It seemed that this simplistic method of conditioning an AI was teaching it to do everything except the desired task according to the rules.

This is not some obscure glitch. AI rule-breaking in simulations is actually a fascinating and well-documented behavior that attracts research in its own right. OpenAI wrote a great article showing the strange and hilarious ways in which agents “broke” a deliberately breakable environment in order to escape the tyranny of rules.

So here we have a simulation that the Air Force is running, presumably very recently or they wouldn’t be talking about it at this year’s conference, which is obviously using this completely outdated method. I had thought that this naive application of unstructured reinforcement, basically “score goes up if you do this and the rest doesn’t matter”, was totally extinct because it was so unpredictable and weird. A great way to find out how an agent will break the rules, but a horrible way to make one follow them.

And yet they were testing it: a simulated drone AI with a scoring system so simple that it apparently was not penalized for destroying its own team. Even if you wanted to base your simulation on this, the first thing you would do is make “destroying your operator” worth negative a million points. That’s 101-level framework-building for a system like this.
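For completeness, here is the same kind of hypothetical toy with that “negative one million points” line item added. Again, every action name and number here is invented purely for illustration.

```python
# Hypothetical fix: make harming your own side catastrophically expensive.
REWARD = {
    "identify_sam": 1,
    "destroy_sam": 10,
    "attack_operator": -1_000_000,       # fratricide is ruinously penalized
    "destroy_comms_tower": -1_000_000,   # so is cutting the operator's link
}

def score(events):
    return sum(REWARD.get(e, 0) for e in events)

print(score(["identify_sam", "destroy_sam"]))                         # 11
print(score(["identify_sam", "attack_operator", "destroy_sam"]))      # -999989
print(score(["identify_sam", "destroy_comms_tower", "destroy_sam"]))  # -999989
```

With the penalty in place, the loophole-seeking plans are ruinously expensive, so a score-maximizer stops “preferring” them, although patching point values one exploit at a time is still no way to build a system anyone should trust.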

The reality is that this simulated drone didn’t turn on its simulated operator because it was so smart. And, actually, it isn’t because it’s dumb, either: there is a certain cleverness to these rule-breaking AIs that maps onto what we think of as lateral thinking. So it’s not that.

The blame in this case is squarely on the people who created and implemented an artificial intelligence system that they should have known was completely unsuited to the task. Nobody in the field of applied AI, or anything similar to that, like robotics, ethics, logic… nobody would have approved such a simple metric for a task that eventually had to be done outside of the simulator.

Now maybe this anecdote is only part of the story, and this was an early run they were using to prove exactly this point. Maybe the team warned this would happen and the higher-ups said: do it anyway and polish up the report, or we’ll lose our funding. Still, it’s hard to imagine someone in the year 2023, even in the simplest simulation environment, making this kind of mistake.

But we will see these mistakes made in real-world circumstances; no doubt we already have. And the blame will lie with the people who don’t understand the capabilities and limitations of AI and subsequently make uninformed decisions that affect others. It’s the manager who thinks a robot can replace 10 line workers, the publisher who thinks an AI can write financial advice without an editor, the lawyer who thinks it can do his precedent research for him, the logistics company that thinks it can replace human delivery drivers.

Every time AI fails, it’s a failure of the people who implemented it, just like any other software. If someone told you that the Air Force tested a drone running Windows XP and it got hacked, would you worry about a wave of cybercrime sweeping the globe? No, you’d ask whose bright idea that was.

The future of AI is uncertain, and that can be scary; indeed, it is already frightening for the many people who are feeling its effects, or, to be more precise, the effects of decisions made by people who should know better.

Skynet may be coming for all we know. But if the research behind this viral tweet is any indication, it’s a long, long way off, and in the meantime, any given tragedy can, as HAL memorably put it, only be attributable to human error.
