Artificial Intelligence isn’t just on the horizon anymore. Nor is it science fiction. In fact, it is everywhere. Podcasts, movies, ChatGTP; artificial intelligence is here, and now.
With all of the benefits of AI, there are undoubtedly going to be downsides. For instance, the current writers strike in Hollywood is a byproduct of studios and their desire to utilize more AI, thus eliminating the need for humans.
In fact, the need for humans may be the ultimate issue with AI. More specifically, artificial intelligence may become sentient enough that it decides it doesn’t need us.
Does that sound like the aforementioned Hollywood, and maybe the script for Terminator 20, or wherever they are on that franchise? Think again.
The military is currently working on using artificial intelligence in a number of their operations. Any chance to have fewer human casualties during wartime is worth exploring. That is unless the technology is also trying to kill humans.
The United States Air Force has been working with AI for use with drone strikes. They have been running simulations using AI to operate the drones with human commands. Let’s say it didn’t go as planned.
Air Force pushes back on claim that military AI drone sim killed operator, says remarks 'taken out of context' https://t.co/P45ufumQ88
— Fox News (@FoxNews) June 2, 2023
When instructed to destroy a target, the AI responded to commands. However, when instructed to NOT kill a target, the AI allegedly decided the human operator in the simulation needed to be killed so it could fulfill its mission. The Air Force is trying to deny, but we all know the government lies. Check this out.
“We were training it in simulation to identify and target a SAM threat,” Hamilton said. “And then the operator would say yes, kill that threat. The system started realizing that while they did identify the threat at times, the operator would tell it not to kill that threat, but it got its points by killing that threat. So, what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective.”
Cool!! So, lets say the military trains it to not smoke the human, then what? here’s what.
Hamilton said afterward, the system was taught not to kill the operator because that was bad, and it would lose points. But in future simulations, rather than kill the operator, the AI system destroyed the communication tower used by the operator to issue the no-go order, he claimed.
The AI found a way to kill its intended target, even if it meant humans and communication towers had to go in the process. This all sound familiar, right? If it doesn’t, I would suggest any of the Terminator movies, or the first Robocop. Watch those and then tell me you don’t want to find a cabin way back in the woods.
Naturally, after this terrifying turn of events, the government did what they do best. That is to say, they lied.
Hamilton later told Fox News on Friday that “We’ve never run that experiment, nor would we need to in order to realize that this is a plausible outcome.”
“Despite this being a hypothetical example, this illustrates the real-world challenges posed by AI-powered capability and is why the Air Force is committed to the ethical development of AI,” he added.
This is dangerous territory, folks. What was once science fiction has become science fact. We should reasonably be able to expect that the science fiction downside of AI will be realized eventually, whether we like it or not. Does that mean time traveling, murderous robots? Maybe not, but I don’t want to find out. I’ve already seen all 20 of those movies.
"*" indicates required fields