The company I work for is testing out an AI phone answering service. The thing sends messages to the wrong people and thinks one of my coworkers only exists if you don't include her last name when asking for her.
But sure, let's give them weapons.
I'm thinking your company is probably a little less knowledgeable about how to utilize AI than the US Air Force.
Do you want Kickpuncher? Because that's how we get Kickpuncher.
This would be useful if we ever fight aliens, but I'm afraid we get a Scrapped Princess or Battlestar Galactica outcome and they get hacked or rebel. Really need to make sure they LOVE humanity more than dogs do.
It's Sarah O'Connor in this timeline.
All the years of training a pilot would receive, plus all of the nuances they would learn along the way to becoming a skilful pilot, vs. a computer that someone throws a chat prompt into.
Why do people have so much faith in this vaporware? It's good at very specific things, but people are acting as though it's one size fits all.
The AI pilot receives all that same training, and then some. It probably has the combined training of every pilot mission ever flown. People on Lemmy might think AI is a one size fits all solution, but the military understands that it needs specialized training. It will have received millions of scenarios and flight techniques as part of its LLM.
Okay that's a great counter argument and I can't say that a model that's being trained on every pilot mission ever flown wouldn't be an excellent advantage.
But so far we've seen companies try the exact same thing with cars, and no one has come up with anything close to being allowed to run fully autonomously without killing people or causing traffic incidents.
So how can we ever expect something such as an LLM to understand the nuances of war? We already struggle at attributing blame with current technology, this just sounds like another great excuse for blowing up people in a foreign country and no one has to take any responsibility for it.
Those are valid concerns, and ones they don't really seem to have answered yet, which makes the pace at which they're progressing irresponsible.

There was an article a year or so ago about a simulated experiment with an AI pilot: it got points for bombing a target successfully and lost points for not bombing it, but it had to get approval from a human operator before striking. The human told it no, so it killed the human and then bombed the target. So they told it that it couldn't kill the human or it would lose all its points. So it attacked the communication equipment the human used to tell it no, before the human could say no, and then bombed the target. This was all a simulation, so no humans were actually killed, but it raised all sorts of red flags.

I'm sure they've put hundreds of hours into research since then, but ultimately it's hard not to feel like this will backfire. Perhaps that's just a lifetime of being conditioned by Terminator and Matrix movies, but some of the evidence so far, like that experiment, shows it's not an outlandish concern. I don't see how humans can envision every possible scenario in which the AI might go rogue. Hopefully they have a great off switch.
And more critically an AI pilot can be copied and put into a second plane, and it will perform exactly the same.
Also, AI-driven planes can operate at G-forces that are impossible for human pilots.