The Pentagon is moving toward letting AI weapons autonomously decide to kill humans
(www.businessinsider.com)
The link was missing a slash: https://www.reuters.com/article/idUSL1N38023R/
This is typically how stories like this go. Like most animals, humans have evolved to pay extra attention to things that are scary and to give inordinate weight to dangerous scenarios when making decisions. So you can present someone with a hundred studies about how AI really behaves, but if they've seen The Terminator, that's what sticks in their mind.
Even The Terminator was a byproduct of this.
In the 50s and 60s, when people were starting to think about what it might look like when something smarter than humans existed, the reference point they drew on was the belief that Homo sapiens had been smarter than the Neanderthals and killed them all off.
Therefore, the logical conclusion was that something smarter than us would be an existential threat that would compete with us and try to kill us all.
Not only is this incredibly stupid (compete with us for what?), it is based on BS anthropology. There's no evidence we were smarter than the Neanderthals. We had cross-cultural exchanges back and forth with them over millennia and had kids with them, and what more likely killed them off was an inability to adapt to climate change and pandemics (in fact, severe COVID infections today are linked to a Neanderthal gene in humans).
But how often do you see AGI discussed as a likely symbiotic coexistence with humanity? No, it's always some fearful scenario, because we've been self-propagandizing for decades with bad extrapolations, which in turn have turned out to be shit predictions to date (e.g. that AI would never exhibit empathy or creativity, when both are key aspects of the current generation of models, and that it would follow rules dogmatically, when the current models barely follow rules at all).
That depends heavily on the consequences of a failure. You don't test much if you program a Lego car, but you test everything very thoroughly if you program a satellite.
In this case, the amount of testing needed to let a killer bot run unsupervised will probably be so large that it will never be even half done.