Pentagon AI more ethical than adversaries’ because of ‘Judeo-Christian society,’ USAF general says

defenseone.com

The path to ethical AI is a “very important discussion” being held at DOD’s “very highest levels,” says the service’s programs chief.



tl;dr: The headline is false; the general did not actually say that. I thought it sounded wrong, so I watched the video the article linked to in order to check. Sure enough, it was wrong. However, the reality may not be any more reassuring.


Hypothesis: Like, no, that's obviously wrong; either the headline is trash or the general made a whole tossed salad with mango sauce out of whatever the people working on it said. (stated before further investigation; stay tuned)


Updating: https://youtu.be/wn1yEovtYRM?t=3459


Okay, wow.

So the speaker is saying this at the end of the panel, in response to a question asking about the use of autonomous weapons.

They want to talk about who's trusted to make the decision of whether to employ lethal force in a combat situation: a human American soldier, who might be exhausted and not thinking clearly, or an algorithm that doesn't get tired.

And one thing they mention is that an enemy might not have ethics that would lead them to even care about that distinction. And they express that as "Judeo-Christian morality".

That doesn't sit right with me. It sounds to me, in that moment, like they're implying that people from other cultures could be less moral, and that we should be willing to be more free with our weapons towards such people. That sounds to me like the sort of bullshit that came out of the Vietnam War.

But the rest of the answer sounds like they're trying to point at the problem of making command decisions in scenarios where the opponent might deploy autonomous weapons first. If the enemy has already handed decision-making over to an algorithm, how does that affect what we should do?

And they're maybe expressing that to their expected audience — mind you, the Air Force is heavily infiltrated by far-right Christian radicals — in a way that they hope makes sense.


Conclusion: The headline is incorrect; the general did not actually say that a Pentagon AI would be more ethical for any reason; he was talking about the human ethical decision of whether to trust AI to make decisions. But what he did say is complicated and scary for different reasons, including the internal culture of the US Air Force.

> That doesn't sit right with me. It sounds to me, in that moment, like they're implying that people from other cultures could be less moral, and that we should be willing to be more free with our weapons towards such people.

This is, unfortunately, how many, many very religious people think. And it's not only insulting for everyone not following their beliefs, but also terrifying in my opinion.

People who believe their god is the only thing that makes them moral aren't really moral, because they never consider why it's important to, you know, not be an asshole. It's just compliance.

And the terrifying part is that since their only frame of reference regarding what "good" is would be whatever their religion dictates, it's always on the verge of breaking completely. You just need to listen to the wrong interpretation at the wrong moment in your life.
