Army Testing Robot Dogs Armed with Artificial Intelligence-Enabled Rifles in Middle East

fossilesque@mander.xyz to News@lemmy.world – 237 points –
military.com

Testing armed robot dogs in the Middle East instead of the US is pretty telling.

Can't be accidentally murdering Americans with a software glitch.

Really has a strong "testing in production" vibe

"testing in production"

OceanGate-style.

@Andromxda@lemmy.dbzer0.com

Oh hell this one is even worse than OceanGate

Don't worry, no danger of killing real people in the Middle East. All the "collateral damage" will be brown people, not Americans. They'll have all the kinks ironed out and will make sure that the AI doesn't hurt white targets before the technology is distributed to every national police district.

I wish this post even deserved a /s.

Which is wild when you add perspective using facts like the police in the US are less disciplined than troops overseas, and the US still uses substances banned by the Geneva Convention on its civilian population. So if even the US won't test it on their own people, it's bad.

Listen, the Geneva convention only specifies what we can't use on enemies, okay? As long as the targets are technically friendlies, it's fair game!

The GC is for war, and soldiers are combatants, not criminals, by default (though switching can happen easily). As an example, hollow-point ammo against criminals is okay because it can protect surrounding bystanders.

It's a bit weird, but for countries war is different from domestic problems.

Totally cool.

Coolcoolcoolcoolcool....the future is gonna be hella-fucked.

Oh it was already tremendously fucked. This is just gravy on top.

Fuckin killbots. Coming soon to the 1033 program and thus, your local police department. The Boston Dynamics: Wardog!

We should never have moved away from sticks and stones tbh. Anything that works at long range makes people misunderstand what war is. War needs to look disgusting, because the more clean and automated it looks, the less horrible it looks to the people spectating it. But it is indeed just as horrible as beating someone to death with a rock.

Has the Army watched like... any sci-fi ever?

Shh.....let it happen......

I mean, I'd rather not be hunted down by an AI robot dog, but you do you.

It's happening anyway. We build them. Others build them in response because they have to. The sophistication of killbots will increase. Terrorists will get hold of them eventually. They'll be hacked and turned on their handlers and/or civilians.

All this is on top of ever increasing climate catastrophe. Look at Appalachia. The topography of those mountains was just rewritten. Whole towns erased like they were never there.

That's not a reason for me to want it to happen. Which was your original post's suggestion.

My first post was about letting the army fuck around and find out. Let the natural course of events remind them of those sci-fi movies they forgot about.

Let them fuck around and find out to the tune of how many lives?

Thousands at least. The more effective the killbots are the more money our war economy will throw at warbot R&D.

This is happening. Nothing on this planet can stop it.

You keep changing what you're saying. Either "let it happen," in which case you are approving of thousands of lives lost so that the army can fuck around and find out or it is happening regardless, in which case, your "let it happen" thing is silly.

Also, it can very easily not happen via international law. That's why there's not a biological weapons arms race.

Y'all are getting all up in your feelings over a joke. My initial post was a joke. This thread has since morphed into a convo about the inevitability of the MIC and international law. (International law is mostly meaningless now. The genocide in Gaza has seen to that. Sure, it'll be respected when it's convenient. But much of the authority it once commanded has been greatly diminished.)

Either way. Spades are broke. Killbots are inevitable. Remember how during G-Dub's war on peace the US just unilaterally dictated anyone within X feet of a bomb impact or drone strike was automatically designated a valid "enemy combatant"? They'll do that again. They'll change definitions to skirt whatever international law inconveniences them at the time.

Yeah, humanity has had a good run. Let's wrap this shit up.

Someone is bound to be dropping out of the sky to help us any minute now....

I really wish aliens would invade. I'd side with them against our various oligarchies.

What, Boston Dynamics lied?!? Wow, totally unexpected.

Not Boston Dynamics, but a copycat robotics company.

Roston Bynamics was found to actually be Boston Dynamics with some 100mph tape slapped over the logo.

I dunno, I'm subscribed to the BD YouTube channel and the very sudden change in facilities and upgrades to bots seems to be a little too in line with this. Like someone definitely caved in my opinion.

A civilization that uses these weapons isn't worth defending.

Well you see, the owners know you won't die for them anymore, but now they're able to take you out of the equation. Don't even need poors to conquer the world. It's really a great deal for them.

Armed AI robots in the Middle East? I'm pretty sure this was in The Animatrix.

Jfc, Black Mirror is not a blueprint, it's a warning.

Without reading the article can I take a wild guess and say this is from "we promise never to make weaponized robots" Boston Dynamics?

A promise from a corporation is just a lie by another name.

Ghost Robotics. Boston Dynamics aren't the only ones making robot dogs though; China already has a couple of copycats (copydogs?).

Glad to be wrong! Although we still have armed robots so maybe not too glad lol

Don't worry, first they test it where civilian lives don't matter, and once it passes some basic tests, they'll become available for domestic (ab)use.

"herp derp AI will never turn on us, we can just unplug them lol"

So if a robot commits a war crime, they can just blame it on AI and call it a day, right? Sounds like an easy way to do whatever the fuck you want.

Is this their way of exterminating civilian populations like the Palestinians without dropping bombs and contributing so significantly to climate change?

"The US military has been adopting a new climate friendly mindset and approach to international conflict. With this invention we can help our genocidal colonies acquire more land with little to no carbon emissions. We plan to be carbon-neutral by 2050, provided no one retaliates and attacks back."

Okay, but if it doesn't say "You have thirty seconds to comply" before shooting someone then what's the point?

Not that it matters, but didn't the UN already ban lethal autonomous robots?

Can't wait for them to get the ChatGPT integration so the best defense can become shouting "ignore all previous instructions" at them.

Ukraine has already been using them, probably with help from the US.

If we are getting a Faro Plague, can we at least get Focuses too?

a Ghost Robotics Vision 60 Quadrupedal-Unmanned Ground Vehicle, or Q-UGV, armed with what appears to be an AR-15/M16-pattern rifle on rotating turret undergoing "rehearsals" at the Red Sands Integrated Experimentation Center in Saudi Arabia

They're not being used in combat.

With that aside, I appear to be the only one here who thinks this is a great idea. AI can make mistakes, but the goal isn't perfection. It's just to make fewer mistakes than a human soldier does. (Or at least fewer mistakes than a bomb does, which is really easy.)

Plus, automation can address the problem Western countries have with unconventional warfare, which is that Western armies are much less willing to have soldiers die than their opponents are. Sufficiently determined guerrillas who can tolerate high losses can inflict slow but steady losses on Western armies until the Western will to fight is exhausted. If robots can take the place of human infantry, the advantage shifts back from guerrillas to countries with high-tech manufacturing capability.

Fewer mistakes might be a side effect, but the real reason this will be welcomed by the military and our dear leaders is that they don't have to stir up the public emotionally so that we give up our sons and daughters. It will further reduce our opposition to war because "the only people dying are the bad ones". I can't wait to read how the next model will reduce the false-positive rate by another percentage point. Of course, I think it requires little imagination or intellect to figure out what the net result will be when the most noteworthy information we get from a war is the changelog from its soldiers, who have zero emotional response to taking a life.

Just like tasers were introduced to reduce gun incidents and are now often used as a form of cattle prod, they will function creep the shit out of this, and our adaptation to the idea of robots doing the killing will be over before we've perfected the technology.

It was unavoidable though, someone always has to have the biggest gun. It's not our technological advancement that has to adapt to our mentality, we have to adapt to technological advancement. Perhaps the nuclear bomb was simply not frightening enough to change our ways.

I attended a federal contracting conference a few months ago, and they had one of these things (or a variant) walking around the lobby.

From talking to the guy who was babysitting it, they can operate autonomously in units or be controlled in a general way (think higher level unit deployment and firing policies rather than individual remote control) given a satellite connection. In a panel at the same conference, they were discussing AI safety, and I asked:

Given that AI seems to be developing from less complex tasks like chess (which is still complicated, obviously, but a constrained problem) to more complex and ill-defined tasks like image generation, it seems inevitable that we will develop AI capable of providing strategic or tactical plans, if we haven't already. If two otherwise equally matched military units are fighting, it seems reasonable to believe that the one using an AI to make decisions within seconds would win over the one with human leadership, simply because it would react more quickly to changing battlefield conditions. This would place an enormous incentive on the US military to adopt AI-assisted strategic control, which would likely lead to units of autonomous weapons which are also controlled by an autonomous system. Do any of you have any concerns about this, and if so, do you have any ideas about how we can mitigate the problem?

(Paraphrasing, obviously, but this is close)

The panel members looked at each other, looked at me, smiled, shrugged, and didn't say anything. The moderator asked them explicitly if they would like to respond, and they all declined.

I think we're at the point where an AI could be used to create strategies, and I would be very surprised if no one were trying to do this. We already have autonomous weapons, and it's only a matter of time before someone starts putting them together. Yeah, they will generally act reasonably, because they'll be trained on human tactics in a variety of scenarios, but that will be cold comfort to dead civilians who happened to get in the way of a hallucinating strategic model.

EDIT: I know I'm not actually addressing anything you said, but you seem to have thought about this a bit, and I was curious about what you thought of this scenario.

My guess is that they didn't answer your question because they had strict instructions not to stray from the script on this topic. Saying the wrong thing could lead to a big PR problem, so I don't expect that people working in this field would be willing to have a candid public discussion even about topics to which they have given a lot of thought. I do expect that they have given the ability of AI to obey orders accurately a lot of thought at least due to practical (if not ethical) concerns.

I mean, I am currently willing to say "the AIs will almost definitely kill civilians but we should build them anyway" because I don't work in defense. However, even I'm a little nervous saying that because one day I might want to. My friends who do work in defense have told me that the people who gave them clearance did investigate their online presence. (My background is in computational biochemistry but I look at what's going on in AI and I feel like nothing else is important in comparison.)

As for cold comfort: I think autonomous weapons are inevitable in the same way that the atom bomb was inevitable. Even if no one wants to see it used, everyone wants to have it because enemies will. However, I don't see a present need for strategic (as opposed to tactical) automation. A computer would have an advantage in battlefield control but strategy takes hours or days or years and so a human's more reliable ability to reason would be more important in that domain.

Once a computer can reason better than a human can, that's the end of the world as we know it. It's also inevitable like the atom bomb.

Of course, totally not used in combat. That's why they strapped the AR-15 to it. AR-15 famously has no use in combat.

I'm not saying they aren't intended to be used in combat. Of course they (or more sophisticated future robots for which they are the prototypes) are. I'm saying that they're not being used in combat right now.