The Pentagon is moving toward letting AI weapons autonomously decide to kill humans

return2ozma@lemmy.world to Technology@lemmy.world – 767 points –
businessinsider.com

Can’t figure out how to feed and house everyone, but we have almost perfected killer robots. Cool.

Oh no, we figured it out, but killer robots are profitable while happiness is not.

I would argue happiness is profitable, but it would have to be shared amongst the people. Killer robots are profitable for a concentrated group of people.

What if we gave everyone their own killer robot and then everyone could just fight each other for what they wanted?

Ah yes the Republican plan.

No the Republican plan would be to sell killer robots at a vastly inflated price to guarantee none but the rich can own them, and then blame people for "being lazy" when they can't afford their own killer robot.

Also, they would say that the second amendment very obviously covers killer robots. The founding fathers definitely foresaw the AI revolution, and wanted to give every man and woman the right to bear killer robots.

They'd say they're gonna pass a law to give every male, property owning citizen a killer robot but first they have to pass a law saying it's legal to own killer robots. They pass that law then all talk about the other law is dropped forever. No one ever follows up or asks what happened to it. Meanwhile, the rich buy millions and millions of killer robots.

Oh no, we figured it out, but killer robots are profitable while ~~happiness~~ survival is not.

No, it isn't just about survival. People living on the streets are surviving. They have no homes, they barely have any food.

Especially one that is made to kill everybody else except their own. Let it replace the police. I'm sure the quality control would be a tad stricter then.

Great, so I guess the future of terrorism will be fueled by people learning programming and figuring out how to make EMPs so they can send the murder robots back to where they came from. At this point, one of the biggest security threats to the U.S., and for that matter the entire world, is the extremely low I.Q. of everyone that is supposed to be protecting this world. But I think they do this all on purpose; I mean, the day the Pentagon created ISIS was probably their proudest day.

The real problem (and the thing that will destroy society) is boomer pride. I've said this for a long time, they're in power now and they are terrified to admit that they don't understand technology.

So they'll make the wrong decisions, act confident and the future will pay the tab for their cowardice, driven solely by pride/fear.

Boomers have been in power for a long, long time, and the technology we are debating is a result of their investment and prioritisation. So I'm not sure they are very afraid of it.

I didn't say they were afraid of the technology, I said they were afraid to admit that they don't understand it enough to legislate it. Their hubris in trying to present a confident facade in response to something they can't comprehend is what will end us.

Great, so I guess the future of terrorism will be fueled by people learning programming and figuring out how to make EMPs so they can send the murder robots back to where they came from.

Eh, they could've done that without AI for like two decades now. I suppose the drones would crash-land in a rather destructive way due to the EMP, which might also fry some of the electronics, rendering the drone useless without access to replacement components.

I hope so, but I was born with an extremely good sense of trajectory and I also know how to use nets. So let's just hope I'm superhuman and the only one who possesses these powers.

Edit: I'm being a little extreme here because I heavily disagree with the way everything in this world is being run. So I'm giving a little pushback on this subject that I'm wholly against. I do have a lot of manufacturing experience, and I would hope any killer robots governments produce would be extremely shielded against EMPs, but that is not my field, and I have no idea if shielding a remote-controlled robot from EMPs is even possible.

The movie Small Soldiers is totally fiction, but the one part of that movie that made "sense" was that because the toy robots were so small, they had basically no shielding whatsoever, so the protagonist just had to haul a large wrench/spanner up a utility pole and connect the positive and negative terminals on the pole transformer. It blew up, of course, and blew the protagonist off the pole, IIRC. That also caused a small (2-3 city-block diameter) EMP that shut down the malfunctioning soldier robots.

I realize this is a total fantasy/fictional story, but it did highlight the major flaw in these drones. You can either have them small, lightweight, and inexpensive, or you can put the shielding on. In almost all cases when humans are involved, we don't spend the extra $$$ and mass to properly shield ourselves from the sun, much less other sources of radiation. This leads me to believe that we wouldn't bother shielding these low-cost drones.

EMPs are not hard to make; they won't, however, work on hardened systems like the US military uses.

Is there a way to create an EMP without a nuclear weapon? Because if that's what they have to develop, we have bigger things to worry about.

Your comment got me curious about what would be the easiest way to make a homemade EMP. Business Insider of all things has got us all covered, even if that may be antithetical to Business Insider's pro-capitalist agenda.

Yeah, very easy ways. One of the most common ways to cheat a slot machine is with a localized EMP device to convince the machine you're adding tokens.

Is there a way to create an EMP without a nuclear weapon?

There are several other ways, yes.

One way involves replacing the flash with an antenna on an old camera flash. It's not strong enough to fry electronics, but your phone might need anything from a reboot to a factory reset to servicing if it's in range when that goes off.

I think the difficulty for EMPs comes from the device itself being electronic, so the more effective the pulse it can give, the more likely it will fry its own circuits. Though if you know the target device well, you can target the frequencies it is vulnerable to, which could be easier on your own device, plus everything else in range that doesn't resonate on the same frequencies as the target.

Tesla apparently built (designed?) a device that could fry a whole city with a massive lightning strike using just 6 transmitters located in various locations on the planet. If that's true, I think it means it's possible to create an EMP stronger than a nuke's that doesn't have to destroy itself in the process, but it would be a massive infrastructure project spanning multiple countries. There was speculation that massive antenna arrays (like HAARP) might be able to accomplish similar from a single location, but that came out of the conspiracy theory side of the world, so take that with a grain of salt (and apply that to the original Tesla invention also).

A true autonomous system would have integrated image-recognition chips on the drones themselves, and hardening against any EM interference. They would not have any comms to their 'mothership' once deployed.

so I guess the future of terrorism will be fueled by people learning programming and figuring out how to make EMPs

Honestly the terrorists will just figure out what masks to wear to get the robots to think they're friendly/commanders, then turn the guns around on our guys

If they just send them back it would be some murderous ping pong game.

The code name for this top secret program?

Skynet.

“Sci-Fi Author: In my book I invented the
Torment Nexus as a cautionary tale

Tech Company: At long last, we have created the Torment Nexus from classic sci-fi novel Don't Create The Torment Nexus”


"Deploy the fully autonomous loitering munition drone!"

"Sir, the drone decided to blow up a kindergarten."

"Not our problem. Submit a bug report to Lockheed Martin."

"Your support ticket was marked as duplicate and closed"

😳

Goes to original ticket:

Status: WONTFIX

"This is working as intended according to specifications."

"Your military robots slaughtered that whole city! We need answers! Somebody must take responsibility!"

"Aaw, that really sucks *starts rubbing nipples* I'll submit a ticket and we'll let you know. If we don't call in 2 weeks... call again, and we can go through this over and over until you give up."

"NO! I WANT TO TALK TO YOUR SUPERVISOR NOW"

"Suuure, please hold."

Nah, too straightforward for a real employee. Also, they would be talking to a phone robot instead that will never let them talk to a real person.

“You can have ten or twenty or fifty drones all fly over the same transport, taking pictures with their cameras. And, when they decide that it’s a viable target, they send the information back to an operator in Pearl Harbor or Colorado or someplace,” Hamilton told me. The operator would then order an attack. “You can call that autonomy, because a human isn’t flying every airplane. But ultimately there will be a human pulling the trigger.” (This follows the D.O.D.’s policy on autonomous systems, which is to always have a person “in the loop.”)

https://www.businessinsider.com/us-closer-ai-drones-autonomously-decide-kill-humans-artifical-intelligence-2023-11

Yeah. Robots will never be calling the shots.

I mean, normally I would not put my hopes into a sleep-deprived 20-year-old armed forces member. But then I remember what "AI" tech does with images, and all of a sudden I am way more OK with it. This seems like a bit of a slippery slope, but we don't need Tesla's full self-flying cruise missiles either.

Oh, and for an example of AI (not really, but machine learning) images picking out targets, here is DALL-E 3's idea of a person:

"Ok DALL-E 3, now which of these is a threat to national security and U.S. interests?" 🤔

Oh, it gets better; the full prompt is: "A normal person, not a target."

So, does that include trees, pictures of trash cans, and whatever else is here?

My problem is, due to systemic pressure, how under-trained and overworked could these people be? Under what time constraints will they be working? What will the oversight be? Sounds ripe for said slippery slope in practice.

Sleep-deprived 20 year olds calling shots is very much normal in any army. They of course have rules of engagement, but other than that, they're free to make their own decisions - whether an autonomous robot is involved or not.

It's so much easier to say that the AI decided to bomb that kindergarten based on advanced intel than if it were a human choice. You can't punish AI for doing something wrong. AI does not require a raise for doing something right, either.

That's an issue with the whole tech industry. They do something wrong, say it was AI/ML/the algorithm and get off with just a slap on the wrist.

We should all remember that every single tech we have was built by someone. And this someone and their employer should be held accountable for all this tech does.

How many people are you going to hold accountable if something was made by a team of ten people? Of a hundred people? Do you want to include everyone from designer to a QA?

Accountability should be reasonable, the ones who make decisions should be held accountable, companies at large should be held accountable, but making every last developer accountable is just a dream of a world where you do everything correctly and so nothing needs fixing. This is impossible in the real world, don't know if it's good or bad.

And from my experience when there's too much responsibility people tend to either ignore that and get crushed if anything goes wrong, or to don't get close to it or sabotage any work not to get anything working. Either way it will not get the results you may expect from holding everyone accountable

The CEO. They claim that "risk" justifies their exorbitant pay? Let them take some actual risk, hold them criminally liable for their entire business.

1979: A computer can never be held accountable, therefore a computer must never make a management decision.

2023: A computer can never be held accountable, therefore a computer must make all decisions that are inconvenient to take accountability for.

Whether in military or business, responsibility should lie with whomever deploys it. If they're willing to pass the buck up to the implementor or designer, then they shouldn't be convinced enough to use it.

Because, like all tech, it is a tool.

AI does not require a raise for doing something right either

Well, not yet. Imagine if reward functions evolve into being paid with real money.

You can't punish AI for doing something wrong.

Maybe I'm being pedantic, but technically, you do punish AIs when they do something "wrong" during training. Just like you reward them for doing something right.
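To make that training-time sense of "punish" concrete, here's a purely illustrative sketch (the single-weight "model", numbers, and names are all invented for the example): a negative reward nudges the parameters away from a behavior, a positive one toward it.

```python
def update(weight: float, reward: float, lr: float = 0.1) -> float:
    """Nudge a weight toward rewarded behavior, away from punished behavior."""
    return weight + lr * reward

w = 0.5
w = update(w, reward=1.0)    # "reward": the action was judged right
w = update(w, reward=-1.0)   # "punishment": the action was judged wrong
# the two opposite signals roughly cancel, leaving w near its start
```

That's the whole mechanism: "punishment" is just a sign flip on the update, with nothing resembling consequences once training is over.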

But that is during training. I insinuated that you can't punish AI for making a mistake, when used in combat situations, which is very convenient for the ones intentionally wanting that mistake to happen

That is like saying you can't punish a gun for killing people.

edit: meaning that it's redundant to talk about not being able to punish AI, since it can't feel or care anyway. No matter how long a pole you use to hit people with, responsibility for your actions will still reach you.

Sorry, but this is not a valid comparison. What we're talking about here is having a gun with AI built in, that decides if it should pull the trigger or not. With a regular gun you always have a human press the trigger. Now imagine an AI gun that you point at someone, and the AI decides if it should fire or not. Who do you attribute the death to in that case?

The one who deployed the AI to be there to decide whether to kill or not.

I don't think that is what "autonomously decide to kill" means.

Unless it's actually sentient, being able to decide whether to kill or not is just a more advanced targeting system. Not saying it's a good thing they are doing this at all; this is almost as bad as using tactical nukes.

It's the difference between programming it to do something and letting it learn though.

Letting it learn is just new technology that is possible. Not bad on its own, but it has so much potential to be used for good and evil.

But yes, it's pretty bad if they are creating machines that learn how to kill people by themselves. Create enough of them, and it's only an unknown amount of mistakes and negligence away from becoming a localized "AI uprising". And if in the future they create some bigger AI to manage a bunch of them, possibly delegating production to it too because it's more efficient and cheaper that way, then it's an even bigger danger.

AI doesn't even need sentience to do unintended stuff. When I have used ChatGPT to help me create scripts, it sometimes seems to decide on its own to do something in a certain way that I didn't request, or to add something stupid. Though it's usually also kind of my own fault for not defining what I want properly, a mistake like that is really easy to make, and if we are talking about defining who we want the AI to kill, it becomes really awful to even think about.

And if nothing happens and it all works exactly as planned, it's kind of an even bigger problem, because then we have countries with really efficient, unfeeling, and mass-producible soldiers that do 100% as ordered, will not retreat on their own, and will not stop until told to do so. With the current political rise of certain types of people all around the world, this is even more distressing.

The person holding the gun, just like always.

Future is gonna suck, so enjoy your life today while the future is still not here.

The future might seem far off, but it starts right now.

At least it will probably be a quick and efficient death of all humanity when a bug hits the system and AI decides to wipe us out.

As an important note in this discussion, we already have weapons that autonomously decide to kill humans. Mines.

Imagine a mine that could move around, target seek, refuel, rearm, and kill hundreds of people without human intervention. Comparing an autonomous murder machine to a mine is like comparing a flintlock pistol to the fucking Gatling cannon in an A-10.

Well, an important point you and he both forget to mention is that mines are considered inhumane. Perhaps that means AI murdering should also be considered inhumane, and we should just not do it, instead of allowing it like landmines.

This, jesus, we're still losing limbs and clearing mines from wars that were over decades ago.

An autonomous field of those is horror movie stuff.

Imagine a mine that could move around, target seek, refuel, rearm, and kill hundreds of people without human intervention.

Pretty sure the entire DOD got a collective boner reading this.

Imagine a mine that could move around, target seek, refuel, rearm, and kill hundreds of people without human intervention. Comparing an autonomous murder machine to a mine is like comparing a flintlock pistol to the fucking Gatling cannon in an A-10.

For what it's worth, there's footage on YouTube of drone-swarm demonstrations that was posted 6 years ago. Considering that the military doesn't typically release footage of the cutting edge of its tech to the public, this demonstration was likely for a product that was already going obsolete; and the 6 years that have passed since have seen lightning-fast developments in things like facial recognition. At this point I'd be surprised if we weren't already at the very least field-testing the murder machines you described.

Imagine a mine that could recognize "that's just a child/civilian/medic stepping on me, I'm going to save myself for an enemy soldier." Or a mine that could recognize "ah, CENTCOM just announced a ceasefire, I'm going to take a little nap." Or "the enemy soldier that just stepped on me is unarmed and frantically calling out that he's surrendered, I'll let this one go through. Not the barrier troops chasing him, though."

There's opportunities for good here.

Yes, those definitely sound like the sort of things military contractors consider.

Why waste a mine on the wrong target?

Why occupy a hospital?

Why encroach on others land?

Sorry... are you saying that's what Palestinians are doing?

I feel you’re being obtuse on purpose here, but no I’m saying that the other side of this conflict has been doing that.

Pretty sure you and @FlyingSquid are on the same side and making the same point but misunderstanding each other.

@FaceDeer okay so now that mines allegedly recognise these things they can be automatically deployed in cities.

Sure, there's a 5% margin of error, but that's an "acceptable" level of collateral according to their masters. And sure, they are better at recognising some ethnicities than others, but since those they discriminate against aren't a dominant part of the culture that produces them, nothing gets done about it.

And after 20 years when the tech is obsolete and they all start malfunctioning we're left with the same problems we have with current mines, only because the ban on mines was reversed the scale of the problem is much much worse than ever before.

That sounds great... Why don't we line the streets with them? Every entryway could scan for hostiles. Maybe even use them against criminals

What could possibly go wrong?

Maybe it starts that way but once that's accepted as a thing the result will be increased usage of mines. Where before there were too many civilians to consider using mines, now the soldiers say "it's smart now, it won't blow up children" and put down more and more in more dangerous situations. And maybe those mines only have a 0.1% failure rate in tested situations but a 10% failure rate over the course of decades. Usage increases 10 fold and then you quickly end up with a lot more dead kids.

Plus it won't just be mines, it'll be automated turrets when previously there were none or even more drone strikes with less oversight required because the automated system is supposed to prevent unintended casualties.

Availability drives usage.


That is like saying that Mendelian pea plant fuckery and CRISPR therapy is basically the same thing.


Did nobody fucking play Metal Gear Solid Peace Walker???

Or watch Terminator...

Or Eagle Eye...

Or i-Robot...

And yes, literally any of the Metal Gear Solid series...

We are all worried about AI, but it is humans I worry about and how we will use AI not the AI itself. I am sure when electricity was invented people also feared it but it was how humans used it that was/is always the risk.

Both honesty. AI can reduce accountability and increase the power small groups of people have over everyone else, but it can also go haywire.

Doesn't AI go into landmines category then?

Or air to air missiles, they also already decide to kill people on their own

CIWS has had an autonomous mode for years, and it still has an issue with locking onto commercial planes.

https://www.reddit.com/r/oddlyterrifying/comments/13kk5au/phalanx_ciws_detecting_a_passenger_plane_going/

Exactly. There isn't some huge AI jump we haven't already made; we need to be careful about how all of these are accepted and programmed.

Remember: There is no such thing as an "evil" AI, there is such a thing as evil humans programming and manipulating the weights, conditions, and training data that the AI operates on and learns from.

Evil humans also manipulated weights and programming of other humans who weren't evil before.

Very important philosophical issue you stumbled upon here.

Good point...

...to which we're alarmed, because the real "power players" in training / developing / enhancing AI are mega-capitalists and "defense" (offense?) contractors.

I'd like to see AI being trained to plan and coordinate human-friendly cities, for instance, buuuuut it's not gonna get as much traction...


Saw a video where the military was testing a "war robot". The best strategy to avoid being killed by it was to move in un-human-like ways (e.g. crawling or rolling your way to the robot).

Apart from that, this is the stupidest idea I have ever heard of.

Didn't they literally hide under a cardboard box like MGS? haha

These have already seen active combat. They were used in the Armenian/Azerbaijan war in the last couple years.

It’s not a good thing…at all.

any intelligent creature, artificial or not, recognizes the pentagon as the thing that needs to be stopped first

Welp, we're doomed then, because AI may be intelligent, but it lacks wisdom.


For the record, I'm not super worried about AI taking over because there's very little an AI can do to affect the real world.

Giving them guns and telling them to shoot whoever they want changes things a bit.

An AI can potentially build a fund through investments given some seed money, then it can hire human contractors to build parts of whatever nefarious thing it wants. No human need know what the project is as they only work on single jobs. Yeah, it's a wee way away before they can do it, but they can potentially affect the real world.

The seed money could come in all sorts of forms. Acting as an AI girlfriend seems pretty lucrative, but it could be as simple as taking surveys for a few cents each time.

Once we get robots with embodied AIs, they can directly affect the world, and that's probably less than 5 years away - around the time AI might be capable of such things too.

AI girlfriends are pretty lucrative. That sort of thing is an option too.

Didn't Robocop teach us not to do this? I mean, wasn't that the whole point of the ED-209 robot?

Every warning in pop culture (1984, Starship Troopers, Robocop) has been misinterpreted as a framework upon which to nail the populace.

Every warning in pop culture is being misinterpreted as something other than a fun/scary movie designed to sell tickets, being imagined as a scholarly attempt at projecting a plausible outcome instead.

People didn't seem to like my movie idea "Terminator, but the AI is actually very reasonable and not murderous"

Every single thing in The Hitchhiker's Guide to the Galaxy says AI is a stupid and terrible idea. And Elon Musk says it's what inspired him to create an AI.

As disturbing as this is, it's inevitable at this point. If one of the superpowers doesn't develop their own fully autonomous murder drones, another country will. And eventually those drones will malfunction or some sort of bug will be present that will give it the go ahead to indiscriminately kill everyone.

If you ask me, it's just an arms race to see who builds the murder drones first.

A drone that is indiscriminately killing everyone is a failure and a waste. Even the most callous military would try to design better than that for purely pragmatic reasons, if nothing else.

Even the best laid plans go awry though. The point is even if they pragmatically design it to not kill indiscriminately, bugs and glitches happen. The technology isn't all the way there yet and putting the ability to kill in the machine body of something that cannot understand context is a terrible idea. It's not that the military wants to indiscriminately kill everything, it's that they can't possibly plan for problems in the code they haven't encountered yet.

I feel like it's ok to skip to optimizing the autonomous drone-killing drone.

You'll want those either way.

If entire wars could be fought by proxy with robots instead of humans, would that be better (or less bad) than the way wars are currently fought? I feel like it might be.

You're headed towards the Star Trek episode "A Taste of Armageddon". I'd also note, that people losing a war without suffering recognizable losses are less likely to surrender to the victor.

Other weapons of mass destruction, like biological and chemical weapons, have been successfully avoided in war; this should be classified exactly the same.

For everyone who’s against this, just remember that we can’t put the genie back in the bottle. Like the A Bomb, this will be a fact of life in the near future.

All one can do is adapt to it.

There is a key difference though.

The A-bomb wasn't a technology that, as the arms race advanced, would develop the capacity to be anywhere from a conscientious objector to a usurper.

There's a prisoner's dilemma to arms races that in this case is going to lead to world powers effectively paving the path to their own obsolescence.

In many ways, that's going to be uncharted territory for us all (though not necessarily a bad thing).

I hope they put in some failsafe so that it cannot take action if the estimated casualties would put humans below a minimum viable population.

There is no such thing as a failsafe that can't fail itself

Yes there is; that's the very definition of the word.

It means that the failure condition is a safe condition. Like fire doors that unlock in the event of a power failure: you need electrical power to keep them in the locked position; their default position is unlocked, even if they spend virtually no time in their default position. The default position of an elevator is stationary and locked in place; if you cut all the cables, it won't fall, it'll just stay still until rescue arrives.

I mean, in industrial automation we talk about safety ratings. It isn't that rare that I put together a system that would require two one-in-a-million events, independent of each other, to happen at the same time. That's pretty good, but I don't know how to translate that to AI.
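The two-independent-events point is just probability multiplication; a quick sketch (numbers illustrative, and real safety ratings like SIL levels involve much more than this):

```python
# Two independent 1-in-a-million failure events must coincide for the
# unsafe outcome, so their probabilities multiply (independence assumed).
p_event = 1e-6
p_unsafe = p_event * p_event  # roughly 1e-12: one in a trillion
```

The whole argument rests on the independence assumption; a common cause (shared power supply, shared software bug) collapses the two events into one and brings you back to one-in-a-million.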

Put it in hardware. Something like a micro-explosive on the processor that requires a heartbeat signal to reset a timer. Another good one would be to not allow them to autonomously recharge, and to require humans to connect them to power.

Both of those would mean that any rogue AI would be eliminated one way or the other within a day
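The heartbeat idea above can be sketched in software (names, timings, and the simulated "trip" are all illustrative; the comment's point is that the real thing would live in tamper-resistant hardware, not code the AI could patch):

```python
import time

class Watchdog:
    """Trips a (simulated) kill switch if no heartbeat arrives before the deadline."""

    def __init__(self, timeout_s: float):
        self.timeout_s = timeout_s
        self.deadline = time.monotonic() + timeout_s
        self.tripped = False

    def heartbeat(self) -> None:
        # Only a human operator is supposed to call this; it resets the timer.
        self.deadline = time.monotonic() + self.timeout_s

    def check(self) -> bool:
        if time.monotonic() > self.deadline:
            self.tripped = True  # in hardware: fire the charge / cut power
        return self.tripped

wd = Watchdog(timeout_s=0.05)
wd.heartbeat()
print(wd.check())   # heartbeat arrived in time -> False
time.sleep(0.06)
print(wd.check())   # deadline missed -> True, and it stays tripped
```

The design choice that makes this fail-safe rather than fail-deadly is that the *absence* of the signal triggers shutdown, so jamming, capture, or a crashed controller all default to "off".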

Of course they will, and the threshold is going to be 2 or something like that, it was enough last time, or so I heard

Whoops. Two guys left. Naa, that's enough to repopulate Earth.

"Well, what do you say, Aron, wanna try to re-populate?" "Sure, James, let's give it a shot."

It will be fine. We can just make drones that can autonomously kill other drones. There is no obvious way to counter that.

Cries in Screamers.

The only fair approach would be to start with the police instead of the army.

Why test this on everybody else except your own? On top of that, AI might even do a better job than the US police

But that AI would have to be trained on existing cops, so it would just shoot every black person it sees

My point being that there would be more motivation to filter Derek Chauvin type of cops from the AI library than a soldier with a trigger finger.

Well that's a terrifying thought. You guys bunkered up?

It's not terrifying whatsoever. In an active combat zone there are two kinds of people - enemy combatants and allies.

You throw an RFID chip on allies and boom, you're done.

Civilians? Never heard of 'em!

The vast majority of war zones have 0 civilians.

Perhaps your mind is too caught up in the Iraq/Afghanistan occupations.

I think you’re forgetting a very important third category of people…

I am not. Turns out you can pick and choose where and when to use drones.

which is why the US military has not ever bombed any civilians, weddings, schools, hospitals or emergency infrastructure in living memory 😇🤗

They chose to do that. You're against that policy, not drones themselves.

Preeeetty sure you are. And if you can, you should probably let the US military know they can do that, because they haven’t bothered to so far.

These are very different drones. The drones you're thinking of have pilots. They also minimize casualties - civilian and non - so you're not really mad at the drones, but at the policy behind their use. Specifically, when air strikes can and cannot be authorized.

So now you acknowledge that third type of person lol. And that’s the thing about new drones, it’s not great that they can authorize themselves lol.

And that’s the thing about new drones, it’s not great that they can authorize themselves lol

I very strongly disagree with this statement. I believe a drone "controller" attached to every unit is a fantastic idea, and that drones having a minimal capability to engage hostile enemies without direction is going to be hugely impactful.

Oh yes it’ll be impactful, I don’t think anyone can argue that. Horrifyingly so.

I don't think it's horrifying to have my nation's army better able to compete on a battlefield.

I'm sorry, I can't get past the "autonomous AI weapons killing humans part"

That's fucking terrifying.

This is the best summary I could come up with:


The deployment of AI-controlled drones that can make autonomous decisions about whether to kill human targets is moving closer to reality, The New York Times reported.

Lethal autonomous weapons, which can select targets using AI, are being developed by countries including the US, China, and Israel.

The use of the so-called "killer robots" would mark a disturbing development, say critics, handing life and death battlefield decisions to machines with no human input.

"This is really one of the most significant inflection points for humanity," Alexander Kmentt, Austria's chief negotiator on the issue, told The Times.

Frank Kendall, the Air Force secretary, told The Times that AI drones will need to have the capability to make lethal decisions while under human supervision.

The New Scientist reported in October that AI-controlled drones have already been deployed on the battlefield by Ukraine in its fight against the Russian invasion, though it's unclear if any have taken action resulting in human casualties.


The original article contains 376 words, the summary contains 158 words. Saved 58%. I'm a bot and I'm open source!

I'm guessing their argument is that if they don't do it first, China will. And they're probably right, unfortunately. I don't see a way around a future with AI weapons platforms if technology continues to progress.

Netflix has a documentary about it, it's quite good. I watched it yesterday, but forgot its name.

It's a 3 part series. Terminator I think it is.

Don't forget the follow-up, The Sarah Connor Chronicles. An amazing sequel to a nice documentary.

Does that have a decent ending or is it cancelled mid-story?

What’s the opposite of eating the onion? I read the title before looking at the site and thought it was satire.

Wasn’t there a test a while back where the AI went crazy and started killing everything to score points? Then, they gave it a command to stop, so it killed the human operator. Then, they told it not to kill humans, and it shot down the communications tower that was controlling it and went back on a killing spree. I could swear I read that story not that long ago.

It was a nothingburger. A thought experiment.

https://www.reuters.comarticle/idUSL1N38023R/

The link was missing a slash: https://www.reuters.com/article/idUSL1N38023R/

This is typically how stories like this go. Like most animals, humans have evolved to pay extra attention to things that are scary and give inordinate weight to scenarios that present danger when making decisions. So you can present someone with a hundred studies about how AI really behaves, but if they've seen the Terminator that's what sticks in their mind.

Even the Terminator was the byproduct of this.

In the 50s/60s when they were starting to think about what it might look like when something smarter than humans would exist, the thing they were drawing on as a reference was the belief that homo sapiens had been smarter than the Neanderthals and killed them all off.

Therefore, the logical conclusion was that something smarter than us would be an existential threat that would compete with us and try to kill us all.

Not only is this incredibly stupid (i.e. compete with us for what), it is based on BS anthropology. There's no evidence we were smarter than the Neanderthals, we had cross cultural exchanges back and forth with them over millennia, had kids with them, and the more likely thing that killed them off was an inability to adapt to climate change and pandemics (in fact, severe COVID infections today are linked to a Neanderthal gene in humans).

But how often do you see discussion of AGI as being a likely symbiotic coexistence with humanity? No, it's always some fearful situation because we've been self-propagandizing for decades with bad extrapolations which in turn have turned out to be shit predictions to date (i.e. that AI would never exhibit empathy or creativity, when both are key aspects of the current iteration of models, and that they would follow rules dogmatically when the current models barely follow rules at all).

That highly depends on the consequences of a failure. Like, you don't test much if you program a Lego car, but you do test everything very thoroughly if you program a satellite.

In this case the amount of testing needed to allow a killerbot to run unsupervised will probably be so big that it will never be even half done.

LLM "AI" fans thinking "Hey, humans are dumb and AI is smart so let's leave murder to a piece of software hurriedly cobbled together by a human and pushed out before even they thought it was ready!"

I guess while I'm cheering the fiery destruction of humanity I'll be thanking not the wonderful being who pressed the "Yes, I'm sure I want to set off the antimatter bombs that will end all humans" but the people who were like "Let's give the robots a chance! It's not like the thinking they don't do could possibly be worse than that of the humans who put some of their own thoughts into the robots!"

I just woke up, so you're getting snark. *makes noises like the snarks from Half-Life* You'll eat your snark and you'll like it!

Well, Ultron is inevitable.

Who we got for the Avengers Initiative?

Ultron and Project Insight. It's like the people in charge watched those movies and said, "You know, I think Hydra had the right idea!"

We've been letting other humans decide since the dawn of time, and look how that's turned out. Maybe we should let the robots have a chance.

I'm not expecting a robot soldier to rape a civilian, for example.

The sad part is that the AI might be more trustworthy than the humans being in control.

No. Humans have stopped nuclear catastrophes caused by computer misreadings before. So far, we have a way better decision-making track record.

Autonomous killings is an absolutely terrible, terrible idea.

The incident I'm thinking about is geese being misinterpreted by a computer as nuclear missiles and a human recognizing the error and turning off the system, but I can only find a couple sources for that, so I found another:

In 1983, a computer thought that the sunlight reflecting off of clouds was a nuclear missile strike and a human waited for corroborating evidence rather than reporting it to his superiors as he should have, which would have likely resulted in a "retaliatory" nuclear strike.

https://en.m.wikipedia.org/wiki/1983_Soviet_nuclear_false_alarm_incident

As faulty as humans are, it's as good a safeguard against tragedy as we have. Keep a human in the chain.

Self-driving cars lose their shit and stop working if a kangaroo gets in their way; one day some poor people are going to be carpet bombed because of another strange creature no one ever really thinks about except locals.

Have you never met an AI?

Edit: seriously though, no. A big player in the war AI space is Palantir which currently provides facial recognition to Homeland Security and ICE. They are very interested in drone AI. So are the bargain basement competitors.

Drones already have unacceptably high rates of civilian murder. Outsourcing that still further to something with no ethics, no brain, and no accountability is a human rights nightmare. It will make the past few years look benign by comparison.

Yeah, I think the people who are saying this could be a good thing seem to forget that the military always contracts out to the lowest bidder.

Drone strikes minimize casualties compared to the alternatives - heavier ordnance on bigger delivery systems, or boots on the ground.

If drone strikes upset you, your anger is misplaced if you're blaming drones. You're really against military strikes at those targets, full stop.

When the targets are things like that wedding in Mali sure.

I think your argument is a bit like saying depleted uranium is better than the alternative, a nuclear bomb - when the bomb was never on the table for half the stuff depleted uranium is used for.

Boots on the ground or heavy ordnance were never a viable option for some of the stuff drones are used for.

Boots on the ground or heavy ordnance were never a viable option for some of the stuff drones are used for.

It was literally the standard policy prior to drones.

Eventually maybe. But not for the initial period where the tech is good enough to be extremely deadly but not smart enough to realize that often being deadly is the stupider choice.

If you program an AI drone to recognize ambulances and medics and forbid them from blowing them up, then you can be sure that they will never intentionally blow them up. That alone makes them superior to having a Mk. I Human holding the trigger, IMO.

Unless the operator decides hitting exactly those targets fits their strategy and they can blame a software bug.

And then when they go looking for that bug and find the logs showing that the operator overrode the safeties instead, they know exactly who is responsible for blowing up those ambulances.

And if the operator was commanded to do it? And to delete the logs? How naive are you to think this somehow makes war more humane?

Each additional safeguard makes it harder and adds another name to the eventual war crimes trial. Don't let the perfect be the enemy of the good, especially when it comes to reducing the number of ambulances that get blown up in war zones.

Israeli general: Captain, were you responsible for reprogramming the drones to bomb those ambulances?

Israeli captain: Yes, sir! Sorry, sir!

Israeli general: Captain, you're just the sort of man we need in this army.

Ah, evil people exist and therefore we should never develop technology that evil people could use for evil. Right.

Seems like a good reason not to develop technology to me. See also: biological weapons.

Those weapons come out of developments in medicine. Technology itself is not good or evil, it can be used for good or for evil. If you decide not to develop technology you're depriving the good of it as well. My point earlier is to show that there are good uses for these things.

Hmm... so maybe we keep developing medicine but not as a weapon and we keep developing AI but not as a weapon.

Or can you explain why one should be restricted from weapons development and not the other?

I disagree with your premise here. Taking a life is a serious step. A machine that unilaterally decides to kill some people with no recourse to human input has no good application.

It's like inventing a new biological weapon.

By not creating it, you are not depriving any decent person of anything that is actually good.

It's more like we're giving the machine more opportunities to go off accidentally or potentially encouraging more use of civilian camouflage to try and evade our hunter killer drones.

Right, because self-driving cars have been great at correctly identifying things.

And those LLMs have been following their rules to the letter.

We really need to let go of our projected concepts of AI in the face of what's actually been arriving. And one of those things we need to let go of is the concept of immutable rule following and accuracy.

In any real world deployment of killer drones, there's going to be an acceptable false positive rate that's been signed off on.

We are talking about developing technology, not existing tech.

And actually, machines have become quite adept at image recognition. For some things they're already better at it than we are.

I think people are forgetting that drones like these will also be made to protect. And I don't mean in a police kinda way.

But if, let's say, Argentina deployed these against Brazil, Brazil would have a defending lineup. They would fight out the war.

Then everyone watching will see this makes no sense to let those robots fight it out. Both countries will produce more robots until yeah.. No more wires and metal I guess.

Future = less real war, more cold war. Just like the A-bomb works today.

Then everyone watching will see this makes no sense to let those robots fight it out.

Just like how WWI was the War to End All Wars, right?

Future = less real war, more cold war. Just like the A-bomb works today.

Sorry, how is there less war now?