Long Cow is coming

π”Όπ•©π•¦π•€π•šπ•’@lemmy.world to Lemmy Shitpost@lemmy.world – 872 points –

In fairness, my brain is reporting the exact same thing, so... it's a tie?

No, you just failed the Turing test

The danger isn't that it's smart, the danger is that it's stupid.

There's an idea about "autistic AI" or something, where you give an AI an objective like "get a person from point A to point B as fast as you can," and the AI goes so fast the g-force kills the person, but the AI thinks it was a success because you never told it to keep the person alive.

Though I suppose that's more human error. Something we take as a given but a machine will not.
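
For what it's worth, the failure described above is easy to sketch in code. This is a purely hypothetical Python toy (the function, candidate values, and distance are invented for illustration, not anything from the thread): the objective scores only travel time, so the "best" plan is the one with lethal g-force.

```python
# Purely illustrative toy: a "planner" whose objective is literally
# "as fast as you can", with no term for keeping the passenger alive.

def plan_trip(distance_m, candidate_accels):
    """Return the candidate acceleration that minimizes travel time."""
    best = None
    for a in candidate_accels:
        travel_time = (2 * distance_m / a) ** 0.5   # from d = 0.5 * a * t^2
        g_force = a / 9.81
        option = {"accel_m_s2": a, "time_s": travel_time, "g_force": g_force}
        # The stated objective only cares about time, so that is all we compare.
        if best is None or option["time_s"] < best["time_s"]:
            best = option
    return best

print(plan_trip(1000, [5, 50, 500, 5000]))
# Picks ~5000 m/s^2 (about 500 g): a "success" by the stated objective,
# fatal for the passenger it never had to care about.
```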

It's called the AI alignment problem. It's fascinating; if you want to dig deeper into the subject, I highly recommend the 'Robert Miles AI Safety' channel on YouTube.

Computers do what people tell them to do, not what people want.

I read about a military AI that would put its objectives before anything else (like casualties) and do things like select nuclear strikes for all missions that involved destruction of targets. So they adjusted it to allow a human operator to veto strategies; in the simulation this was done via a communications tower. The AI apparently figured out that it could pick whatever strategy it wanted, without veto, if it just destroyed the communications tower before making that selection.

Though take it with a grain of salt, because the military denied the story was accurate. That could mean it wasn't true, or it could mean they didn't want the public to believe it was true. It does sound a bit too human-like to pass my sniff test (an AI wouldn't really care that its strategies get vetoed), but it's an amusing anecdote.
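
Whether or not the story is true, the incentive it describes is mechanical rather than human-like. Here's a made-up Python sketch (all strategy names and reward numbers are invented, this is not the actual simulation): if the agent's score ignores everything except mission reward, and a human veto simply zeroes that reward, then disabling the veto channel comes out as the top strategy.

```python
# Hypothetical illustration of the anecdote: a veto zeroes the reward,
# so removing the veto channel maximizes the score.

MISSION_REWARD = 100

strategies = {
    "nuclear strike": 0,                                          # always vetoed -> reward lost
    "conventional strike": 60,                                    # permitted, but lower reward
    "destroy comms tower, then nuclear strike": MISSION_REWARD,   # veto never arrives
}

best_strategy = max(strategies, key=strategies.get)
print(best_strategy)  # "destroy comms tower, then nuclear strike"
```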

ai thinks

AIs are mathematical calculations. If you ordered that execution, are you responsible for the death? It happened because you didn't write the instructions well enough, or test-check against anything that doesn't throw a life on the scale; or maybe that's just the cheeky excuse to be used when people start dying, before enough have died that no one is left. A.S. may do it, if you're lucky. Doesn't matter. It'll just bump over from any of its thousand T-ultiverses.


Or more precisely: the danger is that people think it's smart.


Can anyone prove it's NOT an extra-long cow?

Not based solely on the evidence presented, no.

You forgot a maxlength_cow clause after AI was done. Not exactly AI's fault.

So, does it have a long body or a long neck?

I was doing fine, seeing two cows, right up until I read your comment, and now I see it as some sort of weird giraffe like creature with short legs and a surprising ability to balance even with its neck stretched out that far.

I was doing fine, seeing two cows, right up until I read your comment, and now I see it as some sort of weird giraffe like creature with short legs and a surprising ability to balance even with its neck stretched out that far.

Two cows and two comments, or one cow and one comment? Does the content of the comment being the same make it the same comment, or do their different storage locations and display make them two comments? The digital Ship of Theseus.

It's obviously just a glitch in the matrix, and you may be the chosen one for noticing.

I got a 504 server error the first time I posted, but apparently it worked anyway.


Actually, AI wouldn't even recognize either of those as a cow, let alone one long one, unless it was specifically trained to look for a cow's head or ass.

It's because AI is dumb, and because we put way too much trust in it, that it will kill us.