Fired Tesla Employee Posts New Video of Full Self-Driving Running Red Light

cyu@sh.itjust.works to Technology@lemmy.ml – 431 points
jalopnik.com

cross-posted from: https://derp.foo/post/81940

There is a discussion on Hacker News, but feel free to comment here as well.


We need to build special roads so self driving cars can navigate properly.

You could even connect the self-driving cars together; by letting the front car pull the others, they could save their batteries.

And with these "trains" of self driving cars pulling each other, you wouldn't have to build the self driving car roads very wide, they could just run on narrow "tracks" for the wheels.

Then we'd have more space for human stuff instead of car stuff like roads and parking lots everywhere.

He's done it again. Elon Musk is a god damn genius.

Would you consider also making an underground version?

Why not rebrand subways as the 𝕏-loop and convince Musk it was his idea? Then he can beg for billions in funding for them.

It is amazing how corpos try to reinvent shittier versions of trains.

This reminded me so much of this!

Adam Something now making videos for the New York Times? Good for him.


JFC that's frightening. It blew that red at about 30mph, didn't even really slow down except for the curve.

Because the car didn't recognize it as a red light, probably due to all the green lights that were facing a similar direction.

The issue is not the speed at which it took the turn, but that it cannot distinguish which traffic lights are for the lane the car is in.

If you've watched any of their recent AI talks, they talk a lot about these unusual and complex intersections; lane mapping in complex intersections is one of the hardest problems. Currently they're taking data from numerous cars to reconstruct intersections like this one, turning those reconstructions into simulations, and training on them so the system learns more and more complex situations.

There really are only 2 options.

Solve this with vision and AI, or solve this with HD maps.

But it has to be solved.
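
For a sense of what the HD-maps option means in practice: the map stores, for every lane, which specific signal head governs it, so the car only has to classify the state of that one light instead of guessing among everything visible. A toy sketch, with every name made up for illustration (this is nobody's real API):

HD_MAP = {
    # (intersection_id, lane_id) -> the signal head that governs that lane
    ("intersection_042", "ne_left_turn"): "signal_head_7",
    ("intersection_042", "ne_through"): "signal_head_3",
}

def governing_signal(intersection_id, lane_id):
    # Return the one signal head perception should classify, or None if
    # this intersection isn't mapped (fall back to vision-only logic).
    return HD_MAP.get((intersection_id, lane_id))

The usual objection is that HD maps go stale the moment a city rewires an intersection, which is presumably part of why Tesla has bet on the vision side instead.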

If it sees red and green, it should take the safe option and stop until it is sure or the driver takes over.

It does take the safe option if it's unsure, but for whatever reason that failed here; it seemed sure.

I've had the car slow down in unsure situations before, so it can and does.

It just got this one very wrong for some reason.


Man, Hacker News is full of people criticizing the poster, saying he should have disengaged the system so it learns, completely missing the point that FSD should not be considered safe.

Eech. The comments under the original tweet are rancid. Twitter is really Musk town now.

To be fair, it's a messy intersection with lots of traffic lights; I'm struggling to understand which one is the one to look at. However, I'm finding it hard to believe Tesla actually has the skills to unbeta this shit hole.

That’s the thing, if FSD isn’t advanced enough to handle tricky intersections no matter the circumstance, then it’s not ready for deployment.

So if you don't understand the lights, do you blast through, or do you slow down?

if decision.confidence < CONFIDENCE_THRESHOLD:
    wheel.give_to(JESUS)
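
Joking aside, the fail-safe behavior people describe upthread would look something like this. A minimal sketch, with made-up names and an illustrative threshold, not anyone's actual code:

from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.9  # illustrative value

@dataclass
class SignalEstimate:
    state: str         # "red", "yellow", "green", or "unknown"
    confidence: float  # classifier confidence in [0, 1]

def choose_action(estimate):
    # Conflicting or low-confidence detections get the safe option:
    # slow down and hand control back to the driver.
    if estimate.state == "unknown" or estimate.confidence < CONFIDENCE_THRESHOLD:
        return "decelerate_and_request_takeover"
    if estimate.state in ("red", "yellow"):
        return "stop_at_line"
    return "proceed"

The worrying part of the video is the last branch: the car apparently landed in "proceed" with high confidence on the wrong light, and no threshold catches a confidently wrong answer.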

Definitely an unusual intersection where one street looks like a diagonal merge into another, but the stoplight placement is bizarre as the driver can see two different light directions at the same time coming up on the approach.

So sick of shit like this getting posted. Of course the software is not perfect. There are so many warnings that it is not independent of driver intervention, it's crazy. Yet here we are with the entire internet hating on Musk so much that we have to tear down the evolution of self-driving cars, which is arguably the most complicated computing and programming problem in history. Bring on the downvotes, but for the record: I think Musk is a douchebag, but I can separately appreciate the effort involved in the Herculean task of programming cars that drive themselves.

Yet so many Tesla owners still think it's self-driving. Maybe because that's how it's been marketed for years.

https://files.catbox.moe/vcwliq.png

Do you get it, now?

Oh, I definitely get it - it doesn't matter how you think it's marketed. Only an idiot would think it could drive completely independent of human input.

My Uber was a Tesla once. The guy was convinced he could text and drive just fine on the highway. Looked at me like I was a total Karen. The average person is an idiot. That's the crux of the issue.

OK, I am all in on the Elon hate, but I know you're lying, because I have a friend with Full Self-Driving, and if you so much as look at the person in the passenger seat, the car flips a shit and will even go as far as to disable Full Self-Driving for the rest of your trip.

Looks like cameras were introduced in 2021. When was my incident?

Again, the in-cabin camera monitors you and makes sure you are paying attention to the road. Otherwise, how can Tesla be held responsible for idiots? Remember that the idea is to DEVELOP self-driving software so that the roads can be SAFER (not 100% safe, but safer) from idiot humans.

That's the first time you mentioned a camera; there is no "again".

But what does that change if it's still being advertised as self-driving?

People are idiots. They rationalize the safety features, e.g. "oh, it's not yet approved by regulators, so they must use the driver cam, but Musk has said it's self-driving, so it's okay".

Remember that the idea is to DEVELOP self-driving software so that the roads can be SAFER (not 100% safe, but safer) from idiot humans.

Umm, what? The idea is to turn a profit for shareholders. Musk is not Tony Stark. In fact, Tesla is now behind other self-driving platforms because Musk does not prioritize safety.
https://www.businessinsider.com/elon-musk-demanded-cameras-over-radar-in-self-driving-cars-nytimes-2021-12


I think it's not a case of the software not being perfect, but of it actively breaking in live environments, where it is amazingly critical that it not break. If that is an issue, then they need to get to a level of confidence where they don't have to worry about it breaking, which Tesla apparently is not at currently.

Yet the software disengages if it detects that you are not paying attention. So in reality, when it's engaged there are actually TWO drivers. How can anyone argue that's less safe than ONE driver?


He was prolly fired bc he couldn't program the thing to stop at a red light.

Also, who knows when this footage was taken, or whether it was just test footage of a bug that has since been ironed out.

Nice try Elon.

But legit, do you have any information to refute what I said?

Or you could - oh, I don't know - read the article you are commenting on... it says he was a test operator and not a programmer.

Oh lol, well then yeah, this is like releasing footage of a half-baked game and claiming it's buggy. Of course it is.

That's not how burden of proof works.

Do you have information to back up what you said?

Just the fact that you think programming a car to stop at a red light is a one-man task is enough to show how little you know about what you're talking about.

If you had read the article you'd know his job was "advanced driver assistance systems test operator". His job was to test the cars, not program them.