Tesla self-drivers recalled

Planes can take off, fly, and land themselves. Pilots are there for looks and to take over in the event something catastrophic happens. But ultimately the planes can do it all on their own. A root cause of the Asiana Airlines crash at SFO a few years ago was that the pilot decided to try to land the plane himself rather than let the computer do it.

Full self-driving is here to stay… like it or not. And it won’t just be Tesla.

 
who is to blame in an accident when the driver is not actually driving? i have a hard time wrapping my head around that scenario.
 
who is to blame in an accident when the driver is not actually driving? i have a hard time wrapping my head around that scenario.

The person in the driver seat is responsible for control of the car at all times.

It's only as hard as you want to make it.
 
I remain a "skeptic" about "when" self-driving gets here .. not "if". I feel people are still aiming for/envisioning this capability rather than delivering it.

Who/what is driving these cars below?

Obviously we ain't nearly ready for NO steering wheel/control console .. but ultimately I think that's what OTHERS think is coming/desired someday.

(images: driverless concept-car interiors with no steering wheel or control console)


My last check on the technology was two years ago, and from what I know, the "trolley" problem wasn't solved.

Do you let the CAR (its AI system) decide to "hit the child that ran in front of it", OR "wreck itself to avoid hitting the child but maybe injure occupants"?

Before that decision, is the AI/electronics even capable of the situational awareness (the vision systems, the integration with vehicle systems, etc.) to even make the decision "hurt the pedestrian" or "hurt the occupant"?

Maybe it's NOT binary, but sometimes it could be that kind of decision - the Trolley Problem. Someone/something is standing by the lever .. hurt five, or hurt one?




Then you have "whose fault" in any accident. How is that risk priced/evaluated for insurance/litigation? More questions for me than answers.

Meantime, I am more likely to own an electric before I shuffle off this mortal coil. I NEVER plan to let the car do the driving as today's roads are designed. Practically that means NEVER at my age. I want it under MY control.

Musk may have a point about "recall". Regulators' choices may NOT yet distinguish between "software downloads" as fixes vs. taking the car into a repair facility to replace a "hard" part, as has been the case for 100 years.

I am curious to see that distinction get worked out IF it is not already. It will certainly need to be clear .. I think a software download IS a "recall" event .. the fix could possibly be done, however, while the car sits in your driveway.
 
My last check on the technology was two years ago, and from what I know, the "trolley" problem wasn't solved.

Do you let the CAR (its AI system) decide to "hit the child that ran in front of it", OR "wreck itself to avoid hitting the child but maybe injure occupants"?

Before that decision, is the AI/electronics even capable of the situational awareness (the vision systems, the integration with vehicle systems, etc.) to even make the decision "hurt the pedestrian" or "hurt the occupant"?

Maybe it's NOT binary, but sometimes it could be that kind of decision - the Trolley Problem. Someone/something is standing by the lever .. hurt five, or hurt one?
I'm skeptical it ever will be solved.

Hypothetically, a deer crosses the road and the car can either a) swerve and maybe hit a tree or b) try to stop and maybe hit the deer. For it to even be a trolley problem, we're assuming the car can accurately distinguish between "hard" and "soft" obstacles AND predict the outcome of a collision with either one. Sure, AI can somewhat accurately tell if something is a deer or not, but does it know whether something is a movable or immovable object? And besides, what would the dividing line be, "value"-wise?

"A telephone pole will stop and wreck the car but you can plow through a field of bunnies without damage"

I can't imagine a software engineer will ever write a program along those lines.
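To make the objection concrete, here is a minimal sketch (every label and cost number is invented purely for illustration, not from any real system) of the kind of "value" table and swerve-or-brake rule such a program would need. The point of writing it out is that the table itself is the indefensible part:

```python
# Hypothetical sketch of a "value"-based swerve decision.
# All obstacle labels and costs below are made up for illustration.

# Invented cost table: higher number = assumed worse outcome if struck.
OBSTACLE_COST = {
    "pedestrian": 1000,
    "telephone_pole": 200,  # "hard" obstacle: wrecks the car, risks occupants
    "deer": 50,
    "bunny": 1,
}
DEFAULT_COST = 100  # anything the perception system can't classify

def choose_action(ahead: str, swerve_path: str) -> str:
    """Pick whichever path's obstacle carries the lower assumed cost."""
    stay_cost = OBSTACLE_COST.get(ahead, DEFAULT_COST)
    swerve_cost = OBSTACLE_COST.get(swerve_path, DEFAULT_COST)
    return "swerve" if swerve_cost < stay_cost else "brake_straight"

# Deer ahead, tree (treated like a pole) on the swerve line:
print(choose_action("deer", "telephone_pole"))  # brake_straight
```

Even this toy version silently assumes perfect classification, perfect outcome prediction, and an agreed-upon ranking of lives and property - which is exactly the program the post above doubts anyone will ever write.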
 
yes I agree -- I do NOT think the "AI" can develop the "situational awareness" to decide who gets hurt (its "sensing" won't be good enough).

let's say we CAN improve its sensing to equal or be better than humans; the rest of "situational awareness" is: does it "appreciate" the trolley dilemma?

trolley is gonna kill somebody. five people, or one. what if the "one" is somebody the lever puller knows and all the "five" are strangers?

with a vehicle, the situation could be the stationary "one" in the street with his back turned is the car owner's brother (would the AI even know that?), while the "five" is a crowd of strangers standing on the curb? Somebody is gonna get hit by the car. Who?

we can "scenario" dozens of situations. animate things vs inanimate. people vs animals. elderly vs kids. and so on.

how many people do we have to actually imperil in order to "teach" (code) the AI how to decide? even if teaching it were possible, do we want it (a machine WE created) to decide "life and death" for us (the creators)?

in any circumstances?

sticking to cars ... i don't think such a decision should be left up to the car. i, too, don't even think it can be done!

so why are we trying this? that's rhetorical. I have a theory for another thread at another time.

:)
 
Here’s my 2 cents: “Full Self Driving”. That’s where it’s currently at: in quotations. It’s a name, not an actual, true description of its capabilities.
Mr. Musk’s mouth writes a lot of checks his *** can’t cash.
It’ll be after we’re all dead and gone before a car that can drive anywhere, anytime without goofing up happens. If ever. They can’t even get a GPS to work right all the time. Or a self check out. Or a simple iPhone.
Myself, I won’t even have a car that limits my top speed or runs my throttle by wire through a computer. I buy and own my cars. They will bend to my will without question, immediately, for better or worse. But I’m an old curmudgeon and like homeostasis!
 
yes I agree -- I do NOT think the "AI" can develop the "situational awareness" to decide who gets hurt (its "sensing" won't be good enough).

let's say we CAN improve its sensing to equal or be better than humans; the rest of "situational awareness" is: does it "appreciate" the trolley dilemma?

trolley is gonna kill somebody. five people, or one. what if the "one" is somebody the lever puller knows and all the "five" are strangers?

with a vehicle, the situation could be the stationary "one" in the street with his back turned is the car owner's brother (would the AI even know that?), while the "five" is a crowd of strangers standing on the curb? Somebody is gonna get hit by the car. Who?

we can "scenario" dozens of situations. animate things vs inanimate. people vs animals. elderly vs kids. and so on.

how many people do we have to actually imperil in order to "teach" (code) the AI how to decide? even if teaching it were possible, do we want it (a machine WE created) to decide "life and death" for us (the creators)?

in any circumstances?

sticking to cars ... i don't think such a decision should be left up to the car. i, too, don't even think it can be done!

so why are we trying this? that's rhetorical. I have a theory for another thread at another time.

:)
All I can think of is the movie "I, Robot", where the robot decided who to save in the water.
 
The only way self driving can truly work is if ALL cars are self driving. At that point anything we know as an automobile (funny use of a word, since it never really was “autonomous”) will cease to exist.

I always come to the same question in this topic, ”why?”. Why do I need AI when I have my own intelligence. I honestly don’t see the need for it, nor the desire to let it control me.
 
so why are we trying this? that's rhetorical. I have a theory for another thread at another time.

:)
Too many computer geeks watched Knight Rider as kids.:p

I’m reminded of the scene in Jurassic Park where the driverless Ford Explorer stopped at the wrong moment. Not that we have to worry about man-eating dinosaurs (and the Explorer wasn’t really autonomous), but I can foresee self-driving vehicles crapping out with bad timing, like running in front of trains.


Musk may have a point about "recall". Regulators' choices may NOT yet distinguish between "software downloads" as fixes vs. taking the car into a repair facility to replace a "hard" part, as has been the case for 100 years.
I wonder what Musk would say…er, tweet…if Cadillacs with Super Cruise needed a major software fix and GM brass took exception to the word “recall.”
 
don't know Elon at all .. i imagine tho IF GM publicly said something, it wouldn't be as bombastic as Musk says things.

anyway, if GM spoke out, i'd say Elon would find some way to diss/throw shade at them as "dinosaurs waiting on the asteroid".
 