yes I agree -- I do NOT think the "AI" can develop the "situational awareness" to decide who gets hurt (its "sensing" won't be good enough).
let's say we CAN improve its sensing to equal or exceed a human's. the rest of "situational awareness" is: does it "appreciate" the trolley dilemma?
the trolley is gonna kill somebody. five people, or one. what if the "one" is somebody the lever puller knows and all the "five" are strangers?
with a vehicle, the situation could be: the stationary "one" in the street with his back turned is the car owner's brother (would the AI even know that?), while the "five" are a crowd of strangers standing on the curb. somebody is gonna get hit by the car. who?
we can "scenario" dozens of situations. animate things vs inanimate. people vs animals. elderly vs kids. and so on.
how many people do we have to actually imperil in order to "teach" (code) the AI how to decide? even if teaching it were possible, do we want it (a machine WE created) to decide "life and death" for us (the creators)?
in any circumstances?
sticking to cars ... I don't think such a decision should be left up to the car. and like you, I don't even think it can be done!
so why are we trying this? that's rhetorical. I have a theory for another thread at another time.