|
|
Now I want to see how they show why their little monster did what it did, how it will react in other situations, and how to 'cure' it of its delusions.
The AI fans always forget that even the dumbest human driver has a few million years of evolution behind him. How can they expect to compete in the same league with x hours of training and 'testing'?
I have lived with several Zen masters - all of them were cats.
His last invention was an evil Lasagna. It didn't kill anyone, and it actually tasted pretty good.
|
|
|
|
|
They haven't released the video footage, but the reports say it was her fault - she walked out in front of it so close that nothing could have prevented the collision, human or robotic driver: Tempe police chief: Uber 'likely' not at fault in fatal self-driving car crash - Business Insider[^]
And you can be sure that there is more telemetry and recorded info in this accident than in any previous death-by-driving case, with the possible exception of Ayrton Senna...
Bad command or file name. Bad, bad command! Sit! Stay! Staaaay...
AntiTwitter: @DalekDave is now a follower!
|
|
|
|
|
That may be. There are certainly hopeless situations. Still, no telemetry in the world is going to tell us why the AI did or did not do something. Would you like to have to make any guarantees for the behavior of your contraption? They don't have to become Terminators to be dangerous.
I have lived with several Zen masters - all of them were cats.
His last invention was an evil Lasagna. It didn't kill anyone, and it actually tasted pretty good.
|
|
|
|
|
CodeWraith wrote: Would you like to have to make any guarantees for the behavior of your contraption?
Do you know what the "emergency brake" used to be for? Do you know why it is now the "parking brake" instead?
Do you know what anti-lock brakes are for? Do you know why they are safer, for most people, than the alternatives of the past?
What about when cars will not stop? Apparently this happens more often than I thought, because I found the following while looking for the other example that I know exists.
Driver was unable to stop or slow down his car[^]
So perhaps you don't drive at all, but everyone else already relies on the behavior of their "contraption".
|
|
|
|
|
OriginalGriff wrote: nothing could have prevented the collision, human or robotic driver
Yeah, well, I would dispute that. We've all been in that situation: driving along where nobody is in front of you, but they are near enough that you keep your eyes open - people walking close to the edge of the road, kids playing football in front of their house, dog walkers with the dog jumping about ...
If this woman "walked out in front of it so close that nothing could have prevented the collision", it seems likely she was already close to the edge of the road. Most humans would (1) gently nudge the car away from that lane/road edge before reaching her (I'm sure in AZ the lanes are wide enough), and (2) pay extra attention to watch for a change of direction.
There's more to driving than reacting to what does happen: it's being ready for what else can happen. Yes, some things are completely unexpected, but where you can anticipate these possibilities you can and should be prepared. You see a drunk on the road: do you pass within inches, or wait till a nice big gap appears?
|
|
|
|
|
You are describing a good human driver. What about the many deaths caused daily by distracted, dangerous drivers?
By the time this technology makes it to the mainstream, all the bugs will be sorted out and the roads will be a far safer place; current dangerous and careless driving offenses will no longer exist.
|
|
|
|
|
KennethKennedy wrote: By the time this technology makes it to the mainstream, all the bugs will be sorted out
Really? How will they do that? How do you unit test the AI? How do you prove that your AI can deal with any circumstances a very complex world throws at it?
Look at how miserably we fail at testing normal code made up of simple, limited functions. Where do you get the optimism that this will miraculously work for something as complex as an AI?
I have lived with several Zen masters - all of them were cats.
His last invention was an evil Lasagna. It didn't kill anyone, and it actually tasted pretty good.
|
|
|
|
|
CodeWraith wrote: Really? How will they do that? How do you unit test the AI? How do you prove that your AI can deal with any circumstances a very complex world throws at it?
How do we do it with human drivers, my friend? We don't, we train him or her ... and pray.
|
|
|
|
|
Tomaž Štih wrote: We don't, we train him or her ... and pray.
That's not true. At least around here they make sure that you are equipped with the abilities of a few hundred million years of evolution before they even let you near a car with a driving instructor. Sure beats training a thing that has no idea what it is doing - or why it is supposed to do it. Did they really have to teach you how to look around and make sense of what you see?
I have lived with several Zen masters - all of them were cats.
His last invention was an evil Lasagna. It didn't kill anyone, and it actually tasted pretty good.
|
|
|
|
|
That didn't help us with chess and Go. In a battle between engineering and evolution: in the short term my bet is on evolution, and in the long term on engineering.
Resistance is futile, robots will assimilate you AND your cat.
|
|
|
|
|
Evolution doesn't have anything to do with the ability to brake in time when a pedestrian jumps in your way in an unexpected place while you're controlling a 1500 kg mobile object at 38 mph - or pretty much any other situation that we have to deal with when controlling a car. If anything, the instincts that evolution gave us will make us behave inappropriately.
If anything, most of evolution taught us that it's best to run over any pedestrian who's stupid enough to run into our path - one less competitor on our hunt for food! In that respect, most autonomous systems are already better than that before they even start training!
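To put a rough number on the 1500 kg at 38 mph scenario, here is a back-of-the-envelope sketch. The 8 m/s² deceleration is an assumed typical dry-road hard-braking figure, purely illustrative, and reaction time is ignored here:

```python
# Rough stopping-distance sketch for the 1500 kg / 38 mph scenario above.
# Assumption (illustrative only): hard braking on dry asphalt at ~8 m/s^2.
speed_mph = 38
speed_ms = speed_mph * 0.44704              # convert mph to m/s (~17 m/s)
decel = 8.0                                  # assumed deceleration in m/s^2
braking_distance = speed_ms ** 2 / (2 * decel)  # v^2 / (2a)
print(f"At {speed_ms:.1f} m/s, braking alone takes ~{braking_distance:.1f} m")
```

Around 18 metres of pure braking distance, before any reaction time is added - nothing in our evolutionary past prepared us for judging that.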
GOTOs are a bit like wire coat hangers: they tend to breed in the darkness, such that where there once were few, eventually there are many, and the program's architecture collapses beneath them. (Fran Poretto)
|
|
|
|
|
Stefan_Lang wrote: Evolution doesn't have anything to do with the ability to brake in time when a pedestrian jumps in your way
Indeed? So you needed someone to teach you how to detect the pedestrian jumping your way? You did not have a naturally evolved image processing system (among other things) in that grey matter between your ears? And a neural net that is by orders of magnitude smaller and with only a tiny fraction of the training time (no matter how you measure it) will do the job better?
I wish I could share your optimism.
I have lived with several Zen masters - all of them were cats.
His last invention was an evil Lasagna. It didn't kill anyone, and it actually tasted pretty good.
|
|
|
|
|
I did also say at 38 mph. Typically a human moving at 38 mph through pretty much all of evolution was only seeing one thing, and that is the ground he was about to hit - not the kind of stuff going into the genes except into the genes of the onlookers. If evolution taught us anything it is that moving at 38 mph is fatal.
Now, of course, if your forefathers were running through the jungle they certainly did learn to react to a creature moving into their path. But, depending on the number of claws and teeth (or raised clubs) of that creature, stopping might not have been the preferred type of reaction.
I'm not saying that this is not an important bit of information when deciding that you need to slow down when something moves into your path, but it's so different from the evolutionary training that the lesson learned can pretty much be reduced to: if something moves into your path, slow down. And that is trivial to learn for any autonomous system, no matter how small.
In the case of this accident, this raises the question why the car's sensors did not detect the woman, or did not identify her as an actual obstacle. Apparently the driver didn't either, or at least not in time, and his millions of years of evolution didn't help him in any way there. But the car's systems should have been able both to detect the woman (using the LiDAR sensors) and to react to her (thanks to super-human reaction times). The investigation should focus on these questions.
GOTOs are a bit like wire coat hangers: they tend to breed in the darkness, such that where there once were few, eventually there are many, and the program's architecture collapses beneath them. (Fran Poretto)
|
|
|
|
|
Stefan_Lang wrote: I did also say at 38 mph. Typically a human moving at 38 mph through pretty much all of evolution was only seeing one thing, and that is the ground he was about to hit - not the kind of stuff going into the genes except into the genes of the onlookers. If evolution taught us anything it is that moving at 38 mph is fatal.
Now, of course, if your forefathers were running through the jungle they certainly did learn to react to a creature moving into their path. But, depending on the number of claws and teeth (or raised clubs) of that creature, stopping might not have been the preferred type of reaction.
I would say your well thought out logic is interfering with an ill-thought out rant.
|
|
|
|
|
That's exactly the reason why I am going to stick to driving my car myself rather than handing it over to an AI. If a fool suddenly jumps in front of my car, I am going to run over the guy. I don't want the AI to brake hard and send my head into the steering wheel.
|
|
|
|
|
Homo sapiens have only been around 200k years. We have only been using faster-than-human modes of transportation, starting with horses(?), for around 6k years. Regardless of all that, with a human you are still counting on that person's physical limitations (age, reaction time, visual acuity, etc.), their attention span, and the skills they have acquired to be a good driver. With an AI you have (hopefully) a system that pays attention 100% of the time, can aggregate and build upon the past experiences of multiple individual systems, and can have sensors that surpass what humans can see. Look at it this way: think of the quality of cars before robots were used in mainstream production. The tolerances were tightened and quality has improved by using them. Over time I would think we would get to a point where cars could talk to each other and even help avoid accidents altogether.
|
|
|
|
|
That's very nice, but falls short of the mark.
milo-xml wrote: you are still counting on that person's physical limitations (age, reaction time, visual acuity, etc), their attention span, and the skills they have acquired to be a good driver
Quite so. Since when is any AI capable of foreseeing future events by using experience? So far only we have been able to do that - not even our closest relatives can.
Here we have stretches of highway without any speed limit. I really enjoy a ride at the maximum speed my car is capable of, usually while keeping a good eye on what happens in the lanes to the right. Most people see you coming and wait until you have passed, but there is always a 'kamikaze' who pulls out right in front of your nose at a fraction of your current speed. An AI would not react to them until they actually pull out, but then it may already be too late. How do I notice them ahead of time? I don't know. It must be something in the way they behave prior to changing lanes, but I notice them and hit the brakes before they actually do it.
I have lived with several Zen masters - all of them were cats.
His last invention was an evil Lasagna. It didn't kill anyone, and it actually tasted pretty good.
|
|
|
|
|
CodeWraith wrote: How do I notice them ahead of time?
I would suspect that you see the person looking at the lane to see if there's room before moving over.
Normal human reaction time is around a quarter of a second. I think most of the self-driving cars are quite a bit less than that, although I don't have the numbers in front of me. Think of it this way, though: if that other car had AI, it would see you and not pull out in front of you, or at least speed up before doing that. You're looking at it as 'me'; try looking at it from a collective standpoint, and I think the advantages tip way toward the machine.
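To illustrate what reaction latency alone costs at speed, here is a quick sketch. The 38 mph figure is borrowed from the earlier posts and the 0.25 s human reaction time from above; the 0.05 s machine latency is a made-up illustrative number, not a measured spec of any real system:

```python
# Distance covered during reaction latency, before braking even starts.
# 0.25 s human figure is from the post above; 0.05 s machine latency is
# an assumed illustrative value, not a measured specification.
speed_ms = 38 * 0.44704                  # 38 mph in m/s (~17 m/s)
for label, latency in [("human (0.25 s)", 0.25), ("machine (0.05 s)", 0.05)]:
    print(f"{label}: travels {speed_ms * latency:.2f} m before braking starts")
```

Roughly four metres versus under one metre of "dead" travel - which can easily be the difference between a near miss and a collision.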
|
|
|
|
|
|
Exactly what I mean. It can only react to a situation, but possesses no foresight.
I have lived with several Zen masters - all of them were cats.
His last invention was an evil Lasagna. It didn't kill anyone, and it actually tasted pretty good.
|
|
|
|
|
CodeWraith wrote: Really? How will they do that? How do you unit test the AI? How do you prove that your AI can deal with any circumstances a very complex world throws at it?
You must drive somewhere different, some other country even, than the one I live in.
There's nothing like watching, every single time there is a major storm, the videos of the many cars crashed off the side of the road because people failed at driving in those conditions. Not to mention the multiple-car pile-ups where people were going too fast for conditions.
Then there are the accidents where someone hits the wrong pedal and ends up inside a building. Or the actual clubs whose sole purpose is to race, actually race, down normal streets late at night. Hundreds of people show up at these meet-ups.
Not to mention drunk, high, medicated (prescribed, by the way), and falling-asleep drivers, and a huge variety of other distractions.
I once was on the highway and looked over to see a car with no driver. Turned out the driver was completely prone reaching for something in the passenger seat.
CodeWraith wrote: Look at how miserably we fail at testing normal code made up of simple, limited functions.
However that proves the very point. You are claiming that human programmers are fallible. But so are human drivers. But the code IS tested. Are you claiming that every human driver is tested as extensively? Especially on an on-going basis?
|
|
|
|
|
No. All I am saying is that you are making a deal with the devil. The good part is that the devil likes to honor agreements to the letter, but usually in a way you are not going to like at all.
I have played enough with AI to tell you that exactly this is going to happen. It already happens in simple scenarios and complex real world scenarios just beg for this behavior. It's the very nature of any AI to explore the possibilities within the frame you have set with your directives.
I wish you good luck when someone wants to hold you accountable for the actions of your product and you have to explain everything to a judge.
I have lived with several Zen masters - all of them were cats.
His last invention was an evil Lasagna. It didn't kill anyone, and it actually tasted pretty good.
|
|
|
|
|
CodeWraith wrote: No. All I am saying is that you are making a deal with the devil. The good part is that the devil likes to honor agreements to the letter, but usually in a way you are not going to like at all.
I doubt that. For example, I would expect that a self-driving car would always stop at a red light. Now, I always attempt to stop at red lights. Always. Very occasionally that is a bad decision, because I end up sliding through the intersection on the ice. And that is something that I am very ill-equipped to deal with. I suspect a self-driving car would be better able to handle it.
CodeWraith wrote: I have played enough with AI to tell you that exactly this is going to happen
You mean versus my last three cars that were totaled by the illegal actions of other drivers? So the AI is not going to be obeying the traffic laws and would not be better capable of detecting and avoiding collisions?
CodeWraith wrote: and you have to explain everything to a judge.
Versus the multiple drivers whose cars have already unexpectedly accelerated or refused to stop?
Versus the drivers who are still driving with multiple DUI convictions? Versus the drivers whose licenses are suspended immediately by a judge and then who leave the court and get into their car and drive away?
|
|
|
|
|
I doubt that 'all' bugs will be sorted out, but that is not the point. At all. The point is that the system works better than most human drivers. Judging by the very few reports of autonomous vehicles involved in accidents, these systems have already surpassed that mark!
I'm sure that if, today, all vehicles were equipped with the latest autonomous systems, the number of accidents would be drastically reduced, and the main cause of the accidents still happening would be pedestrians, bikers, and other road users who are not equipped with such a system for whatever reason behaving in erratic ways.
The only good argument against such a step would be indications that autonomous systems can cause crashes among themselves. So far I am not aware of a single incident of that kind, but of course there are too few autonomous vehicles around for that to be a useful statement at this time.
GOTOs are a bit like wire coat hangers: they tend to breed in the darkness, such that where there once were few, eventually there are many, and the program's architecture collapses beneath them. (Fran Poretto)
|
|
|
|
|