ajdelange
No, not weird at all. Even though it is called a test drive you aren't really testing anything - you are being given a demonstration. Real testing requires operating the car for thousands and thousands of hours under a variety of carefully controlled conditions.
Suppose I asked you to flip a coin 10 times in a row and repeat the experiment 20 times, then tell me how many times you got 10 heads in a row. You'd probably tell me not to waste your time because there is no way you can get 10 heads in a row. But there is. The probability that you will on a particular run is 1/1024, a small but finite number. The probability of getting 10 heads one or more times in 20 experiments is actually 1.9%. If you repeated the experiment 10,000 times you would get 10 in a row approximately 10 times.
So it is with the autopilot. You drove it a few (say ten) miles. If the probability of the autopilot doing something alarming in each mile driven is 1/1024, the probability that you will have one or more alarming events is 0.97%. But if you drive 10,000 miles it is 99.994%, i.e. almost certain. In a 500 mile trip it would be 38.6%. This is difficult to explain to the layman, so no more mention of probabilities. Were I you I would just consider that there are a fair number of people out there driving Teslas in real world conditions who are reporting alarming actions on the part of the autopilot, and conclude that at least some of them are not making this up to advance FUD.
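If you want to check those figures yourself, here is a minimal sketch (Python is my choice here, not anything from the post) of the calculation behind them: the probability of seeing at least one rare event in n independent trials with per-trial probability p is 1 - (1 - p)^n. The per-mile probability of 1/1024 and the "ten miles" trip length are the assumed inputs from the paragraph above.
```python
# Probability of at least one "rare" event in n independent trials,
# each with per-trial probability p: 1 - (1 - p)**n
def at_least_one(p: float, n: int) -> float:
    return 1.0 - (1.0 - p) ** n

p = 1 / 1024  # chance of 10 heads in one 10-flip run, or one alarming event per mile (assumed)

print(f"20 coin runs:  {at_least_one(p, 20):.1%}")     # ~1.9%
print(f"10 miles:      {at_least_one(p, 10):.2%}")     # ~0.97%
print(f"500 miles:     {at_least_one(p, 500):.1%}")    # ~38.6%
print(f"10,000 miles:  {at_least_one(p, 10000):.3%}")  # ~99.994%
```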
I will also note that Tesla does not map the region (that's Waymo's approach). Tesla collects data from the cameras and uses that to train the neural network. Tesla knows where it is, of course, through the GPS receiver. It could use that knowledge. For example, everyone knows that the first rule for driving in Boston is "Never make eye contact". I wonder how you program that into an autopilot.