Based on? Have you seen the progress in users YouTube videos?
It’s not there yet but I don’t see how it can’t work with vision only. It just has to be safer than human drivers.
FSD hardware and software just have to be good enough for the job.
And still we cause so many deaths because we are tired, distracted, or emotional when driving.
Autopilot is not FSD and I bet many of the deaths were caused by inattentive drivers.
Which other system has a similar architecture and similar potential?
Human drivers use vision only
Which are the unsolvable problems?
Which other system can drive autonomously in potentially any environment without relying on map data?
If merging data from different sensors increases complexity by a factor of 5, it’s just not worth it.
As soon as we have hard data from real-world use showing that FSD is safer than the average human driver, it would be unethical not to solve the regulatory and legal issues and deploy it at a larger scale to save human lives.
If a human driver causes a crash, the insurance pays. Why shouldn’t it pay if a computer caused the crash, as long as the computer drives more safely overall, even if only by, say, 10%?
FSD is still in development
Should be fine if the car reduces speed to account for the conditions. Just like a human driver does.
What makes you assume that a vision based system performs worse than the average human? Or that it can’t be 20 times safer?
I think the main reason to go vision-only is the software complexity of merging mixed sensor data. Radar or Lidar alone also have their limitations.
I wish it were a different company, or that Musk would sell Tesla. But I think they are the closest to reaching full autonomy. Let’s see how it goes when FSD launches this year.
I am not a fan of Tesla/Elon but are you sure that no human driver would fall for this?
Whataboutism is: “You said x, but what about y?” Which doesn’t make x any less valid or problematic.
The assumption that ML lacks reasoning is outdated. While it doesn’t “think” like a human, it learns from more scenarios than any human ever could. A vision-based system can, in principle, surpass human performance, as it has in other domains (e.g., AlphaGo, GPT, computer vision in medical imaging).
The real question isn’t whether vision-based ML can replace humans—it’s when it will reach the level where it’s unequivocally safer.