
    Tesla FSD Update

    Jul 25, 2022 | Chloe Lacour

    Musk has personally confirmed that FSD Beta 10.11 contains architecture-level improvements, making it a major update.

    Moreover, Musk also said that because the new 10.11 capabilities are powerful enough, opening more FSD test slots will be considered in the future.

    From a broader perspective, Tesla has the fastest rate of progress, is the flag-bearer of the pure-vision approach, and has built the most complete data pipeline. How far its AI can go is a bellwether that consumers and the autonomous-vehicle industry alike are watching closely.

    Let's take a look at the specific progress of FSD, the leading representative of pure-vision autonomous driving.

    Eleven updates: why does Musk value this one the most?
    The 11 updates are:

    Upgraded modeling of lane geometry from dense raster ("point pack") to an autoregressive decoder.

    Improved the system's understanding of right-of-way (road category, drivable area) when maps are inaccurate or navigation fails. Road modeling, especially at forks, now relies entirely on neural-network predictions rather than map information.

    VRU (vulnerable road user) detection accuracy improved by 44.9%, greatly reducing false alarms for motorcycles, scooters, wheelchairs, and pedestrians in conditions such as rain and dappled road surfaces. This was achieved by enlarging the next-generation automatic recognizer's dataset, training previously frozen network parameters, and modifying the network loss function.

    Reduced VRU velocity-prediction error by 63.6% at very close range by introducing a new simulated adversarial high-speed VRU dataset. This improves Autopilot's control around fast-moving and cutting-in VRUs.

    Improved the vehicle's attitude on climbs, with stronger acceleration or braking at the start and end of a climb.

    Upgraded the static-obstacle perception network, sharpening the perception and recognition of obstacles around the vehicle.

    By increasing the dataset size by 14%, the error rate in recognizing the "parked" attribute of other vehicles on the road was reduced by 17%, and brake-light recognition accuracy also improved.

    Adjusted the loss function to improve autonomous driving in difficult scenarios: speed error in the "passable" state improved by 5%, and by 10% under high-speed conditions.

    Improved the detection and handling of doors opened on roadside vehicles.

    Optimized the body-control algorithm for simultaneous lateral and longitudinal acceleration and for bumpy roads, producing a smoother turning experience.

    Optimized Ethernet data transmission, improving the stability of the FSD UI visualization.

    All the updates can be roughly divided into two categories. The first category is the improvement of the passenger experience.

    Cornering stability, UI visualization, and climbing attitude, for example, are three such items.

    The second category is closely tied to the vehicle's own perception and decision-making, such as modeling intersections with neural-network predictions to reduce dependence on high-precision maps.

    In addition, the "phantom braking" problem users frequently reported at the end of last year was caused by inaccurate camera identification in the presence of interference.

    This update improves recognition accuracy for different targets and states by adding recognizers to the back-end AI neural network, enriching the datasets, and adjusting the loss functions.

    Progress in all of these capabilities, whether perception and decision-making or modeling, rests on the most basic capability of all: lane and target prediction.

    This is also the “architecture-level” change that Musk values most in this update:

    The modeling of the lane geometry was upgraded from a dense raster to an autoregressive decoder.

    What does that mean?

    "Raster" here recalls the optical gratings often used at ETC toll stations, which extract an object's contour features from the way it blocks light and are used to distinguish different vehicles.

    The dense-raster modeling Tesla refers to likewise lays a virtual dense grid over the image data, extracting a large number of feature points to reconstruct a digital model.

    The autoregressive decoder, by contrast, uses a Transformer to predict and connect lane points in vector space one by one.

    The so-called "vector space" lane refers to decomposing the overall situation of the lane into several key parameter information, such as width, material, color, lane line type, and so on.

    Why do this? Because humans can grasp the basic situation the moment they see a scene.

    But a neural network must first extract key parameters it can "understand". If each key parameter is treated as a vector, then all the information a lane contains forms a multi-dimensional space.
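
    The "vector space" idea above can be sketched as a simple data structure. This is an illustrative toy, not Tesla's actual schema; the field names and encodings are assumptions.

```python
from dataclasses import dataclass

# Hypothetical lane descriptor: each field is one dimension of the
# "vector space" described in the text. Field names and encodings are
# illustrative assumptions, not Tesla's actual representation.
@dataclass
class LaneDescriptor:
    width_m: float     # lane width in meters
    line_type: int     # assumed encoding: 0 = solid, 1 = dashed, 2 = double
    color: int         # assumed encoding: 0 = white, 1 = yellow
    curvature: float   # signed curvature of the centerline

    def as_vector(self) -> list[float]:
        # Flatten the descriptor into one point in the multi-dimensional
        # parameter space a network could predict directly.
        return [self.width_m, float(self.line_type),
                float(self.color), self.curvature]
```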

    With this concept in mind, the architecture-level update Musk describes can be understood simply: the modeling process skips the intermediate step of extracting a large number of points from the image and directly generates parameter information the AI can understand.

    The advantage is that the system predicts the road and the behavior of other targets, and fuses information from multiple sensors on the back end, more efficiently.

    With fewer intermediate steps, compute cost and error rate naturally fall as well.
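
    The autoregressive, point-by-point decoding described above can be sketched in a few lines. The prediction step here is a stand-in stub (simple linear extrapolation); a real system would condition a Transformer on camera features. Everything below is an illustrative assumption, not Tesla's implementation.

```python
# Toy autoregressive lane decoder: each new lane point is predicted from
# the points emitted so far, instead of first rasterizing the scene into
# a dense grid of feature points.

def predict_next_point(points):
    # Stub "model": extrapolate linearly from the last two points.
    (x0, y0), (x1, y1) = points[-2], points[-1]
    return (2 * x1 - x0, 2 * y1 - y0)

def decode_lane(seed, n_points):
    points = list(seed)
    for _ in range(n_points):
        # Autoregression: each prediction is conditioned on the
        # decoder's own previous outputs.
        points.append(predict_next_point(points))
    return points
```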

    How should this update be rated?

    Essentially, previous modeling methods relied on extracting features from image data that serves the "human eye", embodying a human-first mindset.

    But the ultimate goal of autonomous driving is to let AI replace humans. Isn't it redundant, then, to extract parameters from data formatted for human consumption?

    What's more, the computational cost and the error rate will increase.

    Therefore, this update reflects Musk's deep understanding of the nature of AI and puts into practice the "first principles" he has long emphasized.

    Cold and rational, Musk has gone further and further down the road of abandoning ingrained human thinking.

    As a product, Tesla FSD is more thoroughly a "machine". It may currently trail some competitors in MPI (miles per intervention) and stability, but at the level of its underlying architecture it is already a purer AI.

    A more-AI Tesla: what does it mean?

    Musk believes that a more AI-driven, more essential FSD means stronger autonomous driving capability.

    FSD 10.11 is currently being pushed to Tesla's internal staff, but Musk has said that if it "performs" well, it will be rolled out to a wider range of test users.

    This "performance" covers two aspects: the capability of the algorithm and the reliability of the car owner.

    That's right: to experience Tesla FSD, you must be a "top student".

    Tesla scores every user who applies for the FSD test to determine whether they are a qualified, responsible driver.

    The score covers five dimensions: forward collision warnings per 1,000 miles, hard braking, aggressive turning, unsafe following, and forced Autopilot disengagement.

    In other words, distracted driving, aggressive driving, and distrust of the autonomous system all drive the score down.

    At present, only drivers scoring 98 or above are eligible to test FSD, and they number only about 60,000 in total.

    The expansion Musk mentioned would also mean lowering the admission threshold to a score of 95.
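
    As a rough illustration of how such a gate might work, the sketch below starts from 100 and subtracts an assumed penalty per recorded event. The penalty weights and the formula itself are invented for illustration and are not Tesla's actual Safety Score calculation; only the five factor names and the 98/95 thresholds come from the text above.

```python
# Illustrative driver-score sketch; the weights below are assumptions,
# not Tesla's actual Safety Score formula.
PENALTIES = {
    "forward_collision_warning": 1.5,
    "hard_braking": 1.0,
    "aggressive_turning": 1.0,
    "unsafe_following": 1.0,
    "forced_autopilot_disengagement": 2.0,
}

def safety_score(events_per_1000_miles):
    # Start from a perfect 100 and deduct per-event penalties.
    score = 100.0
    for event, rate in events_per_1000_miles.items():
        score -= PENALTIES.get(event, 0.0) * rate
    return max(score, 0.0)

def fsd_test_eligible(score, threshold=98.0):
    # Per the article: 98+ qualifies today; the proposed expansion
    # would lower the threshold to 95.
    return score >= threshold
```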
