Tesla’s next version of Full Self-Driving (FSD) has been widely discussed in recent weeks, and a new update from CEO Elon Musk over the weekend confirms one change: it will no longer nag drivers for wearing sunglasses.
The FSD Supervised system includes a driver monitoring feature that makes sure drivers remain attentive and awake, but it currently nags drivers who wear sunglasses while the system is engaged. In response to an X user who complained on Saturday about not being able to wear sunglasses while using FSD, Musk wrote that the issue would be fixed in v12.5, and many users in the thread expressed their appreciation.
It’s still not clear exactly when Tesla plans to start deploying FSD Supervised v12.5.
Musk originally said that FSD v12.5 would be out in late June, and many are especially eager for the update because it’s expected to finally bring FSD Supervised to the Cybertruck. Despite missing the late June target, Musk has highlighted a handful of other improvements coming in the version, and on Thursday he noted that the release is indeed set to reach the Cybertruck upon its deployment.
He also said this month that FSD Supervised v12.5 will finally merge the city and highway software stacks again. The two stacks were previously merged in v11, though that merge was apparently rolled back at some point with the arrival of v12.
Tesla started rolling out FSD Supervised v12.4.3 to some customers earlier this month, after previous versions had been delayed due to an extremely low level of interventions and the company had essentially halted the rollout of v12.4.2.
Musk highlighted the issue of low interventions earlier this month.
He also detailed the problem during Tesla’s Annual Shareholder Meeting last month, explaining that the fewer interventions there are, the more difficult it becomes to test versions and point releases against each other to see which ones are performing best.
“And then, like I was saying earlier, it actually gets, as the system gets better, it gets harder to figure out which AI model is better, because now you know, like, ‘Okay, it’s thousands of miles between interventions.’
“How do we, as quickly as possible, figure out which AI model is better? And when you make these different AI models, they’re obviously not like super deterministic, so we have a new model that eliminates one problem but creates another problem. So we’re trying to solve this by a combination of simulation, uploading models, having them run in Shadow Mode.
“It’s actually kind of helpful that not everyone has Full Self-Driving, because we can see, we can run it in Shadow Mode and see, ‘What would this new model have done compared to what the user did?’
“So since we’ve got, you know, millions of cars that we can do this with, that gives us a delta between what the AI model predicted would do and the user would do. And if you kind of sum up the errors between them, you can see ‘Oh, there was a bigger error stack from this model versus that model,’ when you uploaded them into, each uploaded them into 100,000 cars.
“But that’s the biggest limiter right now. It’s not training, it’s not data, it’s actually testing the AI models. And then figuring out clever ways to figure out if a new model is better or not. Like there were sort of particular intersections that are difficult.”
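In practical terms, the shadow-mode comparison Musk describes amounts to scoring each candidate model against what human drivers actually did and keeping the one with the smaller accumulated error. The sketch below is a rough, simplified illustration of that idea rather than Tesla’s actual pipeline; every name in it (FrameLog, shadow_mode_error, pick_better_model) is hypothetical.

```python
# Rough sketch of a shadow-mode comparison: each candidate model is "run in
# shadow" against logged human driving, the delta between what the model would
# have done and what the driver actually did is accumulated, and the model with
# the smaller total error is considered better. All names are illustrative,
# not Tesla's actual code.

from dataclasses import dataclass
from typing import Callable, Dict, Sequence, Tuple

@dataclass
class FrameLog:
    """One logged driving frame: sensor snapshot plus what the human did."""
    sensors: dict           # camera/kinematics features for this frame
    human_steer: float      # steering angle the driver actually applied
    human_accel: float      # acceleration the driver actually applied

# A candidate model maps sensor input to the (steer, accel) it would have applied.
Model = Callable[[dict], Tuple[float, float]]

def shadow_mode_error(model: Model, frames: Sequence[FrameLog]) -> float:
    """Sum the per-frame delta between the model's prediction and the driver."""
    total = 0.0
    for frame in frames:
        steer, accel = model(frame.sensors)
        total += abs(steer - frame.human_steer) + abs(accel - frame.human_accel)
    return total

def pick_better_model(candidates: Dict[str, Model], frames: Sequence[FrameLog]) -> str:
    """Return the name of the candidate with the smallest accumulated error."""
    scores = {name: shadow_mode_error(model, frames) for name, model in candidates.items()}
    return min(scores, key=scores.get)
```

The point of the exercise, as Musk puts it, is that with millions of cars logging data, even rare disagreements between a model and the human driver add up quickly enough to tell two otherwise similar models apart.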