Uber's Self-Driving Disaster
In the case of self-driving cars, stupid rich people insist that autonomous vehicles can be made safe by forcing humans to modify their behavior.
By Cory Doctorow / pluralistic.net
In his magisterial, long-running series of papers explaining why Uber is not, and never will be, a viable business, transportation analyst Hubert Horan calls the business a "bezzle."
Bezzle is John Kenneth Galbraith's term for "the magic interval when a confidence trickster knows he has the money he has appropriated but the victim does not yet understand that he has lost it."
Uber has had an extraordinarily robust bezzle, one that has allowed its backers - primarily the Saudi royals - to make out like bandits.
Much of that is down to the company's insistence that it can become profitable once self-driving cars are viable.
Which is great, except self-driving cars are, to a first approximation, bullshit.
That's how Uber spent $2.5B on a self-driving car R&D program that has produced vehicles that can't drive HALF A MILE without a major problem.
In a leaked email from the manager of the self-driving car unit to CEO Dara Khosrowshahi, the manager writes "The car doesn’t drive well...[it] struggles with simple routes and simple maneuvers."
On the R&D unit itself, the manager laments it "has simply failed to evolve and produce meaningful progress in so long that something has to be said before a disaster befalls us."
Self-driving cars epitomize how bezzles can run the breadth of the whole economy: supposedly responsible, sober-sided investors have spent billions of dollars, and that spending is meant to prove that self-driving cars are possible.
This is comparable to the belief that Facebook's and Google's claims about their products' ability to manipulate us and deprive us of free will MUST be true, or blue-chip companies wouldn't spend so much on those products.
Or the belief that hedge-fund managers must be able to outperform simple index funds or rich people won't entrust them with their money.
Alternative hypothesis: being rich doesn't mean you're smart, it means you're lucky, and luck runs out eventually.
But being rich *does* make you powerful, and rich people who make bad bets often try to bend reality to make those bets play out: for example, rich people who make stupid investments buy new tax codes that give them giant tax-credits for the losses.
Think of Drive.ai's Andrew Ng - late of Baidu - who says the cars will be safe as soon as we solve the "pogo stick problem."
What's the pogo stick problem? Here's Ng in The Verge:
"I think many AV teams could handle a pogo stick user in pedestrian crosswalk [but] bouncing on a pogo stick in the middle of a highway would be really dangerous. Rather than building AI to solve the pogo stick problem, we should partner with the government to ask people to be lawful and considerate. Safety isn’t just about the quality of the AI technology."
Translation: the problem with self-driving cars is humans, not cars.
The solution: Ban human-like behavior in the presence of cars.
Now that self-driving car R&D is entering the trough of despair, listen for a lot of plutes demanding that we make up their losses by changing our behavior to benefit their shareholders.
Lead image via Wikimedia Commons