I think you can get a lot out of this video, where we talk about the evolution of self-driving vehicles and the kind of fulcrum, as Marc puts it, that you hit about 90% of the way in…
First, there’s the timeline that we’re looking at – realistically, about 20 years – over which Marc delineates the ways we ended up using AI in the automotive realm.
In the beginning, he says, it was “the last thing on our minds,” but artificial intelligence did end up being useful in some really concrete ways.
The first thing he mentions is estimating battery charge, where the battery itself is something of a black box. As he points out, you know where full is, and you know where empty is, but anywhere in between, you’re in a gray zone.
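To make that “gray zone” concrete, here’s a minimal sketch of coulomb counting, one common approach to estimating state of charge. The video doesn’t specify any algorithm; every name and number here is illustrative, not Marc’s.

```python
# Illustrative sketch of coulomb counting for battery state of charge (SoC).
# All names and numbers are hypothetical; the talk doesn't specify a method.

def update_soc(soc, current_a, dt_s, capacity_ah):
    """Integrate current over time to track state of charge.

    soc         -- state of charge, 0.0 (empty) to 1.0 (full)
    current_a   -- current in amps (positive = discharging)
    dt_s        -- time step in seconds
    capacity_ah -- nominal battery capacity in amp-hours
    """
    delta = (current_a * dt_s / 3600.0) / capacity_ah
    return min(1.0, max(0.0, soc - delta))

# Start full, draw 10 A for one hour from a 50 Ah pack:
soc = 1.0
for _ in range(3600):
    soc = update_soc(soc, current_a=10.0, dt_s=1.0, capacity_ah=50.0)
print(round(soc, 2))  # 10 Ah drawn from a 50 Ah pack -> 0.8
```

In practice, sensor noise and drift accumulate with every step, and the pack’s true capacity changes with temperature and age, which is exactly why the region between full and empty stays a gray zone.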
He also talks about Sebastian Thrun’s TED talk, and the events around it, which we were both at, and where we contemplated what self-driving vehicles had become able to do. When Marc talks about the car “ripping around,” I was impressed as well. But all demos aside, we still have work to do.
In Marc’s words, it’s “compelling but not solved.”
He talks a little about utilities like blind-spot monitoring, lane-departure warnings, and the ability of AI to read speed limits, but also about how AI can’t quite match human intuition, and how hard it’s going to be to bridge that gap.
I particularly like his cautionary tale of the ‘wake-up gnome,’ where we realize that technology is only going to be useful if it’s something that people actually want!
When he calls AI’s ability “not dialed in,” part of what he’s talking about, in my opinion, is the sort of tone-deaf implementation that never really catches on in consumer technology – in this case, digital avatars trying to wake you up.
He points out that some subtler technologies are actually desirable to the human driver precisely because they’re assistive, not heavy-handed.
Here are a couple of other insights I got from Marc’s talk toward the end. There’s a part where he talks about physical battery testing and an iterative approach, where you tighten the experimentation cycle, with goals, as he stated:
“We need to get to a place where we have better batteries, we have cheaper batteries, and we de-carbonize the future.”
But here’s an even more interesting one that he talks about in the context of Moore’s law – Marc points out that GPUs tend to appear whenever CPUs come to be perceived as slow. So there was that earlier generation of specialized chips, after which we went back, to some extent, to the standard CPU. Now new GPUs have developed, and of course we’ve gotten into multicore and parallel processing and all the rest, but that fluctuation back and forth is really fascinating.
You might also enjoy watching the end of the video, where Marc tries to prove his own humanity to the camera (at my humble request). We’re joking around, sort of, but at the end of the day, verification is going to become a lot more important as AI becomes able to do new things. It’s worthwhile to start thinking about that now!
Prior to that piece of “show and tell,” Marc also brings up another instructive tale from the past: the dot-com boom. Since he’s a VC and moves in this world, he knows the cycles that occur. The way he explains it, after the dot-com bubble burst, people blamed the ‘Silicon Valley hype machine’ and suggested that the underlying technology was junk, or that it would never be an effective part of the market.
Just look at where we are now. The Internet is integral to so many different industries, and the sea change it created has been, frankly, incredible. We think it’s likely that AI is going to go the same way. Will there be detours along the road? Errant technologies that aren’t fine-tuned or dialed in or precisely relevant? Sure. But we’re also setting the stage for a future in which AI is going to be pervasive, whether or not it looks that way right now.