Here’s another question – what are some of the roadblocks for companies trying to innovate with AI?
We hear a lot about the possibilities and opportunities, but sometimes it’s instructive to think about the challenges, too – what do we see when we look at the bumps in the road for ever-increasing adoption of AI/ML systems?
First of all, Alex “Sandy” Pentland is somebody with a lot of bona fides in the sector. As he points out, he and I have had a lot of fun at Davos, and he has the experience of working with Geoff Hinton and others on nascent AI systems before moving on elsewhere, as he mentions at the beginning of his talk, and eventually getting involved in venture capital and everything else…
Side note – it’s funny to hear Pentland talking about “how to make $1 billion a day 30 years from now,” and it’s an interesting example of VC planning at work.
Anyway, Pentland talks about large language models as a way to “bring together all of the correlational structure in a large database.” That’s a good way of describing it, I think, and it sets up the core ideas he develops later.
“It’s all the stupid things we say, all the weird communities,” he says of the idiosyncrasies that are ultimately going to pop up in LLMs, because, at the end of the day, they’re using our input. “Don’t expect it to be right all the time – because we’re not right all the time!”
He describes AI, citing the late Marvin Minsky, as a kind of “common sense” – a basis of shared knowledge.
Merging things in a natural way, he suggests, brings forth a new interface that’s going to be central to our AI applications.
In the race to profit from AI, he says, companies that innovate on processes are going to win out. He also believes the best model for AI is assistive, with a human-in-the-loop (HITL) element.
Listen to the part of the video where Pentland talks about the consequences of rampant AI output that’s untethered from human oversight.
“You have to be ready for a lawsuit,” he says.
This is an interesting bit, too, where he says the regulators are “coming for” the AI companies partly based on new threats to white-collar jobs.
Anyway, he makes an interesting analogy to bookkeeping: just as companies keep detailed financial records, they’ll have to keep very detailed records of what they’re doing with AI and its effects on different demographics.
“You’re opening up liability to all of the damage that you really don’t want,” he says.
While large language models are easy to load and use, he says, the big question is where the data comes from.
In a sense, he points out, it’s crowdsourcing, but how does this work? What about proprietary data?
He also makes the useful observation that, in some ways, sharing information is the most valuable part of being human. The big problem is who owns the data and how it is transferred – how much people have to give up in terms of privacy in order to share with the people they want to reach.
Near the end, he talks about MIT Community Transformers, and the goal of thinking through AI development.
“We’re thinking about: how do you bring this technology to support communities? Individuals in communities? Companies, too,” he says. “It’s really all about the data.”
So to recap – it seems to me there are two major lessons here. The first is that you have to have assistive AI, rather than just letting these systems loose on the world with no thought to the effects they’ll produce. The second is that companies have to specialize their applications, not just “sell AI stuff” and assume they’ll become the next mega-firms.
You might say that a lot of it is going to be about customer-facing technology services.
Or you could say that a lot of it is going to be about helping professionals to do their jobs better.
In any case, we like this particular talk and think it’s one you should check out!
Video: In The AI Age, What Are You Giving – and What Are You Getting?