Here’s a slightly different take on the prospect of collaboration between humans and machines:
Last week, after listening to some of the presentations at an event, I got to thinking about just how long we’ve been trying to communicate with technology!
It didn’t just start with the chatbots and sentient AI entities of the past year or so.
You could say that the very first time a human being typed into a terminal, that was a conversation between a human and a machine, too.
When you look at the intersection and evolution of coding and machine language, you see how we translate our human speech to bits and bytes of binary.
Fundamentally, you could say we have always been trying to “talk to” computers, and cooperate with them. AI just makes it a lot less abstract.
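As a toy illustration of that translation, here’s a short Python sketch (the phrase is just a made-up example) showing how a line of human text becomes the bits and bytes a machine actually works with:

```python
# Toy example: the words a human types are already "bits and bytes"
# by the time the machine sees them.
message = "talk to me"

# Encode the text as UTF-8 bytes, then render each byte as 8 binary digits.
bits = " ".join(format(byte, "08b") for byte in message.encode("utf-8"))

print(bits)  # ten 8-bit groups, one per character
```

Every conversation with a computer, from the earliest terminal session to a chat with an AI assistant, ultimately passes through an encoding step like this one.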
That, of course, leads to the ever-popular idea of the Turing test, which continues to be a great piece of dinner-table conversation.
When you introduce the idea to someone who’s not in the tech world, you can almost see their brain working to try to imagine a computer or robot that can pose as a human.
The Turing test criteria, though, depend on the interface itself. But in the age of ChatGPT, we’ve come an enormous step closer to having AI that can represent itself as human across all interfaces, even when it’s sitting right next to you…
That in turn has led us to think about the ways that humans and machines are fundamentally different, and the ways in which they can be the same, or, alternately, the ways in which they (we) can collaborate.
We get some of this in Catherine Havasi’s talk, where she goes back through the ages looking at the efforts of humans to communicate with the precursors to AI, which you might call logical engines or computing systems.
She mentions search engines, as well as voice assistants like Siri and Alexa.
With all of these, she says, we learned to communicate with the tech in our own ways.
Citing a recent study, Havasi talks about ‘AI as a coworker’ and some experiments in which humans try to figure out the machine’s world.
“They exist in a world that’s very different than ours,” she says. “They can’t touch things, they can’t smell – but yet, they can bring really interesting things to the table – they can read more information than any of us will ever read in our lifetimes, and they can (avoid making mistakes) due to boredom and other sorts of things.”
To me, this kind of thinking relates to those scenarios that many of us imagine when we try to picture what AI entities face in the real world.
Think about a sentient being confined to the Internet or some piece of hardware – unable to move physically in the world, but very much able to perceive and understand elements of it. Does that seem a little creepy?
Anyway, there is a clear consensus that a lot of AI is going to be assistive, which makes the idea of AI coworkers a pretty practical one.
As for the singularity, the point at which the human brain is destined to merge with the computer, well, we’re not quite there yet. The human brain is safe, for now.
But just the ability to build and communicate with sentient AI programs in profound ways is monumental in our technology journey. As we’re seeing, the use cases are nearly infinite, and enterprises are rushing to catch up. Soon we might be doing the same in our personal lives, too, as more of this new technology comes to consumer markets.