I saw another one recently — a hyperbolic headline that, if taken literally, would be sure to inspire panic: “Will Artificial Intelligence (AI) End Us?”
Undoubtedly, the publisher of that particular story wasn’t motivated to instill fear, but rather to inspire clicks. But it’s time to ignore these sensationalized headlines, and to confine any “AI vs. human” storylines to science-fiction books and movies. In the real world, there is no “versus” between AI and humans. It’s not a battle to save humanity, nor will AI obliterate the (human) workforce. It’s not even a competition (except for when we humans stage one, like the Deep Blue v. Kasparov chess match or the Watson-Jeopardy challenge).
Practically speaking, AI and humans are collaborators. And, like any great team, when that collaboration is healthy, we’re capable of doing great things. And when there’s dysfunction, bad things happen.
Naturally, we at Directly are biased, and I don’t mean algorithmically. Our support automation solution — built upon the collaboration of AI and human experts — swiftly and effectively provides customer service that humans (or AI) alone couldn’t deliver. As we’ve regularly written about here on our blog, we believe AI will only truly thrive when we humans train and nurture it. (It turns out, AI isn’t all that intelligent without us.)
We’re not alone. Today’s most important thought leaders in AI agree: AI and humans complement each other. And only by working together — to borrow a term from PC pioneer Doug Engelbart — will our “Collective IQ” peak.
Here are a few of today’s most important thought leaders in our space — and their views of the AI-human relationship:
O’Reilly: ‘Human-machine symbiosis’
“I think of the human-machine symbiosis as a trend that is probably bigger than the internet, and bigger than open source.” — Tim O’Reilly
There’s probably no more prolific tech trend-spotter than Tim O’Reilly, the founder of tech publishing giant O’Reilly Media, who began writing computer manuals back in the early 1980s. He’s best known for his thought leadership on the Internet (“Web 2.0”) and open source — but in recent years, he’s written extensively about AI and its evolution.
His 2017 book, “WTF: What’s the Future and Why It’s Up to Us,” looks deeply at the concept of algorithm bias, against the backdrop of rogue AI and machine learning manipulation. The path to AI success, he believes, is a stronger “human-machine symbiosis.”
“They talk about AI as separate from us, but all interesting machines are hybrids of human and machine,” O’Reilly recently told ZDNet. “I think of the human-machine symbiosis as a trend that is probably bigger than the internet, and bigger than open source, and of which AI is one manifestation.”
O’Reilly is also outspoken about AI’s potential to expand the global economy’s overall job market — above and beyond the jobs eliminated through automation.
“It’s important for us to realize that technology is not just about efficiency,” O’Reilly told Wired. “It’s about taking these new capabilities that we have and doing more with them. When you do that, you actually increase employment.”
Marcus: The ‘Hybrid’ artificial intelligence system
“It’s hard to see how we could build a robot that functions well in the world without analogous knowledge (of humans) there from the start.” — Gary Marcus via MIT Tech Review
Gary Marcus, a professor of psychology and neural science at NYU and founder of Robust.AI, recently published a book, Rebooting AI, regarded by many in the industry as a pragmatic view of AI’s current-day capabilities and limitations. Marcus has been referred to as a skeptic of current AI practices, but an optimist about its future.
In particular, he calls for a hybrid approach to building AI intelligence — a combination of machine learning techniques along with a human training element.
“It’s hard to see how we could build a robot that functions well in the world without analogous knowledge there from the start, as opposed to starting with a blank slate and learning through enormous, massive experience,” he recently told MIT Tech Review. “For humans, our innate knowledge comes from our genomes that have evolved over time. For AI systems, they have to come a different way. Some of that can come from rules about how we build our algorithms. Some of it can come from rules about how we build the data structures that those algorithms manipulate. And then some of it might come from knowledge that we just directly teach the machines.”
Ritter: Humans are AI’s ‘mutation engine’
“Humans are the only mutation engine in the age of AI. We are the core of a perpetual algorithm that’s more dynamic than software will ever be.” — Gordon Ritter
Just after Gordon Ritter founded his venture capital firm Emergence in 2002, he bet big on a then-upstart cloud CRM company, Salesforce.com. He’d also spent time at the home of Watson, running IBM’s Global Small Business Division.
Recently, Ritter and his team have been watching — and writing about — the evolution of the AI industry. He and fellow partner Jake Saper penned a recent post about the concept of AI-powered “Coaching Networks,” a collaboration in which AI augments the work of humans rather than replacing jobs. “The AI future you’ve been hearing about is wrong,” they wrote. “It’s cynicism, naivety, and fear-mongering wrapped into one.”
In particular, we like Ritter’s perspective on the role of humans as a “mutation engine.” In a separate post, he illustrates how an algorithm can learn best practices from a human network.
The AI “gets better over time by learning the best practices that are proven effective across a variety of situations, identifying those outlier cases where a creative person finds a new, better solution, and adds those techniques to its coaching,” Ritter writes. “This allows others to learn from the experience of those more creative workers. This is how humans become the ‘mutation engine’ in this evolving process, generating new ideas which in turn benefit everyone else.”
Accenture: Humans have ‘3 crucial roles’ in creating effective AI
“What comes naturally to people (making a joke, for example) can be tricky for machines, and what’s straightforward for machines (analyzing gigabytes of data) remains virtually impossible for humans. Business requires both kinds of capabilities.” — H. James Wilson and Paul R. Daugherty, Harvard Business Review
Accenture’s H. James Wilson and Paul R. Daugherty published research last year (via Harvard Business Review) about the state of applied AI in business after studying 1,500 companies. The top-line conclusion supported what many of today’s thought leaders believe about AI-human collaboration: “We found that firms achieve the most significant performance improvements when humans and machines work together.”
In particular, the study suggested, companies working to deploy AI as part of their business processes need humans to perform three roles. “They must train machines to perform certain tasks; explain the outcomes of those tasks, especially when the results are counterintuitive or controversial; and sustain the responsible use of machines.”
The authors also noted that many of the companies studied had not yet “begun to reimagine their business processes to optimize collaborative intelligence. But the lesson is clear: Organizations that use machines merely to displace workers through automation will miss the full potential of AI.”
What does AI-human collaboration look like in the contact center?
At Directly, we’re hard at work helping businesses improve customer service through expert-in-the-loop AI. Want to see what that looks like? Set up a demo today.