After decades of promise and hype, artificial intelligence has finally reached a tipping point of market acceptance. AI is seemingly everywhere. Every day we can read about the latest AI advances and applications from startups and large companies. AI was the star of the 2018 Consumer Electronics Show earlier this year in Las Vegas.
But, despite its market acceptance, a recent McKinsey report found that AI adoption is still at an early, experimental stage, especially outside the tech sector. Based on a survey of over 3,000 AI-aware C-level executives across 10 countries and 14 sectors, the report found that 20 percent of respondents had adopted AI at scale in a core part of their business, 40 percent were partial adopters or experimenters, while another 40 percent were still waiting to take their first steps.
The report adds that the gap between the early AI adopters and everyone else is growing. While many companies have yet to be convinced of AI’s benefits, leading edge firms are charging ahead. Companies need to start experimenting with AI and get on the learning curve, or they risk falling further behind.
AI will likely become the most important technology of our era as it improves over time, but we’re still in the early stages of deployment. Only in the last few years have complementary innovations, especially machine learning, taken AI from the lab to early marketplace adopters. And history shows that even after technologies start crossing over into mainstream markets, it takes considerable time, often decades, for the new technologies and business models to be widely embraced by companies and industries across the economy.
Advanced Process Automation. Not surprisingly, the majority of the projects studied by Davenport and Ronanki, 71%, fell into this category. It’s the least expensive and easiest cognitive capability for companies to implement, since they’ve long been engaged with the automation of business processes. It’s the best way to get on the AI learning curve.
In the 1960s and 1970s, IT brought automation to a number of discrete business processes, including transaction processing, financial planning, engineering design, inventory management, payroll and personnel records. Then in the 1990s, the connectivity and universal reach of the Internet enabled companies to integrate and better coordinate all their various processes, as well as to go beyond the boundaries of the enterprise and develop global supply chains and distribution channels and a large variety of online customer services.
A new era of smart connected processes is now emerging. The world’s digital and physical infrastructures are essentially converging. Datafication, the ability to capture as data many aspects of business and society that have never been quantified before, is becoming an integral part of just about every product, service and system. Just about every process can become digitally aware, networked and smart.
“Everything that we formerly electrified we will now cognitize,” wrote Kevin Kelly in a 2014 Wired article on the future of AI. “There is almost nothing we can think of that cannot be made new, different, or interesting by infusing it with some extra IQ.”
Cognitive Insight. Cognitive insight projects take AI to the next level, using machine learning and other advanced algorithms to detect patterns in vast volumes of data. 38% of the projects in the study fall into this category.
Machine learning and related advances like deep learning have played a major role in AI’s recent achievements. At its essence, machine learning is a radically different approach to programming. For the past 50 years, programming has been based on explicit knowledge, the kind of information and procedures that can be readily explained to people and captured in software. Tacit knowledge, a concept first introduced in the 1950s by scientist and philosopher Michael Polanyi, is the kind of knowledge we’re often not aware we have, and is therefore difficult to transfer to another person, let alone to a machine via software.
“We can know more than we can tell,” noted Polanyi in what’s become known as Polanyi’s paradox. This seeming paradox succinctly captures the fact that we tacitly know a lot about the way the world works, yet aren’t able to explicitly describe this knowledge. Tacit knowledge is best transmitted through personal interactions and practical experiences. Everyday examples include speaking a language, riding a bike, driving a car, and easily recognizing many different objects, animals and people.
Machine learning gets around Polanyi’s Paradox by giving computers the ability to learn by analyzing and finding patterns in large amounts of data, instead of being explicitly programmed. It’s led to the development of AI algorithms that are first trained with lots and lots of sample inputs, and then subsequently applied to complex problems like language translation, natural language processing, real time fraud detection, personalized marketing and advertising, and so on.
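A minimal sketch can make the contrast concrete: instead of hand-coding a decision rule, a simple perceptron recovers one from labeled examples alone. Everything here (the toy data, the learning rate, the epoch count) is illustrative, not drawn from the article.

```python
# Learning from examples instead of explicit rules: a perceptron infers a
# linear decision rule sign(w.x + b) purely from (sample, label) pairs.

def train_perceptron(samples, labels, epochs=20, lr=0.1):
    """Adjust weights toward each misclassified example until the rule fits."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred                 # 0 when correct, +/-1 when wrong
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Nobody wrote the rule "output 1 when either input is 1"; the model
# recovers it from the four training examples.
samples = [(0, 0), (0, 1), (1, 0), (1, 1)]
labels = [0, 1, 1, 1]
w, b = train_perceptron(samples, labels)
print([predict(w, b, x) for x in samples])  # [0, 1, 1, 1]
```

The same train-then-apply pattern, scaled up enormously in data and model size, is what sits behind the translation, fraud-detection and personalization applications mentioned above.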
As Davenport and Ronanki explain: “Cognitive insights provided by machine learning differ from those available from traditional analytics in three ways: They are usually much more data-intensive and detailed, the models typically are trained on some part of the data set, and the models get better - that is, their ability to use new data to make predictions or put things into categories improves over time.”
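Two of those properties, training on part of the data set and improving as more data arrives, can be sketched with a tiny nearest-centroid classifier. The data is deliberately contrived (the first sample of one class is an outlier) so that a model fit on too little data misplaces the boundary; the class names, points and split are all invented for illustration.

```python
# Fit on part of the data, score on a held-out test set, then refit on
# more data and watch the held-out accuracy improve.

def centroid(points):
    """Mean position of a list of equal-length tuples."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

def predict_class(centroids, x):
    """Assign x to the class whose centroid is closest (squared distance)."""
    return min(centroids,
               key=lambda c: sum((a - b) ** 2 for a, b in zip(centroids[c], x)))

def accuracy(centroids, test_set):
    hits = sum(predict_class(centroids, x) == y for x, y in test_set)
    return hits / len(test_set)

# Training streams: class "a" clusters near (0, 0), class "b" near (6, 6),
# but the first "a" sample is an outlier at (4, 4).
train_a = [(4, 4), (0, 0), (0, 1), (1, 0), (1, 1), (-1, -1)]
train_b = [(6, 6), (7, 6), (6, 7), (5, 6), (6, 5), (7, 7)]
test_set = [((1, 2), "a"), ((2, 1), "a"), ((0, 2), "a"),
            ((5, 5), "b"), ((6, 8), "b"), ((8, 6), "b")]

for n in (1, 6):  # train on one sample per class, then on all six
    cents = {"a": centroid(train_a[:n]), "b": centroid(train_b[:n])}
    print(n, round(accuracy(cents, test_set), 2))  # 1 0.83, then 6 1.0
```

With one sample per class the outlier drags the “a” centroid toward the boundary and a test point is misclassified; with all six samples the centroids settle near the true cluster centers and the held-out accuracy reaches 1.0.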
Cognitive Engagement. These were the least common type of projects in the study, accounting for 16% of the total. Cognitive engagement projects are based on machine learning, intelligent agents, natural language chatbots and other leading edge AI capabilities, some still in the research stage.
Examples include the use of natural language chatbots to address a broad array of customer service issues with almost no human involvement; robo-advisors that provide personalized financial advice with minimal human intervention; and customized health treatment recommendations, especially for hard-to-treat medical problems.
Cognitive engagement capabilities are still quite immature, so companies are very careful in using them for customer-facing applications, preferring to first try them out in interactions with employees.
The paper includes a four-step framework to help companies develop and implement their cognitive strategies:
- Understanding the Technologies. Before embarking on an AI initiative, it’s important to first understand the strengths and limitations of the available technologies. This requires employees with the proper skills, including data analysis and cognitive algorithms. Given the scarcity of such talent, most companies should establish a central pool of resources and make their expertise available throughout the organization.
- Creating a Portfolio of Projects. Companies should develop a prioritized portfolio of projects based on their needs and capabilities. This includes determining which areas of the business would benefit most from cognitive projects, which use cases would generate the most value, and whether the available AI tools and skills are up to the task.
- Launching Pilots. Given the experimental nature of most cognitive applications, companies should create pilot projects with a limited scope before rolling them out across the entire enterprise.
- Scaling Up. Scaling up a cognitive application will generally require integration with existing systems and processes, as well as close collaboration between technology experts and the owners of the business process being automated. Integrating AI into the rest of the business is often the greatest challenge in AI initiatives.
“Our survey and interviews suggest that managers experienced with cognitive technology are bullish on its prospects,” wrote Davenport and Ronanki in conclusion. “Although the early successes are relatively modest, we anticipate that these technologies will eventually transform work. We believe that companies that are adopting AI in moderation now - and have aggressive implementation plans for the future - will find themselves as well positioned to reap benefits as those that embraced analytics early on.”