AI is seemingly everywhere. In the past few years, the necessary ingredients have finally come together to propel AI beyond early adopters to a broader marketplace: powerful, inexpensive computer technologies; advanced algorithms; and huge amounts of data on almost any subject. Newspapers and magazines are full of articles about the latest advances in machine learning and related AI technologies.
Two recent reports concluded that, over the next few decades, AI will be the biggest commercial opportunity for companies and nations. AI advances have the potential to increase global GDP by up to 14% between now and 2030, the equivalent of an additional $14-15 trillion for the world's economy, and to contribute an annual average of about 1.2 percent to productivity growth.
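As a rough, back-of-the-envelope illustration (my own assumption, not a calculation from either report), compounding a roughly 1.2 percent annual contribution from 2018 through 2030 lands in the same ballpark as the up-to-14% GDP figure:

```python
# Illustrative sanity check (assumptions are mine, not from the reports):
# if AI added roughly 1.2% to annual productivity growth from 2018 through 2030,
# the cumulative effect compounds to roughly 15%, broadly consistent with the
# "up to 14% of global GDP" estimate cited above.
annual_contribution = 0.012   # ~1.2% per year, assumed to compound
years = 2030 - 2018           # 12 years
cumulative = (1 + annual_contribution) ** years - 1
print(f"Cumulative effect after {years} years: {cumulative:.1%}")  # ~15.4%
```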
Over time, AI could become a transformative, general-purpose technology like the steam engine, electricity, and the Internet. AI marketplace adoption will likely follow a typical S-curve pattern: a relatively slow start in the early years, followed by a steep acceleration as the technology matures and firms learn how best to leverage AI for business value.
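For readers who like to see the shape, here is a minimal, purely illustrative sketch of such an S-curve using a generic logistic function; the midpoint year, steepness, and ceiling below are arbitrary placeholders of mine, not estimates from the survey or the reports:

```python
import math

# Purely illustrative logistic (S-curve) adoption model: slow early uptake,
# steep acceleration in the middle years, then saturation near a ceiling.
def adoption_share(year, midpoint=2030, steepness=0.5, ceiling=1.0):
    """Assumed fraction of firms that have adopted AI by a given year."""
    return ceiling / (1 + math.exp(-steepness * (year - midpoint)))

for year in range(2018, 2041, 4):
    print(year, f"{adoption_share(year):.0%}")   # slow start, steep middle, plateau
```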
To get a better sense of the current state of AI adoption, McKinsey recently conducted a global online survey on the topic, garnering responses from over 2,000 participants across 10 industry sectors, 8 business functions and a wide range of regions and company sizes. The survey asked about their progress in deploying nine major AI capabilities, including machine learning, computer vision, natural language text and speech processing, and robotic process automation.
Overall, the business world is beginning to adopt AI. 30% of organizations are conducting AI pilots. Nearly half, 47%, have embedded at least one AI capability in their standard business processes, compared to 20% in 2017. AI opportunities can be found across the firm, but only 21% report using AI across multiple business functions. AI investments are still quite small: 58% of respondents said that less than one-tenth of their digital budgets goes toward AI, though 71% expect AI investments to increase significantly in the coming years.
Many respondents said that their organizations lack the necessary skills and practices to create value from AI at scale, including identifying key strategic opportunities and obtaining the data required by AI applications. Most felt that AI will have a relatively minor impact on their overall future employment, despite the fact that AI will likely automate a significant fraction of existing work.
Telecom, high-tech and financial services firms lead the way in AI adoption. In telecom, 75% of respondents said that they use AI in their service operations, 45% said that they do so in product development and 38% in marketing and sales. In high-tech, 59% use AI in product development, 48% in service operations and 34% in marketing and sales. In financial services, 49% use AI in their service operations, 40% in risk management and 33% in marketing and sales. Other sectors use AI primarily in one business function, such as automotive and assembly, where 49% are using AI in manufacturing; retail, where 52% use it in marketing and sales; and travel, transport and logistics, where 51% use AI in service operations.
While still in its early days, AI is already delivering meaningful value to those who've embraced the technology. 78% report receiving significant or moderate value, while only 1% say they've seen no value or negative value. Across business functions, value was highest in manufacturing and risk management, where 80% report receiving significant or moderate value, followed by supply chain management and product and service development at 76%.
The survey asked about 11 core practices that would enable organizations to better realize the potential value of AI at scale. Results confirm that most organizations have a long way to go. Only 27% said that they have access to the internal and external talent necessary to support AI work; 26% said that their senior leaders demonstrate commitment to AI initiatives; 18% said that their company has a strategy in place for accessing and acquiring the necessary data for AI work; 17% said that their companies have mapped out the potential AI opportunities across the organization; and 15% said that their company has the right technological infrastructure in place to support AI systems. Almost a quarter of respondents, 24%, said that their companies have not developed any of the 11 practices the survey asked about.
When asked about the most significant barriers their organizations face in adopting AI, 43% cited the lack of a clear strategy, while 44% cited the lack of appropriate skills. 30% said that functional silos constrain the use of AI solutions, and 27% said that their leaders lack the necessary commitment to AI.
A critical success factor for AI is a company’s progress along its digitization journey. The same players who’ve been leaders in earlier waves of digitization are now leading the AI wave. 67% of respondents from the most digitized firms say that their organizations have embedded AI into standard business processes, compared with 43% for all other companies. 39% of the most digitized companies have adopted machine learning capabilities and another 31% are conducting machine learning pilots, compared to 16% and 24% respectively for all companies. And 37% of the most digitized companies are already using virtual agents while another 31% are conducting pilots, compared to 15% and 26% respectively for all companies.
When it comes to talent and skills, respondents from the most digitized firms are more likely than others to build AI capabilities in-house (52% compared to 38% of all companies) and to invest in retraining and upskilling current employees (42% versus 35%).
McKinsey recommends that companies follow three key steps to help them realize their AI potential:
Make progress on your digital journey. “The implications of continuing digitization are significant… without a strong digital backbone, a company’s AI systems will lack the training data necessary to build better models and the ability to transform superior AI insights into behavioral changes at scale.”
Scale AI’s impact across the enterprise. “Achieving results at scale requires not only the diffusion of [AI] capabilities across the enterprise but also a real understanding and commitment on the part of leaders to drive large-scale change, as well as a focus on change management rather than on technology alone.”
Put key enablers in place. “While the adoption of AI is happening fast, the survey suggests that organizations tend to lack many of the foundational enablers required to derive value from AI at scale. These enablers include top-management sponsorship, development of an enterprise-wide portfolio view of AI opportunities, action to close talent gaps, and the implementation of a sophisticated data strategy - all of which require more strategic thinking around AI programs and agendas. Business and technology leaders must work quickly to establish key AI enablers. Otherwise, they risk missing out on the current - and future - AI opportunity.”
Let's stop talking about "Artificial Intelligence". To me, you should not be allowed to use that term until computers reach the phase where they can use deep learning technology (still in its infancy), with meaning associated with the various icons and tokens in the world, to integrate the other "smart" technologies you mention: machine learning, computer vision, natural language text and speech processing, and robotic process automation. I would add "comparison technology" to the list. Also, and I agree with Peter Gruelich on this one, once you get into deep learning technologies, computers then act as if they have "values", which are basically the choices we make in our lives as humans based on the meanings associated with various words and images.
Computers are already doing a poor job of interpreting language. Before the concerns about Amazon Echo recording our conversations, we used Alexa to build our shopping list. It was probably our best use of it. Sadly, we had to add each item to the list one by one; saying "peanut butter, jelly, and bread" in a single request was not an option it understood. When we started getting "random laughs" from our Alexa, it was time to disconnect.
Who is the arbiter of the meaning of an image of an "old person", for example? If driven primarily by business interests, the computer could decide that this is an inefficient, worthless human who provides no value after their working years and should be eliminated. An AI system built by Americans could revere an image of an American flag, while an AI system built by Iran could despise it. The act of burning said flag could result in a decision, in each country, that the opposing country harbors terrorists. We already have enough humans making those decisions.
I think the complexities are too great to trust "deep learning" AI computers with the job of assigning value and making decisions that impact people's lives. Given that most of the funding comes from the military branch of government, I have even more concern, as a robot might decide that killing innocents is acceptable in the acquisition and elimination of a target, for example.
As humans, we can't even agree on a common set of values for how to treat each other humanely, so once we add computers we just accelerate the process. And eventually, the AI decides it doesn't need us.
Posted by: GlenInTexas | February 04, 2019 at 02:30 PM
Humans take at least 15-20 years to "learn and assimilate context", then based on our personality, we either try to dominate, manipulate, or adapt to our environment. While I'm happy to think of computers which would "adapt" to us, I suspect the goals of the military and commercial interests will be to dominate and/or manipulate humans and their environment.
Over those 15-20 years, humans learn literally millions of contextual clues to living in the world and in society, from putting our hand over our heart during the Pledge of Allegiance, to standing when a bride walks down the aisle at a wedding, to pulling over to the side of the road when police or emergency vehicles approach. In our AI systems, how many of these are we missing? Like the Tesla that could not distinguish a gray truck from the sky, how much risk are we assuming with these systems? When they fail, will they simply shut down safely, or continue to attempt to perform their function out of control?
I prefer to speak in terms of "building experts" in domains like self-driving cars, or electronic commerce, or wedding planning, or law enforcement.
Posted by: GlenInTexas | February 04, 2019 at 02:38 PM
One final comment. The speech-recognition AI that most impressed me recently was Fidelity Investments'. I called their 1-800 number and the system prompted me to enter the 5-digit extension of the person I needed to talk to. It was already responding to my "answers" of yes and no, so I said "call Edmund in the Frisco office". The system found the correct office (which was actually in Plano, an adjacent suburb, not Frisco) and rang the number of a colleague of Edmund's, since Edmund was on the phone. The system was able to identify the office I was looking for FROM GEOGRAPHICAL CONTEXT, and then was able to recognize that it could not connect me to the person and route me to an appropriate substitute IN THE SAME OFFICE! THAT is the kind of smart technology I want.
Posted by: GlenInTexas | February 04, 2019 at 02:44 PM