I recently read The Human Strategy, a very interesting article on Human-AI decision systems by MIT Media Lab professor Sandy Pentland. Pentland is the faculty director of MIT Connection Science, with which I’m associated as a Fellow, as well as director of the Human Dynamics Lab. In 2014 he published Social Physics: How Social Networks Can Make Us Smarter.
“Perhaps the most critical function of any organization or society is its decision systems,” wrote Pentland. “In modern societies, decision systems provide a leader the ability to make informed and timely decisions, supported by a complex enterprise of distributed information and communication systems that provide situational awareness. Traditionally, decision systems have been confined to individual physical domains, such as logistics, physical plant, and human resources, and more recently virtual domains such as cyber, by both policy and technology, resulting in challenges with the integration of information across disparate domains.”
But, despite the increasingly complex decisions that organizations are called upon to make, decision-making remains human-intensive and anecdotal. Few organizations have applied social network analysis to help them scale the size and expertise of the decision-making group. Nor have they integrated the large amounts of data, analytical tools and powerful AI systems now at our disposal into their decision-making systems.
When Moneyball first came out in 2003, many viewed it as a story about the conflict between the traditional approach of the scouts, the professional talent evaluators who learn about the players first-hand by meeting them in person and watching them play, and the new approaches being introduced by the statheads, who mostly rely on sophisticated statistical analysis to predict future performance.
But years later, as Nate Silver explained in his 2012 bestseller The Signal and the Noise, there was enough data to compare the performance of scouts versus more purely statistical approaches. The scouts’ predictions were about 15 percent better than those that relied on statistics alone. The good scouts, as it turns out, use a hybrid approach, combining statistics with whatever else they learn about the players. Statistics alone cannot tell you everything you want to know about a player, and the additional personal evaluations of the scouts make a significant difference.
Similarly, writes Pentland, “certain kinds of human-AI combinations will perform better than humans and AI working alone. Although no person is better than a machine for many repetitive support tasks or focused tactical problems, no machine is better than a person with a machine for difficult tasks such as analysis and interpretation. Thus, by building AI systems that are compatible with human behavior, and specifically AI systems which leverage the manner in which humans use social information, we can build human-AI decision systems that extend human intelligence capabilities.”
Machine learning, and related advances like deep learning, have played a crucial role in AI’s recent achievements. Machine learning gives computers the ability to learn by ingesting and analyzing large amounts of data instead of being explicitly programmed. It’s enabled the construction of AI algorithms that can be trained with lots and lots of sample inputs, which are subsequently applied to difficult AI problems, including natural language processing, language translation and computer vision.
Machine learning grew out of decades-old research on neural networks, a method for having machines learn from data that’s loosely modeled on the way a biological brain, composed of large clusters of highly connected neurons, learns to solve problems. Based on each person’s life experiences, the synaptic connections among pairs of neurons get stronger or weaker. Similarly, each artificial neural unit in a network is connected to many other such units, and the links can be statistically strengthened or weakened based on the data used to train the system. As new data is ingested, the system rewires itself based on whatever new patterns it finds.
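To make the strengthening-and-weakening idea concrete, here is a minimal sketch in Python of a single artificial neuron trained with a simple error-correction rule. The toy task, learning rate and update rule are my own illustrative assumptions, not anything taken from Pentland’s article.

```python
# A minimal sketch: one artificial "neuron" whose connection weights are
# strengthened or weakened as training examples are ingested.
import random

def train_neuron(samples, epochs=20, lr=0.1):
    """Perceptron-style updates: each (inputs, target) pair nudges the weights."""
    n_inputs = len(samples[0][0])
    weights = [random.uniform(-0.5, 0.5) for _ in range(n_inputs)]
    bias = 0.0
    for _ in range(epochs):
        for inputs, target in samples:
            activation = sum(w * x for w, x in zip(weights, inputs)) + bias
            prediction = 1 if activation > 0 else 0
            error = target - prediction
            # Links are strengthened or weakened in proportion to the error signal.
            weights = [w + lr * error * x for w, x in zip(weights, inputs)]
            bias += lr * error
    return weights, bias

# Toy task: learn the logical OR of two binary inputs.
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]
print(train_neuron(data))
```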
“It's a way of taking a random bunch of things that are all hooked together in a network and making them smart by giving them feedback about what works and what doesn’t,” notes Pentland in a related online conversation. However, “because those little neurons are stupid, the things that they learn don't generalize very well. If it sees something that it hasn't seen before, or if the world changes a little bit, it’s likely to make a horrible mistake. It has absolutely no sense of context.” A similar point was made in a recent WSJ article, Without Humans, AI is Still Pretty Stupid.
But if, instead of dumb, simple neurons, you used neurons with embedded subject-matter knowledge, namely humans, the overall AI network would get smarter. A human organization can be viewed as a kind of brain, with people as the individual neurons. For example, quality circles have long been used as a way of improving decision making by incorporating the real-world experiences of groups of front-line workers. And best-practice methods, based on using all the research and experience at one’s disposal, have been proven to reliably lead to superior results.
Pentland suggests that you can apply neural network concepts to a network of people who’d use their knowledge about the world to make important business or societal decisions. If the connections between the people in such a network are fixed, based on a siloed, static org chart, the organization will have limited ability to learn and adapt to a fast-changing environment. In a dynamic organization, on the other hand, the connections between people would reorganize themselves in response to shifting circumstances, reinforcing those that contribute to a positive solution and discouraging those that do not, much as in machine learning, where algorithms keep track of each unit’s contribution to optimal overall performance.
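As a toy illustration of that reweighting idea, here is a sketch of my own (not a system described in the article): keep a strength score for each pair of collaborators and nudge it up or down based on feedback about whether their joint work helped the decision.

```python
# Hypothetical sketch: connection strengths between people are reinforced when
# a collaboration contributed to a good outcome and weakened when it did not.

def update_connections(weights, feedback, lr=0.2):
    """weights: {(a, b): strength}; feedback: {(a, b): +1 helpful / -1 unhelpful}."""
    for pair, signal in feedback.items():
        weights[pair] = max(0.0, weights.get(pair, 0.5) + lr * signal)
    return weights

links = {("ana", "bo"): 0.5, ("bo", "cy"): 0.5, ("ana", "cy"): 0.5}
# Suppose the ana-bo collaboration helped this decision and bo-cy did not.
print(update_connections(links, {("ana", "bo"): +1, ("bo", "cy"): -1}))
```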
“The AI in such a smart organization would be used to create the best connections between people and ideas, not for replacing the neurons (people) within a static, frozen organization. Instead of people being trained to be simple rule-following machines (or replace them by AIs), people would be trained to engage in continuous improvements.”
“It turns out that high performing teams naturally behave in exactly the manner required for a successful human-AI organization.” Several years ago, Pentland, along with MIT professor Tom Malone, CMU professor Anita Woolley and other collaborators, conducted a study to ascertain the key attributes of high performance teams. In particular, they wanted to find out if groups, like individuals, exhibit characteristic levels of intelligence which can be measured and used to predict the group’s performance across a wide variety of cognitive tasks. To do so, they tried to measure a group’s intelligence using methodologies and statistical techniques similar to those that have been applied to individual intelligence for the past hundred years.
Each of the nearly 700 individuals participating in the study was first administered standard IQ tests, social sensitivity tests to evaluate their ability to read other people’s emotions, and other measures. They were then randomly assigned to groups of two to five members and given a variety of problems to solve, including visual puzzles, brainstorming, collective moral judgements and negotiating over limited resources.
The results were published in the October 2010 issue of Science. Neither the average intelligence of the individual group members nor the highest individual intelligence was a strong predictor of the group’s overall performance. The researchers also looked at group cohesion, motivation and satisfaction, but none of these was a strong predictor either. Instead, the groups with the best overall performance were those with the highest social sensitivity scores and those whose members contributed more equally to the discussion, rather than having one or two people dominate the conversation.
In other words, “the most successful teams were those that were able to optimize communication within the group. If every team member was engaged and making many contributions, then the group was very likely to be successful. This also meant that members whose ideas and experience were different from the majority had the opportunity to contribute and be heard.” Such flexible group dynamics constitute the very essence of the human advantage over machines.
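As a purely hypothetical sketch of how those two signals might be quantified, one could score a team’s equality of contribution from its turn-taking counts and average its members’ social sensitivity scores. The numbers and the scoring rule below are my own assumptions, not the study’s actual methodology.

```python
# Hypothetical scoring of the two predictors highlighted by the study:
# equality of contribution and average social sensitivity.
from statistics import mean, pstdev

def contribution_equality(turns):
    """1.0 means perfectly equal turn-taking; lower values mean a few people dominate."""
    avg = mean(turns)
    return max(0.0, 1.0 - pstdev(turns) / avg) if avg else 0.0

team_turns = [12, 10, 11, 9, 13]               # invented speaking-turn counts
team_sensitivity = [0.7, 0.8, 0.6, 0.75, 0.7]  # invented social sensitivity scores
print("equality of contribution:", round(contribution_equality(team_turns), 2))
print("average social sensitivity:", round(mean(team_sensitivity), 2))
```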
Subsequent studies have come up with similar findings. “The research results demonstrate that people who are especially adept at finding and maintaining connections across an organization are critical for opening up the channels needed to spread ideas more broadly across an organization. These cross-team ties help to break down silos and increase an organization's productivity and ability to innovate.”
An effective human-AI decision system should have access to large numbers of people with expertise in the problem being addressed, not only across the overall organization but beyond, since complex problems increasingly involve collaborations across multiple institutions. It would then use machine-learning-like AI algorithms to assemble the appropriate teams with the expertise required to address a particular complex problem, and provide them with the right tools to securely share data and ideas. “This approach should provide the organizational scale and flexibility required for cross-domain decisions and also the agility to interoperate at the speed of competition in the future.”
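One very rough way to picture the team-assembly step, as an assumption of mine rather than a design from the article, is to match a problem’s required expertise against people’s expertise tags and pick the best-covered candidates.

```python
# Hypothetical sketch: rank candidates by overlap between their expertise tags
# and the tags describing the problem, then pick the top `size`.

def assemble_team(problem_tags, people, size=3):
    ranked = sorted(people.items(), key=lambda item: len(problem_tags & item[1]), reverse=True)
    return [name for name, _ in ranked[:size]]

experts = {
    "ana": {"supply chain", "optimization"},
    "bo": {"cybersecurity", "networks"},
    "cy": {"optimization", "machine learning"},
    "dee": {"human resources"},
}
print(assemble_team({"optimization", "machine learning", "supply chain"}, experts))
```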