Irving Wladawsky-Berger

A collection of observations, news and resources on the changing nature of innovation, technology, leadership, and other subjects.

A few weeks ago I wrote about Leveraging AI to Transform Government based on a 2018 report by the Partnership for Public Service and the IBM Center for the Business of Government.  The report illustrates how government is using AI by describing four use cases being pursued by different agencies.  In 2019, the Partnership and IBM collaborated on two additional reports, Assessing the Impact of AI on the Work of Government, and Building Trust, Managing Risks.

This first report is focused on the impact of AI on the federal workforce, based on an analysis of the jobs and tasks most susceptible to automation, as well as on roundtables and interviews with over 40 AI experts.  The overriding conclusion is that, over time, AI will have a deep impact on how government works.  More than 80 different federal occupations, employing over 130,000 workers, are likely to be significantly transformed, including 30,000 in the Treasury Department, 16,000 in Defense, and 5,000 in Transportation, as well as a variety of occupations in independent agencies like the Securities and Exchange Commission and the Government Publishing Office.

In particular, AI will have a major impact in three key areas: the transformation of the federal workday, the potential for personalized customer service and the increased importance of technical and data skills.  Let me summarize the findings and recommendations in each of these areas.

AI Will Transform the Federal Workday.  As is the case in the private sector, one of AI’s initial benefits in government is the automation of routine administrative tasks.  “Over time, federal employees will spend less time on repetitive administrative work and more of their workday on tasks that are core to their agencies’ missions, from mitigating hazards in workplaces to following up on complicated applications for grants or other government services.  In the long term, the transformation of federal work will likely go beyond automating the routine and will impact the nature of jobs.”

To help manage this transformation, government leaders should communicate often with employees to explain the potential of AI to alter their work using concrete examples from early adopters.  The discussions must include the kind of work employees can start doing in place of the more tedious, repetitive tasks that AI is automating.

AI Will Enable More Personalized Services to Agency Customers.  A number of surveys have found that leveraging digital technologies to provide a superior customer experience is one of the top priorities in the private sector.  For the most part, government lags in this area.  “On average, federal employees now spend only 2 percent of their time communicating with customers and other people outside their agencies, or less than one hour in a workweek.”  If, as predicted, AI will decrease the time spent on administrative work, federal employees should be able to focus more of their attention on providing a superior, personalized experience to their constituents.

Customer service skills will become more important.   “Federal employees should receive training that emphasizes skills for handling interactions with agency customers with the help of AI.  ‘Social literacy’ entails skills such as active listening, communication, critical thinking, negotiation, persuasion, reading comprehension and writing.”

AI Will Put Technical and Data Skills Front and Center.  “Creating, understanding, managing and working with AI requires technical, digital and data literacy that much of the workforce currently lacks…  As the government relies more on AI, federal employees will need a shared understanding about the technical, societal, economic and governmental aspects of AI and the data it relies on.”

The federal government should emphasize expertise in digital, data and AI skills.  This requires sufficient funding for AI and related technical training.  Given the scarcity of AI talent, the federal government should establish specialized teams to help federal agencies that need their expertise for AI projects.  Government hiring rules should make it easier to hire top talent from the private sector into such AI teams.

This second 2019 report is focused on understanding and addressing AI risks, based on lessons from companies and countries around the world.  Top AI challenges include bias, security, transparency, employee knowledge, and federal budget and procurement processes.  Let me briefly summarize each of these issues.

Bias.  Garbage in, garbage out applies as much to AI today as it has to computing since its early years.  Given that AI algorithms are trained using the vast amounts of data collected over the years, if the data include past racial, gender, or other biases, the predictions of these AI algorithms will reflect these biases.

These are serious issues in areas like predictive policing (the use of data and AI algorithms to automatically predict where crimes will take place and/or who will commit them), as well as in the use of AI by courts and correction departments to assist in bail, sentencing, and parole decisions.  “To address AI bias, federal organizations need employees with technical acumen and data analysis and interpretation skills who can detect data bias and inaccuracies,” and who understand how AI algorithms work and how conclusions are reached.
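To make the mechanism concrete, here is a minimal, hypothetical sketch (the groups, scores, and approval rule are all invented for illustration): a naive “model” that simply learns each group’s historical approval rate will reproduce a past disparity rather than correct it.

```python
import random

random.seed(0)

# Hypothetical historical decisions: group B applicants were approved
# less often than group A applicants with the same qualification score.
def historical_decision(group, score):
    threshold = 0.5 if group == "A" else 0.7  # biased past practice
    return score > threshold

applicants = [(g, random.random()) for g in ("A", "B") for _ in range(1000)]
labeled = [(g, s, historical_decision(g, s)) for g, s in applicants]

# A naive "model" that learns each group's past approval rate will
# reproduce the disparity baked into its training data.
def learned_approval_rate(group):
    outcomes = [approved for g, s, approved in labeled if g == group]
    return sum(outcomes) / len(outcomes)

rate_a = learned_approval_rate("A")
rate_b = learned_approval_rate("B")
print(f"Group A approval rate: {rate_a:.2f}")
print(f"Group B approval rate: {rate_b:.2f}")
```

The point is not the toy model itself but the pattern: no step in the pipeline is malicious, yet the learned behavior inherits the bias of the historical labels.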

Security.  As the economy moves toward a world where interactions are primarily governed by digital data, our existing methods of managing data security are proving inadequate.  Large-scale fraud, identity theft and data breaches are becoming common.  AI needs strong cybersecurity to protect against vulnerabilities and threats from bad actors, who could alter or corrupt the training data, or reveal personally identifiable information that was supposed to be anonymized.

Before we put AI algorithms in control of high-stakes decisions, we must be much more confident that these systems can survive large-scale cyberattacks.  Methods for protecting AI systems include “assigning human beings to monitor AI for integrity and attacks and enlisting employees to purposely attack systems to identify and fix vulnerabilities.”
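A toy sketch of one such attack, data poisoning, shows why the training data itself must be protected (the classifier and data here are entirely synthetic and illustrative): an attacker who can inject a few mislabeled training examples can shift a model’s decision boundary.

```python
# Toy nearest-centroid spam filter: a message's "score" is classified
# by whichever class centroid it is closer to.
def centroid(points):
    return sum(points) / len(points)

def classify(x, spam_centroid, ham_centroid):
    return "spam" if abs(x - spam_centroid) < abs(x - ham_centroid) else "ham"

ham = [1.0, 1.2, 0.9]
spam = [5.0, 5.2, 4.8]

# A borderline spam message at 3.5 is correctly caught.
print(classify(3.5, centroid(spam), centroid(ham)))

# An attacker injects a few points near 3.5 mislabeled as "ham",
# dragging the ham centroid toward the spam region.
poisoned_ham = ham + [3.4, 3.6, 3.5]
print(classify(3.5, centroid(spam), centroid(poisoned_ham)))
```

Red-teaming exercises of the kind the report recommends are aimed at exactly this class of weakness: finding where small, plausible manipulations of inputs or training data change a system’s behavior.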

Transparency.  People need to understand how an AI-based system made a prediction or arrived at a conclusion.  “The AI research and development community recognizes that transparency will promote trust in AI systems.  Researchers are looking into explainable AI and making AI algorithms and results less of a black box.  This will enable governments and others that incorporate AI into their processes to respond to questions about the decisions involving AI technology.”
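One simple form of explainability is to use models whose predictions decompose into per-feature contributions.  Here is a minimal, hypothetical sketch (the feature names and weights are invented, not from any real system): a linear scoring model that reports why it reached each decision alongside the decision itself.

```python
# A linear model is inherently explainable: each feature's contribution
# to the score can be reported with the prediction. Weights and feature
# names are illustrative only.
WEIGHTS = {"income": 0.4, "debt": -0.6, "years_employed": 0.2}
BIAS = 0.1

def predict_with_explanation(features):
    contributions = {name: WEIGHTS[name] * features[name] for name in WEIGHTS}
    score = BIAS + sum(contributions.values())
    decision = "approve" if score > 0 else "deny"
    return decision, score, contributions

decision, score, why = predict_with_explanation(
    {"income": 1.2, "debt": 0.8, "years_employed": 0.5}
)
print(decision, round(score, 2))
for name, c in sorted(why.items(), key=lambda kv: -abs(kv[1])):
    print(f"  {name}: {c:+.2f}")
```

More complex models need dedicated explanation techniques, but the goal is the same: letting an agency answer the question “why did the system decide this?” for any individual case.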

Employee Knowledge.  Hiring or training employees who understand how to use AI responsibly should be a high priority, but government often falls short on training and education due to funding and other challenges.  “The federal government should emphasize expertise in technical, digital and data skills.  It should provide extensive and ongoing training to employees so they can create, understand, manage and work with AI technology.”

Federal Budget and Procurement Processes.  “Outdated federal acquisition and budget processes prevent agencies from buying and deploying new technology quickly and efficiently… Additionally, the typical acquisition process involves purchasing a finished product or service, yet many AI applications are iterative, improving over time through experience with more and more data and evolving with technological advances.”  Agencies should take full advantage of the flexibilities available in budget and procurement processes to acquire the necessary AI technologies.

