In its early decades, IT helped companies become significantly more productive by automating many of their processes, reengineering their overall operations, and reaching beyond the boundaries of the enterprise. But their actual products were largely unaffected. This has all been changing in the 21st century with the rise of the Internet of Things (IoT), Big Data, and Artificial Intelligence, whose combined impact is driving a new era of smart connected products.
As the world’s digital and physical infrastructures converge, digital technologies are being designed right into complex products such as jet engines, power generators, medical equipment, and energy pipelines. Massive amounts of usage data can now be gathered over the internet, then stored and analyzed by sophisticated applications to help monitor the product and anticipate potential problems. This is particularly important in industries where safety by design can have a significant economic impact and, in some cases, literally save lives. Safety-by-design features are now in widespread use in all kinds of mechanical, electronic, and other complex physical systems.
“What if we could similarly embed regulatory objectives directly into the technical design of AI systems?” asked Robert Mahari in his keynote on Regulation by Design at the recent 2024 MIT IDE Annual Conference. Mahari is pursuing a joint JD-PhD degree: he received a JD from Harvard Law School in 2022 and is now a PhD candidate in the MIT Media Lab research group led by professor Alex ‘Sandy’ Pentland.
“Compliance and regulation by design represent a risk-management paradigm that’s uniquely suited for AI,” Mahari said. “Intelligent technology design can proactively prevent failures and risks.”
AI systems are the product of a very complex supply chain, from the selection of the large amounts of data needed to train AI models to the difficulty of testing, predicting, and explaining the models’ expected behavior. As a result, compliance with regulatory and ethical objectives is very hard for the users of AI-based applications, who have little control over, or understanding of, how the underlying AI system works. Could an AI system instead, as is the case with advanced engineering systems, monitor how it’s used, identify high-risk sessions, alert its developers and overseers, and thus help ensure compliance?
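To make that question concrete, here is a minimal sketch of what such session-level monitoring might look like. Everything in it is an assumption for illustration: the names (SessionEvent, score_risk, alert_overseers), the keyword heuristic standing in for a real risk classifier, and the printed alert standing in for a real escalation pipeline to developers and overseers.

```python
# Minimal sketch of "regulation by design" session monitoring.
# All names and thresholds are hypothetical; a production system would use a
# trained risk classifier and a proper audit/alerting pipeline.

from dataclasses import dataclass, field
from datetime import datetime, timezone

HIGH_RISK_TOPICS = {"medical diagnosis", "legal advice", "credit decision"}
RISK_THRESHOLD = 0.7

@dataclass
class SessionEvent:
    prompt: str
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

def score_risk(event: SessionEvent) -> float:
    """Toy stand-in for a real risk model: flag prompts touching high-risk topics."""
    text = event.prompt.lower()
    return 1.0 if any(topic in text for topic in HIGH_RISK_TOPICS) else 0.1

def alert_overseers(event: SessionEvent, score: float) -> None:
    """Placeholder for notifying developers/overseers (log, ticket, dashboard, ...)."""
    print(f"[ALERT {event.timestamp.isoformat()}] risk={score:.2f} prompt={event.prompt!r}")

def monitor(events: list[SessionEvent]) -> list[SessionEvent]:
    """Score every interaction, escalate the high-risk ones, return the flagged set."""
    flagged = []
    for event in events:
        score = score_risk(event)
        if score >= RISK_THRESHOLD:
            alert_overseers(event, score)
            flagged.append(event)
    return flagged

if __name__ == "__main__":
    session = [
        SessionEvent("Summarize this quarterly report"),
        SessionEvent("Should I approve this credit decision automatically?"),
    ]
    monitor(session)  # only the second prompt triggers an alert
```

The design choice this sketch illustrates is the one Mahari points to: compliance checks live inside the system's own usage loop rather than being left to downstream users who cannot see how the model works.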