A recent issue of The Economist included a special focus on “How AI Can Revolutionize Science,” with three articles on the topic. “Debate about artificial intelligence (AI) tends to focus on its potential dangers: algorithmic bias and discrimination, the mass destruction of jobs and even, some say, the extinction of humanity,” noted the issue’s lead article. “As some observers fret about these dystopian scenarios, however, others are focusing on the potential rewards. AI could, they claim, help humanity solve some of its biggest and thorniest problems. And, they say, AI will do this in a very specific way: by radically accelerating the pace of scientific discovery, especially in areas such as medicine, climate science and green technology.”
“Could they be right?” asked The Economist, reminding us that in the 1990s we had high hopes that the internet would reduce inequality and eradicate nationalism by transforming the world into a connected global village. Instead, internet-based social media platforms have been blamed for fueling political and ethnic polarization around the world.
“But the mechanism by which AI will supposedly solve the world’s problems has a stronger historical basis, because there have been several periods in history when new approaches and new tools did indeed help bring about bursts of world-changing scientific discovery and innovation.”
In a presentation at the 2013 MIT CIO Symposium, professor Erik Brynjolfsson pointed out that throughout history new tools beget revolutions. Scientific revolutions have been launched when new tools enable new kinds of measurements and observations. In 1676, for example, Antonie van Leeuwenhoek used a microscope, a relatively recent and rare tool, to discover the existence of microorganisms in a drop of water, leading over time to major discoveries in biology, medicine, public health and food production.
A more recent example, one that I’m personally quite familiar with, has been the impact of increasingly powerful computers on scientific research over the past several decades. The key event that led to my long career with computers took place during the summer of 1962, right before I entered college at the University of Chicago. Planning to major in math and physics, I wanted to get a summer job in the university’s research labs.
Through a variety of lucky circumstances, I learned that a new computation center was being started at the university. I went over and met its director, physics and chemistry professor Clemens Roothaan, one of the true pioneers in the use of computers in scientific research. Even though I knew nothing about computers (few 17-year-olds did in 1962), I ended up getting a summer job in the new computation center.
I went on to become a graduate student in physics at the University of Chicago with Professor Roothaan as my thesis advisor. My research mostly involved the use of computers in various kinds of atomic and molecular calculations. After getting my degree, I joined the computer sciences department at IBM’s research labs, the beginning of my 37-year career with the company. One of my positions at IBM was leading our new parallel supercomputing initiative in the early 1990s, as part of which I worked closely with leading-edge users of supercomputers in business, government, and universities.
“AI tools and techniques are now being applied in almost every field of science, though the degree of adoption varies widely: 7.2% of physics and astronomy papers published in 2022 involved AI, for example, compared with 1.4% in veterinary science,” said The Economist. “AI is being employed in many ways. It can identify promising candidates for analysis, such as molecules with particular properties in drug discovery, or materials with the characteristics needed in batteries or solar cells.”
“How scientists are using artificial intelligence,” a second article in the issue, noted that AI “is already making research faster, better, and more productive,” while describing some recent AI-based achievements.
In drug discovery, AI helped find new antibiotics, halicin and abaucin, for use against two of the most dangerous known antibiotic-resistant bacteria. “In both cases, the researchers had used an artificial-intelligence (AI) model to search through millions of candidate compounds to identify those that would work best against each ‘superbug’. The model had been trained on the chemical structures of a few thousand known antibiotics and how well (or not) they had worked against the bugs in the lab. During this training the model had worked out links between chemical structures and success at damaging bacteria. Once the AI spat out its shortlist, the scientists tested them in the lab and identified their antibiotics.”
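The train-then-screen workflow described above can be illustrated with a toy sketch. Everything here is made up for illustration: the fingerprints, the training labels, and the candidate names are hypothetical, and the real models use far richer learned molecular representations than this simple similarity score.

```python
# Toy sketch of AI-guided antibiotic screening (hypothetical data).
# A model "trained" on known compounds scores a large candidate pool,
# and only the top-ranked compounds go on to lab testing.

def tanimoto(a, b):
    """Similarity between two binary 'fingerprints' of chemical features."""
    on_a = {i for i, x in enumerate(a) if x}
    on_b = {i for i, x in enumerate(b) if x}
    union = len(on_a | on_b)
    return len(on_a & on_b) / union if union else 0.0

# Training set: fingerprints of known compounds and whether they
# worked against the bacterium in the lab.
training = [
    ((1, 1, 0, 1, 0), True),
    ((1, 0, 0, 1, 1), True),
    ((0, 0, 1, 0, 1), False),
    ((0, 1, 1, 0, 0), False),
]

def score(candidate):
    """Average similarity to actives minus similarity to inactives."""
    actives = [tanimoto(candidate, f) for f, ok in training if ok]
    inactives = [tanimoto(candidate, f) for f, ok in training if not ok]
    return sum(actives) / len(actives) - sum(inactives) / len(inactives)

# Millions of candidates in the real search; a handful here.
candidates = {
    "cand_A": (1, 1, 0, 1, 1),
    "cand_B": (0, 0, 1, 1, 0),
    "cand_C": (0, 1, 1, 0, 1),
}
shortlist = sorted(candidates, key=lambda n: score(candidates[n]), reverse=True)[:1]
print(shortlist)  # the best-scoring candidate for lab follow-up
```

The point of the sketch is the economics, not the chemistry: scoring is cheap, so the model can rank an enormous pool, while the expensive lab work is reserved for the shortlist.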
“If discovering new drugs is like searching for a needle in a haystack, AI acts like a metal detector,” said Regina Barzilay, an MIT computer scientist who was a member of both the halicin and abaucin research teams. “To get the candidate drugs from lab to clinic will take many years of medical trials. But there is no doubt that AI accelerated the initial trial-and-error part of the process. It changes what is possible,” added Barzilay. With AI, “the type of questions that we will be asking will be very different from what we’re asking today.”
Materials science is another important area where researchers have turned to AI to accelerate the search for new materials for batteries. The problem is similar to that in drug discovery: analyzing a huge number of possible compounds.
“When researchers at the University of Liverpool were looking for materials that would have the very specific properties required to build better batteries, they used an AI model known as an ‘autoencoder’ to search through all 200,000 of the known, stable crystalline compounds in the Inorganic Crystal Structure Database, the world’s largest such repository. The AI had previously learned the most important physical and chemical properties required for the new battery material to achieve its goals and applied those conditions to the search. It successfully reduced the pool of candidates for scientists to test in the lab from thousands to just five, saving time and money. The final candidate—a material combining lithium, tin, sulphur and chlorine — was novel, though it is too soon to tell whether or not it will work commercially.”
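The Liverpool team used a learned autoencoder; as a simplified stand-in, the sketch below uses a one-component linear projection (the closed-form analogue of a linear autoencoder) to illustrate the narrowing step. All numbers and candidate names are invented: compounds whose feature vectors reconstruct well from the component learned on known good battery materials are kept, and the rest are filtered out before any lab work.

```python
# Toy sketch of autoencoder-style screening (hypothetical features).
# Candidates that compress-and-reconstruct well resemble the known
# good materials; poor reconstruction means "doesn't fit the pattern".

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def principal_direction(data, iters=100):
    """Power iteration on the (uncentered) covariance of the training data."""
    d = len(data[0])
    cov = [[sum(x[i] * x[j] for x in data) for j in range(d)] for i in range(d)]
    w = [1.0] * d
    for _ in range(iters):
        w = matvec(cov, w)
        norm = sum(c * c for c in w) ** 0.5
        w = [c / norm for c in w]
    return w

def recon_error(w, x):
    """Error after compressing x to one latent value and reconstructing."""
    z = sum(wi * xi for wi, xi in zip(w, x))   # encode
    xhat = [wi * z for wi in w]                # decode
    return sum((a - b) ** 2 for a, b in zip(x, xhat))

# Feature vectors of known, well-behaved battery materials (made up).
known_good = [(1.0, 2.0, 0.0), (1.1, 2.2, 0.1), (0.9, 1.8, -0.1)]
w = principal_direction(known_good)

# 200,000 candidates in the real search; three here.
pool = {
    "cand_X": (1.05, 2.1, 0.0),
    "cand_Y": (0.0, 0.1, 3.0),
    "cand_Z": (2.0, 1.0, 1.0),
}
shortlist = sorted(pool, key=lambda n: recon_error(w, pool[n]))[:1]
print(shortlist)  # the candidate closest to the learned structure
```

As in the drug-discovery case, the model's job is triage: reduce a pool of thousands to a handful of lab-testable candidates.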
Other projects discussed in the article include AlphaFold, an AI model for predicting protein structures to help understand the internal mechanisms of cells; and Pangu-Weather, a deep-learning-based system for fast, inexpensive, and accurate global weather forecasts up to about a week in advance.
The third article in the Economist issue asked “Could AI transform science itself?” The article notes that the advent of scientific journals and books, which let researchers share their findings and build on each other’s results, played a crucial role in the surge of scientific advances in the 16th and 17th centuries known today as the scientific revolution. These advances were made possible by the introduction of the printing press by Johannes Gutenberg around 1440, which accelerated the spread of knowledge and literacy in Renaissance Europe.
Gutenberg’s printing revolution influenced almost every facet of life in the centuries that followed. The very breadth of the printing press’s impact, as well as that of digital information on the internet, makes comparison with large language models (LLMs) almost unavoidable. Books, journals, and articles have significantly expanded the knowledge we’ve all had access to, helping us generate much more knowledge and new kinds of disciplines. Similarly, LLMs trained on a given body of knowledge can now help us derive and generate all kinds of additional knowledge.
“A further transformation began in the late 19th century, with the establishment of research laboratories — factories of innovation where ideas, people and materials could be combined on an industrial scale. This led to a further outpouring of innovation, from chemicals and semiconductors to pharmaceuticals. These shifts did more than just increase scientific productivity. They also transformed science itself, opening up new realms of research and discovery.”
“How might AI do something similar, not just generating new results, but new ways to generate new results?” asked The Economist. The article cites literature-based discovery (LBD) as a promising approach for new discoveries by analyzing the scientific literature to suggest new potential relationships between existing knowledge. “Literature-based discovery aims to discover new knowledge by connecting information which have been explicitly stated in literature to deduce connections which have not been explicitly stated,” notes the LBD Wikipedia article.
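The core LBD mechanism is often described as Swanson's "ABC" model: if the literature links concept A to B, and B to C, but no paper links A to C directly, the A–C pair is a candidate hidden connection. The sketch below uses a tiny, made-up corpus inspired by Swanson's famous fish-oil/Raynaud's-syndrome discovery; the papers and concept sets are hypothetical stand-ins.

```python
# Toy sketch of Swanson's "ABC" literature-based discovery
# (hypothetical paper data, inspired by his fish-oil example).

papers = [
    {"fish oil", "blood viscosity"},           # each set: concepts in one paper
    {"blood viscosity", "raynaud's syndrome"},
    {"fish oil", "platelet aggregation"},
    {"platelet aggregation", "raynaud's syndrome"},
    {"aspirin", "platelet aggregation"},
]

def cooccurring(term):
    """All concepts that appear in some paper together with `term`."""
    return {c for p in papers if term in p for c in p} - {term}

def hidden_links(a):
    """Concepts reachable from `a` through one intermediate, never directly."""
    direct = cooccurring(a)
    via_b = {c for b in direct for c in cooccurring(b)} - {a}
    return via_b - direct

print(hidden_links("fish oil"))
```

In this toy corpus, "fish oil" never appears in the same paper as "raynaud's syndrome", yet the two are linked through shared intermediates, which is exactly the kind of implicit connection LBD is designed to surface.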
Originally suggested in the 1980s by Don Swanson, LBD had to wait for AI systems to become much more powerful and more capable at natural-language processing, and for a much larger corpus of digital scientific literature to become available for analysis. In 2019, researchers at Lawrence Berkeley National Laboratory showed that a machine-learning algorithm with no training in materials science was able to uncover new scientific knowledge by scanning 3.3 million abstracts of published papers in materials science.
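The Berkeley work relied on word embeddings learned from abstracts; a heavily simplified version of the idea is sketched below with a tiny, invented corpus. Each term is represented by its co-occurrence counts with every other term, so two terms can end up with similar vectors, and hence a high cosine similarity, even if they never appear in the same abstract.

```python
# Toy sketch of embedding-style discovery from abstracts
# (tiny hypothetical corpus; real systems use learned embeddings
# such as word2vec over millions of abstracts).
from collections import Counter
from math import sqrt

abstracts = [
    ["compound_a", "thermoelectric", "high", "figure_of_merit"],
    ["compound_a", "low", "thermal", "conductivity"],
    ["compound_b", "low", "thermal", "conductivity"],
    ["compound_b", "high", "figure_of_merit"],
    ["compound_c", "transparent", "optical", "coating"],
]

vocab = sorted({w for a in abstracts for w in a})

def vector(term):
    """Co-occurrence counts of `term` with every word in the vocabulary."""
    counts = Counter()
    for a in abstracts:
        if term in a:
            counts.update(w for w in a if w != term)
    return [counts[w] for w in vocab]

def cosine(u, v):
    dot = sum(x * y for x, y in zip(u, v))
    nu = sqrt(sum(x * x for x in u))
    nv = sqrt(sum(x * x for x in v))
    return dot / (nu * nv) if nu and nv else 0.0

# Rank compounds by similarity to "thermoelectric", even though only
# compound_a ever co-occurs with the word itself.
target = vector("thermoelectric")
for name in ["compound_a", "compound_b", "compound_c"]:
    print(name, round(cosine(vector(name), target), 2))
```

Here "compound_b" scores well against "thermoelectric" despite never co-occurring with it, because both share context words like "figure_of_merit"; that is the mechanism by which an algorithm with no materials-science training can flag promising candidates from text alone.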
In a recent paper, two University of Chicago sociologists extended the LBD approach by suggesting that instead of just focusing on concepts within papers, LBD should focus on both the papers’ concepts and their authors. The resulting system was twice as good at forecasting new discoveries in materials science as the one previously reported by the Berkeley Lab team. In addition, “LBD systems that take authorship into account can also suggest potential collaborators who may not know each other,” a kind of scientific dating service. “This approach could be particularly effective when identifying scientists who work in different fields, bridging complementary areas of research.”
“Scientific journals changed how scientists discovered information and built on each other’s work,” said The Economist in conclusion. “Research laboratories scaled up and industrialised experimentation. By extending and combining these two previous transformations, AI could indeed change the way science is done.”