
How scientists are using artificial intelligence

In both instances (the discoveries of the antibiotics abaucin and halicin), scientists employed an AI model to search through millions of potential compounds and identify those most likely to be effective against these "superbugs". The model was trained on the chemical structures of a few thousand known antibiotics and their efficacy against bacteria in lab experiments. During training, the model established connections between chemical structures and their ability to damage bacteria. Once the AI generated a shortlist, scientists conducted lab tests to identify the most promising candidates. According to Regina Barzilay, a computer scientist at MIT involved in the search for abaucin and halicin, if discovering new drugs is like finding a needle in a haystack, AI acts as a metal detector. Moving the candidate drugs from the lab to clinical trials will take several years. Nevertheless, AI significantly accelerated the initial trial-and-error phase of the process. Dr. Barzilay said that AI changes what is possible and will lead to new types of scientific inquiry that differ from those currently pursued.
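The screening workflow described above can be sketched in a few lines. Everything below is a toy stand-in: the "fingerprint" featurizer, the linear scoring model, and the compound strings are illustrative inventions, not real chemistry and not the MIT team's actual model.

```python
# Minimal sketch of AI-guided virtual screening: fit a model on known
# antibiotics, score a large candidate library, and keep a shortlist
# for lab testing. All chemistry here is a toy stand-in.

import random

random.seed(0)

def fingerprint(compound):
    # Hypothetical featurizer: count occurrences of each element symbol
    # (real work would use proper chemical descriptors).
    return [compound.count(c) for c in "CNOS"]

# Training data: (compound, measured antibacterial activity) pairs.
known = [("CCN", 0.9), ("CCO", 0.2), ("CNS", 0.8), ("OOO", 0.1)]

# "Training": learn one weight per feature by correlating feature counts
# with activity (a crude linear model standing in for deep learning).
weights = [0.0] * 4
for comp, act in known:
    for i, x in enumerate(fingerprint(comp)):
        weights[i] += x * act / len(known)

def score(compound):
    return sum(w * x for w, x in zip(weights, fingerprint(compound)))

# Screen a large library and keep the top candidates for the lab.
library = ["".join(random.choice("CNOS") for _ in range(3))
           for _ in range(10000)]
shortlist = sorted(library, key=score, reverse=True)[:5]
print(shortlist)
```

The point of the sketch is the division of labour: the model only ranks the haystack; the lab still confirms which straws are needles.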

The impact of AI extends beyond drug discovery. Researchers tackling complex and important problems, such as weather forecasting, the search for new materials, and controlling nuclear fusion reactions, are turning to AI to enhance and expedite their progress.

The potential of AI is vast. Demis Hassabis, co-founder of Google DeepMind, an AI lab based in London, argues that AI could bring about a new era of discovery by acting as a multiplier for human ingenuity. He compares AI to a telescope, an essential tool that enables scientists to see further and deepen their understanding beyond what is possible with the naked eye alone.



(Graphic: The Economist)

The Importance of AI in Various Disciplines

Although AI has been a part of the scientific toolkit since the 1960s, it remained largely confined to disciplines where researchers were already proficient in computer programming, such as particle physics and mathematics. However, by 2023, thanks to the rise of deep learning, over 99% of research fields yielded AI-related results, according to CSIRO, Australia’s science agency (see chart). Mark Girolami, chief scientist at the Alan Turing Institute in London, attributes this explosion to the democratization of AI. Previously, utilizing AI tools necessitated expertise in computer science and specialized programming languages. Today, user-friendly AI tools, often accessible through platforms like ChatGPT, OpenAI’s chatbot, enable scientists to perform tasks that previously required a computer science degree and extensive coding knowledge. Consequently, scientists now have access to a tireless research assistant capable of solving equations and sifting through vast amounts of data in search of patterns and correlations.

In the field of materials science, for instance, researchers face a problem similar to drug discovery—there are countless potential compounds to explore. When scientists at the University of Liverpool sought materials with specific properties for better batteries, they employed an AI model called an “autoencoder” to search through the Inorganic Crystal Structure Database, which contains 200,000 known, stable crystalline compounds. The AI model had previously learned the crucial physical and chemical properties that the new battery material needed to possess. By applying these conditions to the search, the AI successfully narrowed down the candidates for lab experimentation from thousands to just five, saving valuable time and resources.

The final candidate—a composite material composed of lithium, tin, sulfur, and chlorine—was novel, but its commercial viability remains to be determined. Nevertheless, researchers are employing the same AI method to discover other types of new materials.

The Predictive Power of AI

AI also serves as a predictive tool. The structural conformation that proteins adopt after their synthesis is crucial to their functioning. Scientists have yet to fully comprehend the process of protein folding. However, in 2021, Google DeepMind developed AlphaFold, a model capable of predicting protein structure solely from the amino acid sequence. AlphaFold has already produced a database of over 200 million predicted protein structures, which has been utilized by over 1.2 million researchers. For instance, a biochemist at the University of Oxford named Matthew Higgins employed AlphaFold to determine the shape of a protein in mosquitoes that plays a crucial role in the transmission of the malaria parasite. This information enabled Higgins to identify the protein regions most vulnerable to drug targeting. Another team used AlphaFold to determine the structure of a protein that influences the proliferation of a type of liver cancer in just 30 days, paving the way for the design of targeted treatments.

Moreover, AlphaFold has contributed to understanding other biological phenomena. For instance, by harnessing AlphaFold’s predictions, scientists gained insight into the structure and internal mechanisms of the gates in a cell’s nucleus, which regulate the entry of materials required for protein production. According to Pushmeet Kohli, one of AlphaFold’s inventors who leads Google DeepMind’s “AI for Science” team, the process behind AlphaFold’s generation of accurate structures remains somewhat opaque. Nonetheless, the generated structures serve as foundations upon which the scientific community can build.

AI also proves to be invaluable in expediting complex computer simulations. Weather models, for instance, rely on mathematical equations that describe Earth’s atmospheric conditions at any given time. However, the supercomputers used for weather forecasting are costly, power-intensive, and time-consuming. These models require repeated execution to keep pace with the constant influx of data from weather stations worldwide.

In response, climate scientists and private entities are turning to machine learning to accelerate processes. Huawei, a Chinese company, developed Pangu-Weather—an AI capable of making weather predictions up to a week in advance, thousands of times faster and cheaper than the prevailing standard, without sacrificing accuracy. Nvidia, an American chipmaker, designed FourCastNet—a model capable of providing such forecasts in under two seconds. This model is also the first to accurately predict rain at a high spatial resolution, which is vital for anticipating natural disasters like flash floods. Both AI models are trained on observational data and supercomputer simulations. Furthermore, Nvidia announced plans to construct a digital replica of Earth called “Earth-2,” enabling predictions of climate change at a regional level several decades in advance.

Physicists striving to harness the power of nuclear fusion have also turned to AI for assistance in controlling complex equipment. One approach involves confining hydrogen plasma within a doughnut-shaped vessel called a tokamak, as fusion reactions occur when the plasma reaches temperatures around 100 million degrees Celsius. To maintain the plasma’s integrity, physicists employ a magnetic cage. Determining the optimal configuration of magnetic fields is extremely challenging, akin to holding a lump of jelly with knitting wool. Manual control necessitates creating mathematical equations to predict plasma behavior and making thousands of small adjustments per second across multiple magnetic coils. In contrast, an AI control system designed by scientists at Google DeepMind and EPFL in Lausanne, Switzerland, allows for extensive experimentation with different plasma shapes using computer simulations. The AI then identifies the most effective methods of achieving the desired shapes.

Automation and acceleration of physical experiments and laboratory work represent another area benefiting from AI. “Self-driving laboratories” can plan and execute experiments using robotic arms, followed by automated analysis of results. Through automation, the process of discovering new compounds or improving existing ones can be accelerated by up to a thousand-fold.

The Role of Generative AI

Generative AI, which gained attention with the introduction of ChatGPT in 2022, has two primary scientific applications. Firstly, it can generate data. AI models capable of “super-resolution” can enhance low-resolution electron microscope images into high-resolution ones, making expensive high-resolution imaging unnecessary. The AI accomplishes this by comparing high-resolution and low-resolution recordings of a small area of a material or biological sample, learning the distinctions between the two resolutions, and translating between them.
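The paired-resolution training idea can be illustrated with a deliberately simple stand-in: a linear least-squares map learned from matched low- and high-resolution versions of a synthetic 1D signal. Real super-resolution models are deep neural networks; the signal, window sizes, and linear map below are assumptions for illustration only.

```python
# Toy illustration of paired-resolution training: learn a linear map
# from low-resolution windows to high-resolution samples, then apply
# it to upscale a new low-resolution signal.

import numpy as np

rng = np.random.default_rng(0)

# Synthetic "high-resolution" training signal and its 2x-downsampled pair.
hi = np.cumsum(rng.normal(size=1024))      # smooth-ish 1D signal
lo = hi.reshape(-1, 2).mean(axis=1)        # low-res: average adjacent pairs

# Training pairs: each window of 3 low-res samples predicts the
# 2 high-res samples at its centre.
X = np.stack([lo[i:i + 3] for i in range(len(lo) - 2)])
Y = np.stack([hi[2 * (i + 1):2 * (i + 1) + 2] for i in range(len(lo) - 2)])

W, *_ = np.linalg.lstsq(X, Y, rcond=None)  # learned upscaling filter

# Apply the learned map to a new low-resolution signal.
hi2 = np.cumsum(rng.normal(size=1024))
lo2 = hi2.reshape(-1, 2).mean(axis=1)
pred = np.stack([lo2[i:i + 3] for i in range(len(lo2) - 2)]) @ W
print(pred.shape)  # (510, 2): two high-res samples per low-res window
```

The mechanism mirrors the one in the text: the model sees matched recordings at both resolutions, learns the mapping between them, and can then translate fresh low-resolution data upward.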



(Graphic: The Economist)

Additionally, just as large language models (LLMs) can generate coherent sentences by predicting the next appropriate word, generative molecular models can construct molecules atom by atom and bond by bond. LLMs utilize self-taught statistics and trillions of words from internet sources to generate human-like text. Similarly, models focused on “de novo molecular design” identify molecular structures most likely to exhibit desired properties using vast databases of known drugs and their properties. Verseon, a California-based pharmaceutical company, has created drug candidates using this approach. Several of these candidates are currently undergoing animal testing, and one—a precise anticoagulant—is already in Phase I clinical trials. Like AI-identified antibiotics and battery materials, chemically designed compounds will necessitate real-world trials to assess their effectiveness.
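The next-token analogy can be made concrete with a toy bigram model over SMILES-like strings. The corpus, alphabet, and sampling scheme below are invented for illustration; real de novo design models are vastly larger and chemistry-aware.

```python
# Sketch of "next-token" generation applied to molecules: sample a
# SMILES-like string character by character from conditional
# probabilities estimated from a tiny corpus.

import random
from collections import defaultdict

random.seed(1)

# Toy training corpus of SMILES-like strings ("$" marks end of molecule).
corpus = ["CCO$", "CCN$", "CCC$", "CNO$", "COC$"]

# Count next-character frequencies after each character (a bigram model;
# "^" marks the start of a molecule).
counts = defaultdict(lambda: defaultdict(int))
for s in corpus:
    for a, b in zip("^" + s, s):
        counts[a][b] += 1

def sample_molecule(max_len=10):
    out, ch = "", "^"
    while len(out) < max_len:
        chars, weights = zip(*counts[ch].items())
        ch = random.choices(chars, weights=weights)[0]
        if ch == "$":            # end-of-molecule token
            break
        out += ch
    return out

print([sample_molecule() for _ in range(3)])
```

Just as an LLM predicts the next word from the words so far, this sampler predicts the next atom symbol from the one before it; scaled up, the same mechanism builds candidate molecules bond by bond.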

LLMs also hold potential in more futuristic applications. Igor Grossmann, a psychologist from the University of Waterloo, envisions using LLMs fitted with realistic (or fabricated) backstories to replace focus groups or serve as agents in economics research. LLMs trained with various personas could simulate experiments, and interesting results could subsequently be validated with human subjects.

LLMs are already increasing researchers' efficiency. GitHub reports that developers code up to 55% faster when using its LLM-based assistant, Copilot. The volume of scientific literature that must be read before embarking on a project can be daunting; modern literatures have long outgrown what any individual can keep up with. As a solution, Ought, an American non-profit research lab, developed Elicit—an AI tool that employs LLMs to summarize important research papers more efficiently than any human could. Students and young scientists already benefit from Elicit, as it helps them find relevant papers to cite or define research directions amid a vast expanse of text. LLMs can even extract structured information, such as experimental data on specific drugs, from millions of documents.

AI’s potential to democratize knowledge within disciplines should not be overlooked. Each detector at CERN’s Large Hadron Collider necessitates specialized teams to operate and analyze data. However, combining and comparing data from multiple detectors is challenging without collaboration among physicists from each detector. This poses difficulties for theoretical physicists aiming to test new ideas promptly. To overcome this, Miguel Arratia—a physicist from the University of California, Riverside—proposes using AI to integrate measurements from various fundamental physics experiments and cosmological observations. The integrated data would enable theoretical physicists to explore, combine, and repurpose it for their work.

While AI models have demonstrated their ability to process data, automate calculations, and even assist with lab work (see table), Dr. Girolami cautions that AI struggles to tackle problems beyond the boundaries of existing knowledge. These models excel at interpolation—connecting known data points—but are less proficient at extrapolation and predicting new data points.

Certain challenges still elude even the most successful AI systems to date. AlphaFold, for example, does not consistently predict protein structures with complete accuracy. Structural biologist Jane Dyson from the Scripps Research Institute in California highlights that AlphaFold’s predictions often fall short for “disordered” proteins, which are relevant to her research. She emphasizes that the technology is not revolutionary enough to render scientists obsolete. Additionally, AlphaFold has yet to elucidate why proteins fold in specific ways. Nonetheless, Dr. Kohli suggests that the AI may possess theories that have yet to be comprehended by scientists.

Despite these limitations, structural biologists acknowledge that AlphaFold has made them more efficient. Its database of predicted structures lets researchers look up a protein's likely shape in seconds, where determining it experimentally could previously take years and cost tens of thousands of dollars.

The acceleration of scientific research and discovery, coupled with increased efficiencies, holds tremendous potential. According to the OECD's report on AI in science, "while AI is penetrating all domains and stages of science, its full potential is far from realized." The report concludes that accelerating research productivity could be the most economically and socially valuable application of artificial intelligence.

Embracing the Potential of AI

If AI tools succeed in enhancing research productivity, Dr. Hassabis’ prediction of AI as a “multiplier for human ingenuity” would undoubtedly materialize. However, AI offers even greater potential. Just as telescopes and microscopes enable scientists to perceive more of the world, AI’s probabilistic, data-driven models allow for better modeling and understanding of complex systems. Fields like climate science and structural biology have hitherto relied on top-down approaches, employing rules, equations, and simulations to comprehend intricate processes known to occur. AI enables scientists to adopt a bottom-up approach—collecting extensive data first and employing algorithms to derive rules, patterns, equations, and scientific understanding.

If the past few years saw scientists cautiously dip their toes into the waters of AI, the next decade and beyond will require them to dive headfirst into its depths and swim toward the horizon.


© 2023, The Economist Newspaper Limited. All rights reserved. From The Economist, published under license. The original content can be found on www.economist.com


