Modern artificial intelligence is a product of decades of painstaking scientific research. Now, it’s starting to pay that effort back by accelerating progress across academia.
Ever since the emergence of AI as a field of study, researchers have dreamed of creating tools smart enough to accelerate humanity’s endless drive to acquire new knowledge. With the advent of deep learning in the 2010s, this goal finally became a realistic possibility.
Between 2012 and 2022, the proportion of scientific papers relying on AI in some way quadrupled to almost 9 percent. Researchers are using neural networks to analyze data, conduct literature reviews, and model complex processes across every scientific discipline. And as the technology advances, the scope of problems they can tackle is expanding by the day.
The poster child for AI’s use in science is undoubtedly Google DeepMind’s AlphaFold, whose inventors won the 2024 Nobel Prize in Chemistry. The model used advances in transformers, the architecture that powers large language models, to solve the “protein folding problem” that had bedeviled scientists for decades.
A protein’s structure determines its function, but previously the only way to discover its shape was through complex imaging techniques like X-ray crystallography and cryo-electron microscopy. AlphaFold, by contrast, could predict a protein’s shape from nothing more than its amino acid sequence, something computer scientists had been trying and failing to do for years.
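To make the shape of the problem concrete, here is a minimal Python sketch of the input and output involved. The `predict_structure` function is a hypothetical stand-in for a model like AlphaFold, not its real interface; actual pipelines also return per-residue confidence scores.

```python
import numpy as np

# Hypothetical stand-in for a structure predictor such as AlphaFold.
# Input: an amino acid sequence, one letter per residue.
# Output: an (N, 3) array of 3D coordinates, one point per residue.
def predict_structure(sequence: str) -> np.ndarray:
    rng = np.random.default_rng(0)
    # A real model infers geometry from the sequence (and evolutionarily
    # related sequences); here we just return placeholder coordinates.
    return rng.normal(size=(len(sequence), 3))

seq = "MKTAYIAKQRQISFVKSHFSRQ"   # a short example sequence
coords = predict_structure(seq)
print(coords.shape)              # (22, 3): one 3D position per residue
```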
That capability made it possible to predict the shape of every protein known to science in just two years, a feat with potentially transformative implications for biomedical research. AlphaFold 3, released in 2024, goes even further: it can predict the structures and interactions not only of proteins but also of DNA, RNA, and other biomolecules.
Google has also turned its AI loose on another area of the life sciences, working with Harvard researchers to create the most detailed map of human brain connections to date. The team took ultra-thin slices from a cubic millimeter of human brain tissue and used AI-based imaging technology to map the roughly 50,000 cells and 150 million synaptic connections within.
This is by far the most detailed “connectome” of the human brain produced to date, and the data is now freely available, providing scientists a vital tool for exploring neuronal architecture and connectivity. This could boost our understanding of neurological disorders and potentially provide insights into core cognitive processes like learning and memory.
AI is also revolutionizing materials science. In 2023, Google DeepMind released a graph neural network called GNoME that predicted 2.2 million novel inorganic crystal structures, including 380,000 stable ones that could form the basis of new technologies.
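As a rough illustration of the graph-neural-network idea (this is not DeepMind’s actual code), the sketch below treats atoms as nodes and neighbor relations as edges. Each layer updates every atom’s features using those of its neighbors, and a final readout predicts a property such as stability.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy crystal graph: 4 atoms (nodes) with 8-dim feature vectors,
# and edges linking neighboring atoms in the lattice.
node_feats = rng.normal(size=(4, 8))
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]

W_self = rng.normal(size=(8, 8)) * 0.1   # toy "learned" weights
W_nbr = rng.normal(size=(8, 8)) * 0.1

def message_passing_step(h):
    """One GNN layer: each atom mixes its own features with the
    mean of its neighbors' features, then applies a nonlinearity."""
    agg = np.zeros_like(h)
    deg = np.zeros(len(h))
    for i, j in edges:
        agg[i] += h[j]; agg[j] += h[i]
        deg[i] += 1; deg[j] += 1
    agg /= deg[:, None]
    return np.tanh(h @ W_self + agg @ W_nbr)

h = node_feats
for _ in range(3):        # stack layers so information spreads across the graph
    h = message_passing_step(h)

energy = float(h.mean())  # toy readout standing in for a predicted stability score
print(round(energy, 4))
```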
Not to be outdone, other big AI developers have jumped into the space. Last year, Meta released and open-sourced its own transformer-based materials discovery models along with, crucially, a dataset of more than 110 million materials simulations used to train them, allowing other researchers to build their own materials science AI models.
Earlier this year, Microsoft released MatterGen, which uses a diffusion model, the same architecture behind many image and video generators, to produce novel inorganic crystals. After fine-tuning, the researchers showed the model could be prompted to produce materials with specific chemical, mechanical, electronic, and magnetic properties.
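Here is a loose sketch of the diffusion idea, not MatterGen’s actual algorithm: training teaches a network to remove noise from known crystal structures, and generation then starts from pure noise and applies that denoising step repeatedly, optionally conditioned on the properties you want.

```python
import numpy as np

rng = np.random.default_rng(0)

def denoise_step(x, t, condition):
    """Stand-in for a trained denoiser: nudges the sample toward a target
    determined by the conditioning vector. A real model would be a neural
    network trained to reverse noise added to known materials."""
    return x + 0.1 * (condition - x) + 0.02 * (t / 50) * rng.normal(size=x.shape)

condition = np.array([1.0, -0.5, 2.0])   # e.g. encoded target band gap or magnetism
x = rng.normal(size=3)                   # generation starts from pure noise
for t in range(50, 0, -1):               # and iteratively denoises over 50 steps
    x = denoise_step(x, t, condition)

print(np.round(x, 2))   # the sample has drifted toward the conditioned target
```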
One of AI’s biggest strengths is its ability to model systems far too complex for conventional computational techniques. This makes it a natural fit for weather forecasting and climate modeling, which currently rely on enormous physical simulations running on supercomputers.
Google DeepMind’s GraphCast was the first model to show the promise of the approach. It uses graph neural networks to generate 10-day forecasts in one minute, at higher accuracy than existing gold-standard approaches that take several hours.
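Much of the speedup comes from how such models are run: rather than integrating physics equations, a trained network maps the current atmospheric state to the state six hours later, and a 10-day forecast is simply that step applied 40 times. Below is a minimal sketch of the rollout, with a dummy step function in place of the trained network.

```python
import numpy as np

def step_6h(state):
    """Stand-in for a trained forecast network that advances the gridded
    atmospheric state by six hours. GraphCast-style models learn this
    mapping from decades of historical reanalysis data."""
    return 0.99 * state + 0.01 * np.roll(state, 1, axis=-1)   # toy dynamics

state = np.random.default_rng(0).normal(size=(4, 16))  # toy grid: 4 fields x 16 cells
for _ in range(40):   # 40 steps x 6 hours = a 10-day forecast
    state = step_6h(state)

print(state.shape)    # same shape as the input: a full forecast state
```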
AI forecasting has proved so effective that the European Centre for Medium-Range Weather Forecasts has already deployed it: the center’s Artificial Intelligence Forecasting System went live earlier this year. The model is faster, 1,000 times more energy efficient, and has boosted forecast accuracy by 20 percent.
Microsoft has created what it calls a “foundation model for the Earth system” named Aurora, trained on more than a million hours of geophysical data. It outperforms existing approaches at predicting air quality, ocean waves, and the paths of tropical cyclones while using orders of magnitude less computation.
AI is also contributing to fundamental discoveries in physics. When the Large Hadron Collider smashes particle beams together, it produces millions of collisions every second. Sifting through all this data for interesting phenomena is a monumental task, so researchers are now turning to AI to do it for them.
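A common pattern is to train a classifier on simulated collisions, then use it as a fast filter over the real data: score every event and keep only the handful that look interesting. Here is a schematic Python sketch, with toy features and a stand-in scoring function rather than an actual LHC trigger.

```python
import numpy as np

rng = np.random.default_rng(0)
events = rng.normal(size=(1_000_000, 5))  # one million events, 5 toy features each

# Stand-in for a trained classifier: a fixed linear score through a sigmoid.
w = np.array([0.8, -0.3, 0.5, 0.1, -0.6])
scores = 1 / (1 + np.exp(-events @ w))    # "probability this event is interesting"

keep = events[scores > 0.99]              # retain only the rarest, highest-scoring events
print(f"kept {len(keep)} of {len(events):,} events")
```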
Similarly, researchers in Germany have been using AI to comb through gravitational wave data for signs of neutron star mergers. This helps scientists detect mergers in time to point telescopes at them.
Perhaps most exciting, though, is the promise of AI taking on the role of scientist itself. Thanks to a combination of lab automation, robotics, and machine learning, it’s becoming possible to create “self-driving labs.” These take a high-level objective from a researcher, such as achieving a particular yield from a chemical reaction, then autonomously run experiments until they hit that goal.
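In software terms, a self-driving lab is a closed loop: propose experimental conditions, run the experiment (on robotic hardware), measure the outcome, and refine the proposal until the objective is met. Below is a minimal sketch with a simulated reaction standing in for the lab; real systems typically use smarter strategies, such as Bayesian optimization, in place of this greedy random search.

```python
import random

def run_experiment(temp, conc):
    """Simulated reaction yield; in a real self-driving lab this call
    would drive robotic hardware and return a measured value."""
    return 100 - (temp - 80) ** 2 / 10 - (conc - 0.5) ** 2 * 100

random.seed(0)
target_yield = 95.0
best = (25.0, 0.1)            # starting conditions: temperature (C), concentration (M)
best_yield = run_experiment(*best)

while best_yield < target_yield:
    # Propose a new experiment near the best conditions found so far.
    cand = (best[0] + random.uniform(-10, 10), best[1] + random.uniform(-0.1, 0.1))
    y = run_experiment(*cand)
    if y > best_yield:        # keep improvements, discard the rest
        best, best_yield = cand, y

print(f"hit {best_yield:.1f}% yield at T={best[0]:.1f} C, c={best[1]:.2f} M")
```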
Others are going further and actually involving AI in the planning and design of experiments. In 2023, Carnegie Mellon University researchers showed that their AI “Coscientist,” powered by OpenAI’s GPT-4, could autonomously plan and carry out the chemical synthesis of known compounds.
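The core pattern behind such systems is an agent loop: a language model is given the goal and a set of tools (literature search, code execution, lab equipment), proposes the next action, observes the result, and repeats until it judges the goal complete. Here is a schematic sketch, with hypothetical functions standing in for the model and the tools.

```python
# Schematic agent loop; call_llm and execute are hypothetical stand-ins,
# not the actual interfaces of Coscientist or GPT-4.

def call_llm(goal, history):
    """Would query a language model with the goal and results so far;
    here we script a fixed plan to keep the sketch self-contained."""
    plan = ["search_literature", "write_protocol", "run_synthesis", "done"]
    return plan[len(history)]

def execute(action):
    """Would call real tools: web search, code execution, lab equipment."""
    return f"result of {action}"

goal = "synthesize a known compound"
history = []
while True:
    action = call_llm(goal, history)   # the model picks the next step
    if action == "done":
        break
    history.append((action, execute(action)))

for step, (action, result) in enumerate(history, 1):
    print(f"{step}. {action} -> {result}")
```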
Google has created a multi-agent system powered by its Gemini 2.0 reasoning model that can help scientists generate hypotheses and propose new research projects. And another “AI scientist” developed by Sakana AI wrote a machine learning paper that passed the peer-review process for a workshop at a prestigious AI conference.
Exciting as all this is, though, AI’s takeover of science has potential downsides. Neural networks are black boxes whose internal workings are hard to decipher, which can make their results challenging to interpret. And many researchers are not familiar enough with the technology to catch common pitfalls that can distort results.
Nonetheless, these models’ incredible capacity to crunch through data and model phenomena at scales far beyond human comprehension makes them a vital tool. Applied judiciously, AI could massively accelerate progress across a wide range of fields.