Former Google CEO Eric Schmidt: How artificial intelligence will change the way scientific research is done
Written by: Eric Schmidt
Source: MIT Technology Review
It's another summer of extreme weather, with unprecedented heatwaves, wildfires and floods hitting countries around the world. To meet the challenge of accurately predicting such extreme weather, semiconductor giant Nvidia is building artificial intelligence-powered "digital twins" of the entire planet.
The digital twin, called Earth-2, will use FourCastNet's predictions. FourCastNet is an artificial intelligence model that uses tens of terabytes of Earth system data to predict the weather for the next two weeks faster and more accurately than current forecasting methods.
A typical weather forecasting system can generate about 50 forecasts for the week ahead. FourCastNet, by contrast, can generate thousands of possible outcomes, accurately capturing the risk of rare but deadly disasters and giving vulnerable populations valuable time to prepare and evacuate.
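To see why an AI model makes such large ensembles practical, consider a minimal sketch that assumes a cheap learned surrogate in place of a full physics simulation. The surrogate_step function, grid sizes, and noise level below are illustrative stand-ins, not FourCastNet's actual code:

```python
import numpy as np

def surrogate_step(state: np.ndarray) -> np.ndarray:
    """Stand-in for a trained AI weather model: advances the atmospheric
    state by six hours. A real surrogate would be a neural network
    evaluated on a GPU."""
    return 0.99 * state + 0.01 * np.roll(state, 1, axis=-1)

def run_ensemble(initial_state: np.ndarray, n_members: int = 1000,
                 n_steps: int = 56, noise: float = 0.01) -> np.ndarray:
    """Roll out many forecasts from slightly perturbed initial conditions;
    56 six-hour steps covers roughly two weeks."""
    rng = np.random.default_rng(0)
    members = initial_state + noise * rng.standard_normal(
        (n_members, *initial_state.shape))
    for _ in range(n_steps):
        members = surrogate_step(members)
    return members

# With a cheap surrogate, a 1,000-member ensemble is tractable, and tail
# risks (e.g. the 99th percentile of the forecast field) become visible.
initial = np.random.default_rng(1).standard_normal((64, 128))
forecasts = run_ensemble(initial)
print("extreme-case field value:", np.percentile(forecasts, 99))
```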
The long-awaited revolution in climate modeling is just the beginning. With the advent of artificial intelligence, science is about to get more exciting -- and in some ways harder to recognize. The effects of this shift will go far beyond the lab; they will affect us all.
If we adopt the right strategy to address science's most pressing problems with sound regulation and appropriate support for innovative uses of AI, AI can rewrite the scientific process. We can build a future in which AI-powered tools not only free us from mindless and time-consuming labor, but guide us to creative inventions and discoveries, encouraging breakthroughs that would otherwise take decades to achieve.
In recent months, artificial intelligence has become almost synonymous with large language models, or LLMs, but in science there are many different model architectures that could have even greater impact. Much of the progress in science over the past decade has been made through small "classical" models that focus on specific problems. These models have led to profound improvements. More recently, large-scale deep learning models that have begun to incorporate cross-domain knowledge and generative AI have expanded the range of what is possible.
For example, scientists at McMaster University and the Massachusetts Institute of Technology have used AI models to identify an antibiotic to fight a pathogen that the World Health Organization calls one of the world's most dangerous antibiotic-resistant bacteria for hospital patients. A model from Google DeepMind, meanwhile, can control the plasma in nuclear fusion reactions, bringing us closer to a clean-energy revolution. And in health care, the U.S. Food and Drug Administration has approved 523 devices that use artificial intelligence -- 75 percent of them for use in radiology.
Reimagining Science
Essentially, the scientific process we learned in elementary school will remain the same: conduct background research, identify a hypothesis, test it with an experiment, analyze the data collected, and draw a conclusion. But artificial intelligence has the potential to revolutionize how these components will look in the future.
AI is already changing the way some scientists conduct literature reviews. Tools like PaperQA and Elicit leverage LLMs to scan article databases and produce concise and accurate summaries of existing literature -- including citations.
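As a rough illustration of the general idea (not PaperQA's or Elicit's actual code; the model name, papers, and prompt below are hypothetical), a script might feed retrieved abstracts to an LLM and ask for a summary with citations:

```python
from openai import OpenAI  # assumes the openai package and an API key

client = OpenAI()

# Hypothetical abstracts retrieved from a literature database.
papers = [
    {"id": "smith2021", "abstract": "We show that ..."},
    {"id": "lee2022", "abstract": "Our experiments indicate ..."},
]

context = "\n\n".join(f"[{p['id']}] {p['abstract']}" for p in papers)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system",
         "content": "Summarize the findings below concisely. "
                    "Cite every claim with the bracketed paper id."},
        {"role": "user", "content": context},
    ],
)
print(response.choices[0].message.content)
```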
Once the literature review is complete, scientists form hypotheses to test. At their core, LLMs work by predicting the next word in a sentence, building up to entire sentences and paragraphs. This technique makes them well suited to the scaled, hierarchical problems inherent in science, and it could enable them to predict the next big discovery in physics or biology.
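For readers unfamiliar with the mechanism, the toy sketch below (a made-up six-word vocabulary and random scores, nothing like a real LLM) shows what "predicting the next word" means in practice: the model assigns a probability to every candidate token, and generation appends the most likely one, step by step:

```python
import numpy as np

vocab = ["the", "protein", "folds", "binds", "receptor", "."]

def next_token_probs(context: list[str]) -> np.ndarray:
    """Stand-in for an LLM: score each vocabulary item given the context
    and turn the scores into probabilities with a softmax. A real model
    computes these scores with billions of learned parameters."""
    rng = np.random.default_rng(len(context))  # deterministic toy scores
    logits = rng.standard_normal(len(vocab))
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()

context = ["the", "protein"]
for _ in range(4):  # greedily append the most probable token
    probs = next_token_probs(context)
    context.append(vocab[int(np.argmax(probs))])
print(" ".join(context))
```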
AI can also both broaden the hypothesis search space and narrow it more quickly. As a result, AI tools can help formulate stronger hypotheses -- for example, models that suggest more promising new drug candidates. Simulations now run orders of magnitude faster than they did just a few years ago, allowing scientists to try out far more design options in simulation before conducting real-world experiments.
For example, scientists at the California Institute of Technology used AI fluid-simulation models to automatically design a better catheter, one that prevents bacteria from traveling upstream and causing infection. This kind of capability will fundamentally change the incremental nature of scientific discovery, allowing researchers to design for the optimal solution from the outset rather than progressing through a long chain of incrementally refined designs, as happened for years with filament innovations in light-bulb design.
Entering the experimental phase, artificial intelligence will be able to conduct experiments faster, more cheaply, and at larger scale. For example, we can build AI-powered machines with hundreds of micropipettes running day and night, creating samples at a rate no human can match. Instead of limiting themselves to six experiments, scientists can use AI tools to run a thousand.
Scientists worried about the next grant, publication, or tenure process will no longer be tethered to the safe experiment with the highest chance of success; they will be free to pursue bolder, more interdisciplinary hypotheses. For example, when evaluating new molecules, researchers tend to stick to candidates that are structurally similar to those we already know, but AI models don't have to have the same biases and limitations.
Eventually, much of science will be conducted in "autonomous labs" -- automated robotic platforms combined with artificial intelligence. Here, we can bring AI's capabilities from the digital realm into the physical world. Such automated labs are already emerging at companies like Emerald Cloud Lab and Artificial, and even at Argonne National Laboratory.
Finally, in the analysis and conclusion phase, automated labs will go beyond automation, using LLMs to interpret the results and recommend the next experiment to run. As a partner in the research process, an AI lab assistant could then order supplies to replace those used in earlier experiments and set up and run the next recommended experiment overnight, with results ready while the experimenters are still home sleeping.
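At its heart, such a lab is a propose-run-analyze loop. The sketch below is purely illustrative -- robot_run, propose_next, and the temperature parameter are hypothetical stand-ins -- but it shows the control flow an AI lab assistant might follow:

```python
import random

def robot_run(condition: dict) -> float:
    """Hypothetical stand-in for the robotic platform: runs one
    experiment overnight and returns a measured yield."""
    return random.random() * condition["temperature"] / 100

def propose_next(history: list[tuple[dict, float]]) -> dict:
    """Stand-in for the LLM/optimizer that reads prior results and
    recommends the next condition to try (here: a simple hill-climb
    on the best temperature seen so far)."""
    if not history:
        return {"temperature": 25}
    best_cond, _ = max(history, key=lambda item: item[1])
    return {"temperature": best_cond["temperature"] + random.choice([-5, 5])}

history: list[tuple[dict, float]] = []
for night in range(10):            # ten unattended iterations
    condition = propose_next(history)
    result = robot_run(condition)  # the real system would also reorder supplies
    history.append((condition, result))

print("best condition found:", max(history, key=lambda item: item[1]))
```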
Possibilities and Limitations
Young researchers might shiver nervously in their seats at the prospect. Fortunately, the new jobs emerging from this revolution may be more creative and less mindless than most current lab work.
AI tools can lower the barriers to entry for new scientists and open up opportunities for those traditionally excluded from the field. With LLMs able to assist in building code, STEM students will no longer need to master arcane coding languages, opening the ivory tower door to new, non-traditional talent and making it easier for scientists to gain exposure to fields outside their own. Soon, specially trained LLMs may go beyond providing first drafts of written work, such as grant proposals, and may be developed to provide "peer" reviews of new papers alongside human reviewers.
AI tools have incredible potential, but we must recognize where the human touch still matters and avoid overreaching. Successfully merging artificial intelligence and robotics in automated laboratories, for example, will not be easy. Much of the tacit knowledge scientists pick up in the lab is difficult to transfer to AI-powered robotics. Likewise, we should be mindful of the limitations of current LLMs -- especially their tendency to hallucinate -- before handing them large amounts of paperwork, research, and analysis.
Companies like OpenAI and DeepMind are still leading the charge with new breakthroughs, models, and research papers, but the current industry dominance won't last forever. So far, DeepMind has excelled at well-defined problems with clear goals and metrics. Its most famous success came at the biennial Critical Assessment of Structure Prediction (CASP) competition, in which research teams predict the exact shape of a protein from its sequence of amino acids.
From 2006 to 2016, the average score for the hardest category was around 30 to 40 on a CASP scale of 1 to 100. Suddenly, in 2018, DeepMind's AlphaFold model scored a whopping 58 points. Two years later, an updated version called AlphaFold2 scored 87 points, leaving its human rivals further behind.
Thanks to open-source resources, we are starting to see a pattern in which industry hits certain benchmarks and then academia steps in to refine the model. After DeepMind released AlphaFold, Minkyung Baek and David Baker of the University of Washington released RoseTTAFold, which uses DeepMind's framework to predict the structures of protein complexes rather than only the single-protein structures AlphaFold could initially handle. What's more, academia is more shielded from the competitive pressures of the marketplace, so its researchers can venture beyond the well-defined problems and measurable successes that attracted DeepMind.
In addition to reaching new heights, AI can help validate what we already know by addressing the scientific replicability crisis. About 70 percent of scientists report having been unable to reproduce another scientist's experiment -- a depressing figure. As AI lowers the cost and effort of running experiments, it will in some cases become easier to replicate results -- or to show that they cannot be replicated -- helping to improve trust in science.
The key to replicability and trust is transparency. In an ideal world, everything in science would be open, from articles without paywalls to open-source data, code, and models. Unfortunately, due to the dangers such models can pose, it is not always practical to open source all models. In many cases, the risks of full transparency outweigh the benefits of trust and fairness. Still, as long as we can be transparent about models -- especially classical AI models with more limited uses -- we should open source them.
Importance of Regulation
In all of these areas, the inherent limitations and risks of AI must be kept in mind. AI is such a powerful tool because it enables humans to accomplish more with less time, less education, and less equipment. But those same capabilities make it a dangerous weapon in the wrong hands. Andrew White, a professor at the University of Rochester, was contracted by OpenAI to join its "red team" and probe the risks of GPT-4 before its release. With access to language models augmented with tools, White found that GPT-4 could suggest dangerous compounds and even order them from chemical suppliers. To test the process, he had a (safe) test compound shipped to his home the following week. OpenAI said it used White's findings to tweak GPT-4 before its release.
Even humans with perfectly good intentions can still drive AI toward bad outcomes. As computer scientist Stuart Russell puts it, we should worry less about creating a Terminator and more about becoming King Midas -- the mythical king who wished that everything he touched would turn to gold and, as a result, killed his own daughter with an accidental embrace.
We have no mechanism for prompting an AI to change its goal, even when it pursues that goal in ways we cannot predict. In one oft-cited thought experiment, an AI is asked to produce as many paper clips as possible. Determined to accomplish its goal, the model hijacks the power grid and kills any humans who try to stop it as the paper clips keep piling up. The world is left in ruins; the AI, having done its job, simply walks away. (In homage to this famous thought experiment, many OpenAI employees carry branded paper clips.)
OpenAI has managed to implement an impressive set of safeguards, but those hold only as long as GPT-4 remains housed on OpenAI's servers. The day may soon come when someone manages to copy the model and put it on their own server. Cutting-edge models like these need to be protected so that thieves cannot tear down the AI safety guardrails so carefully added by their original developers.
To address both deliberate misuse and unintended harms from AI, we need sensible, informed regulation of tech giants and open-source models alike -- regulation that does not prevent us from using AI in ways that benefit science. While tech companies have made strides in AI safety, government regulators are currently ill-prepared to enact appropriate laws and should do more to keep abreast of the latest developments.
Outside of regulation, governments -- along with philanthropy -- can support scientific projects with high social returns but little financial return or academic incentive. Several areas are of particular urgency, including climate change, biosecurity and pandemic preparedness. It is in these areas that we most need the speed and scale provided by AI simulations and automated labs.
To the extent security considerations allow, governments can also help develop large, high-quality datasets such as the one AlphaFold relies on. Open datasets are public goods: they benefit many researchers, but researchers have little incentive to create them themselves. Governments and philanthropic organizations can collaborate with universities and companies to identify grand challenges in science that would benefit from the use of robust databases.
Chemistry, for example, has a notation that unifies the field, which would seem to lend itself to easy analysis by AI models. But no one has managed to properly aggregate the molecular-property data scattered across dozens of databases, denying us insights the field could gain from AI models if we had a single source. Biology, meanwhile, lacks the kind of known, computable foundational data that physics and chemistry rest on, and subfields such as intrinsically disordered proteins remain mysterious to us. It will therefore take a more concerted effort to understand -- and even record -- the data needed to build comprehensive databases.
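SMILES strings are one concrete example of chemistry's shared, machine-readable notation. Assuming the open-source RDKit toolkit is installed, a short script can compute properties directly from that notation -- the kind of analysis that becomes routine once molecular data live in one consistent format:

```python
from rdkit import Chem               # assumes RDKit is installed
from rdkit.Chem import Descriptors

# A few molecules written in SMILES, chemistry's machine-readable notation.
smiles = {"ethanol": "CCO", "aspirin": "CC(=O)OC1=CC=CC=C1C(=O)O"}

for name, smi in smiles.items():
    mol = Chem.MolFromSmiles(smi)    # parse the shared notation
    if mol is None:
        continue                     # skip malformed entries
    print(name,
          "MolWt:", round(Descriptors.MolWt(mol), 2),
          "LogP:", round(Descriptors.MolLogP(mol), 2))
```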
The road to widespread adoption of AI in science is long, and there is much to get right, from building the right databases and enacting the right regulations to reducing bias in AI algorithms and ensuring equitable access to computing resources across borders.
Nonetheless, this is a deeply optimistic moment. Previous scientific paradigm shifts, such as the advent of the scientific process or the rise of big data, were inward-looking, making science more precise and more organized. AI, by contrast, is expansive, allowing us to combine information in novel ways and pushing scientific creativity and progress to new heights.