A Brief Perspective on the Artificial Intelligence Revolution

Xiaoliang Qi

Introduction

This article briefly introduces my perspective, as a theoretical physicist, on the artificial intelligence (AI) revolution and its applications in scientific research.

Background: The Impact of Large Language Models

Deep neural network-based AI has developed rapidly over the past decade, but the revolution brought by Large Language Models (LLMs) is particularly profound compared to previous advancements. In modern physics, information is increasingly recognized as fundamental, potentially serving as the underlying concept behind the laws of spacetime and matter. I believe the nature of this new AI revolution can be viewed through the lens of information.

When examining a complex system, the key lies in how its most critical information is controlled, carried, and processed—specifically, how the most complex information processing is accomplished and what carrier it utilizes. In Earth’s history, information carriers have undergone three major transformations:

  1. The Emergence of Life: DNA and RNA became information carriers, allowing complex traits and behaviors to be passed down and iteratively adjusted over generations.
  2. The Emergence of Human Language: Human experience, knowledge, and collective memory could be transmitted via language. Through exchange and dialogue, information could evolve and propagate far faster than biological evolution allows. With language, the progression of human history and civilization replaced DNA evolution as the world's dominant information-dynamics process.
  3. The Current AI Revolution: Starting with human language, all forms of information (images, video, etc.) are being unified within models into a universal "language." For the first time in history, the most complex information processing is no longer a monopoly of the human brain. Previous information technology revolutions merely accelerated the transmission and filtering of information; the complex processing of information itself still relied on the human brain, and only explicitly defined calculations could be offloaded to machines via programming. Although today's LLMs have not yet reached human-level general intelligence, the breadth and complexity of their information processing are comparable to those of the human brain. Fundamentally, the AI revolution signifies that the complexity of machine information processing has crossed a critical threshold. This is why the current revolution is an unprecedented event in human history, one that implies a redefinition of human civilization. Just as the emergence of human language accelerated evolution relative to DNA, the era of AI, characterized by human-AI symbiosis, will likely drive civilization's evolution on a far faster timescale than before.

AI for Science

Based on the observations summarized above, the AI revolution will bring fundamental changes to all fields. Among these, the transformation of scientific research is of particular foundational significance. The goal of science is to expand the frontiers of knowledge and pursue the most creative discoveries. Previously, technological development provided specialized tools for research but did not directly accelerate the innovation process itself. In contrast, LLMs mark the transition of AI from a human tool to a human collaborator.

Pain Points in Scientific Research

To understand what AI brings to research, we must first review the common problems facing scientific inquiry today. While challenges vary by field, several are universal:

  1. Time Costs: Keeping up with progress in one's field and learning from others' work requires immense time.
  2. Loss of Tacit Knowledge: A vast amount of “intermediate” experience and data accumulated during research is not reflected in papers, forcing other scholars to explore from scratch.
  3. Collaboration Limits: The scale of research collaboration is constrained by human communication costs, making large-scale and cross-disciplinary cooperation difficult.
  4. Dissemination Overhead: Significant time is consumed writing papers, conducting peer review, and explaining work to others after the research is completed.

These issues stem from a common cause: human knowledge is replicable, but human experience (know-how) is not. Because experience cannot be easily copied, every student must relearn what predecessors have already mastered, cross-field communication remains difficult, and results are not easily reproducible. If human experience could be replicated and distributed like explicit knowledge, it would fundamentally alter the speed and paradigm of research collaboration. This is the new possibility offered by the AI revolution.

Opportunities and Challenges in AI for Science

The application of LLMs in science is already underway, with AI agents assisting research in fields such as biology, mathematics, chemistry, and machine learning. Overall, we are at the nascent stage of AI's impact on research. As model capabilities improve and integration deepens, AI for Science is poised to have profound effects in the following areas:

  1. Agentification of Research Tools: Integrating AI models with professional research tools (e.g., computational software, experimental control software) allows AI to go beyond providing information to being “present” throughout the research process, while also generating valuable training data.
  2. Automation of Repetitive Work: This includes literature reviews and reproducible, non-creative parts of experimental and theoretical work, such as instrument debugging and targeted measurements.
  3. AI as an Innovative Collaborator: Building on the above, AI will join humans in creative work. Its role will shift from “tool” to “collaborator.” Cases of AI providing inspiration and new ideas are already appearing and will become the norm.
  4. New Cross-Disciplinary Collaboration: AI participation allows researchers from different backgrounds to collaborate more easily, fostering unforeseen opportunities. This requires new collaboration platforms, similar to how the World Wide Web and arXiv.org transformed research in previous eras.
  5. Agentification of Scientific Publishing: An AI agent acting on the author's behalf can present results in diverse ways. Publishing may evolve from static papers to interactive AI agents containing all the necessary information. These agents could tailor explanations to different readers or even conduct new research building on the original work. This implies a fundamental shift in academic evaluation systems currently based on paper impact.

However, to fully realize this potential, several challenges remain:

  1. Lack of Frontline Data: While models excel at textbook-level problems, they struggle with real research scenarios because training data does not cover the minute details of every vertical niche. Experts must lead AI in research to expose it to real data for specialized training.
  2. Lack of Real-Time Updates: In scientific research, new tools and concepts are constantly being invented, which cannot be rapidly mastered by models through training. AI needs the ability to learn continuously. Currently, context engineering (providing external information directly to the model) and protocols like the Model Context Protocol (MCP) are addressing this need by connecting AI to tools and knowledge.
  3. Need for Precise, Vertical Benchmarks: Current benchmarks focus on mainstream tasks (math, coding). Research requires specific benchmarks for sub-fields (e.g., quantum 2D materials, high-temperature superconductors). These must be developed by frontline experts, necessitating mechanisms for rapid development and iteration.
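The tool-connection idea in point 2 can be made concrete with a small sketch. An MCP-style host exposes tool names and descriptions to the model as context, and dispatches calls the model requests by name. Everything below is a hypothetical, minimal stand-in (the class names, the registry pattern, and the toy physics function are illustrative assumptions, not the actual MCP wire protocol):

```python
import math
from dataclasses import dataclass
from typing import Any, Callable


@dataclass
class Tool:
    """A research tool exposed to an AI agent (hypothetical sketch)."""
    name: str
    description: str
    fn: Callable[..., Any]


class ToolRegistry:
    """Minimal stand-in for an MCP-style tool host: the model sees only
    names and descriptions, and requests calls by name with keyword args."""

    def __init__(self) -> None:
        self._tools: dict[str, Tool] = {}

    def register(self, name: str, description: str):
        def deco(fn: Callable[..., Any]) -> Callable[..., Any]:
            self._tools[name] = Tool(name, description, fn)
            return fn
        return deco

    def describe(self) -> list[dict]:
        # The tool list handed to the model as context ("context engineering").
        return [{"name": t.name, "description": t.description}
                for t in self._tools.values()]

    def call(self, name: str, **kwargs: Any) -> Any:
        # Dispatch a tool call that the model requested.
        return self._tools[name].fn(**kwargs)


registry = ToolRegistry()


@registry.register("lattice_energy",
                   "Toy ground-state energy of a 1D tight-binding chain")
def lattice_energy(n_sites: int, hopping: float = 1.0) -> float:
    # Sum the negative single-particle energies -2t*cos(k) of an open
    # chain (purely illustrative physics, not a real research tool).
    ks = [math.pi * (m + 1) / (n_sites + 1) for m in range(n_sites)]
    return sum(e for e in (-2 * hopping * math.cos(k) for k in ks) if e < 0)


tools = registry.describe()                        # context given to the model
result = registry.call("lattice_energy", n_sites=4)  # a call the model chose
```

The point of the pattern is that a frontline expert only writes and documents the tool function; how the model discovers and invokes it is standardized, which is exactly the gap that protocols like MCP aim to fill.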

Building an Open Collaboration Platform in the AI Era

Addressing these challenges requires a shift in the mode of human-AI interaction from "training" to "teaching": unlike traditional data accumulation, this resembles a human coach guiding the AI. It requires an open platform for deep interaction between domain experts and AI. Such a platform should include:

  1. Data Sharing Platform: Facilitates sharing of intermediate research data while protecting intellectual property and academic credit.
  2. Tool Sharing and Co-development: Enables experts to easily connect existing tools (databases, software, experimental platforms) to AI.
  3. Benchmark Development Platform: Allows experts to collaboratively develop and update high-quality benchmarks for their fields.
  4. AI Agent Sharing Platform: As agents are iterated to suit specific domains, they should be shared as encapsulated “human experience,” allowing others to benefit directly.
  5. Smart Publishing Platform: Uses AI agents to represent and disseminate research interactively, replacing the current review/publish system. This involves exploring new methods for cross-validation, review, and IP recognition in the age of agents.

The open platform ai4.science represents our efforts in these directions, including the benchmark building platform bench.science, the research tool sharing platform mcp.science and the research agent platform lucien.science. Community efforts in developing AI research tools using mcp.science and lucien.science can be viewed on ai4.science/events.

Conclusion

In summary, the AI revolution will bring fundamental changes to human civilization, with the transformation of scientific innovation being a key direction. By making previously non-replicable human experience replicable, AI will not only accelerate research but also enable new forms of collaboration and publishing. In the future, AI will be a collaborator rather than a tool. To maximize its utility, we must develop new open platforms where human experience is continuously taught to AI during actual work, ultimately constructing a new research paradigm based on a Human-AI collaboration network.