As we approach the end of 2025, the dominant theme in tech remains the insatiable global hunger for computing power, underpinned by the strategic competition in AI. The conventional path has been a relentless pursuit of cramming more transistors onto smaller chips, chasing 3nm, 2nm, and even 1nm processes. This race depends entirely on cutting-edge EUV lithography machines, a tightly controlled technology and a key pressure point in the tech rivalry.
However, a quiet but potentially monumental shift is emerging from a laboratory at Peking University. Researchers led by Professor Sun Zhong have achieved a significant breakthrough in a long-dormant field: analog computing. Their work centers on a new chip that operates on a fundamentally different principle, crucially, one that does not require advanced EUV lithography for production.
The core idea is revolutionary yet revisits an old concept. While today’s digital chips (CPUs, GPUs) process information as discrete 0s and 1s—like meticulously counting individual sticks—analog computing leverages the continuous properties of physical phenomena. It’s akin to judging the volume of two cups of water combined by observing the new water level. This method is inherently faster and vastly more energy-efficient for specific tasks.
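The water-level analogy has a concrete electrical counterpart: in a resistive crossbar, a dot product happens as physics rather than arithmetic, with weights stored as conductances, inputs applied as voltages, and Kirchhoff's current law summing the per-cell currents in one step. The sketch below is a toy numerical model of that idea (not the Peking University design; the 1% noise figure is an illustrative assumption), showing both the single-step summation and why device variation leaks straight into the answer:

```python
import numpy as np

rng = np.random.default_rng(0)

def analog_dot(weights, inputs, noise=0.01):
    """Toy model of an analog dot product on a resistive crossbar.

    Weights are stored as conductances G, inputs applied as voltages V.
    Each cell passes a current I_j = G_j * V_j, and the shared output
    wire physically sums them, so the whole dot product takes one step.
    Device variation is modeled as ~1% multiplicative Gaussian noise
    on each conductance (an assumed figure, for illustration only).
    """
    g = weights * (1.0 + rng.normal(0.0, noise, size=weights.shape))
    return float(np.sum(g * inputs))  # summed current = noisy dot product

w = np.array([0.5, -1.2, 2.0, 0.8])   # stored conductances (weights)
v = np.array([1.0, 0.5, -0.25, 2.0])  # applied voltages (inputs)

exact = float(np.dot(w, v))   # digital reference: exactly 1.0 here
approx = analog_dot(w, v)     # analog result: close, but not bit-exact
print(exact, approx)
```

The point of the toy model is the trade visible in its two outputs: the "analog" result arrives in a single physical summation, but every run lands slightly off the exact value, which is precisely the precision problem discussed next.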
The major historical roadblock for analog computing has been precision, or the lack thereof. Physical systems are prone to noise and variation—temperature changes can skew results. For tasks requiring absolute accuracy, like financial transactions, this was unacceptable, leading to the dominance of precise digital computing for decades.
The breakthrough from Peking University lies in shattering this “precision curse.” Professor Sun’s team employed a three-pronged approach to achieve unprecedented accuracy. First, they utilized novel materials like resistive random-access memory (RRAM), which naturally suits analog operations and enables “memory-compute” integration, slashing energy waste from data movement. Second, they innovated at the circuit design level. Third, and most crucially, they developed a sophisticated algorithmic technique involving iterative error correction. Imagine measuring a table: first with a meter ruler, then a centimeter ruler on the remainder, then a millimeter ruler, progressively refining the result. This method boosted the chip’s computational precision by five orders of magnitude, reaching a level comparable to the standard single-precision (FP32) calculations run on modern digital GPUs.
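The ruler analogy corresponds to a standard numerical technique, iterative refinement: take a rough answer from the fast-but-noisy solver, compute the remaining error (the residual) exactly in digital logic, solve for a correction to that smaller error, and repeat, shrinking the error by roughly the noise factor each pass. The sketch below illustrates the general technique on a linear system, with the "analog" solver simulated as an exact solve plus ~1% noise — an assumption for illustration, not the published circuit:

```python
import numpy as np

rng = np.random.default_rng(1)

def noisy_solve(a, b, noise=1e-2):
    """Stand-in for an analog linear solver: exact solve plus ~1% error
    (assumed noise level, chosen only to make the convergence visible)."""
    x = np.linalg.solve(a, b)
    return x * (1.0 + rng.normal(0.0, noise, size=x.shape))

def refined_solve(a, b, iters=5):
    """Iterative refinement: each pass solves only for the residual error,
    so the remaining error shrinks by roughly the noise factor per pass."""
    x = np.zeros_like(b)
    for _ in range(iters):
        r = b - a @ x              # residual, computed precisely (digital)
        x = x + noisy_solve(a, r)  # correction from the noisy ("analog") solver
    return x

a = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x_true = np.linalg.solve(a, b)

rough = noisy_solve(a, b)          # one noisy pass: ~1% error
refined = refined_solve(a, b)      # five passes: error falls by orders of magnitude
print(np.abs(rough - x_true).max(), np.abs(refined - x_true).max())
```

With a 1% noise factor, five passes push the residual error down by roughly five orders of magnitude relative to a single noisy solve — the same shape of gain the article describes, reached here purely through the refinement loop rather than any modeling of the actual RRAM hardware.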
The strategic implications are profound. This analog AI chip is fabricated using a mature 28nm process—a node fully within China’s independent manufacturing capabilities. Its energy efficiency is orders of magnitude higher than digital chips for AI workloads like matrix operations, which form the backbone of large language model training. This presents a potential path to circumvent the dual bottlenecks of advanced lithography access and the unsustainable energy costs of massive AI data centers.
This is not about immediately replacing cutting-edge digital GPUs. The technology is in its infancy, facing scaling and ecosystem challenges. However, it represents a vital “technology reserve.” Just as the 2012 AI boom unexpectedly turned GPUs into the engine of deep learning, this analog computing breakthrough is planting a flag in a new computational paradigm. It demonstrates that in the global race for compute, there might be multiple paths forward. While continuing to advance in digital semiconductor manufacturing, parallel exploration in alternative computing architectures like analog, photonic, or neuromorphic systems could foster a more diverse and resilient technological ecosystem.