Market Watch · MARKET × Sector Review · SECTOR

x86 CPUs Are Being Relit in the Data Center's Shadows

CPU’s Return: After Two Years of GPU-Only Narratives, Capital Markets Are Repricing x86 for the First Time

Approx. 2,400 words · ~8-minute read

On Friday, April 24, U.S. markets closed with an unusual event: AMD hit a new all-time high despite no company news, driven solely by Intel's earnings report released after market close the previous evening, April 23, and Qualcomm rose in tandem. Intel and AMD have long been archrivals in public memory, yet this time they rose together.

What the market was doing was something else entirely: repricing the entire x86 server CPU segment.

CHAPTER 01

A New All-Time High With No Company News

At Friday's close on April 24, AMD rose 13.8% to $347.81, while Intel rose 23.38% after briefly surging 30% in premarket trading. The Philadelphia Semiconductor Index (SOX) crossed 10,000 points for the first time in its history.

The oddity isn't the magnitude of the gain; it's the reason behind it. AMD released no corporate news that day: no earnings, no product launch, no partnership announcement. Its entire rally was driven by Intel's Q1 earnings report, released after market close on April 23, and by a research note from D.A. Davidson upgrading AMD from Neutral to Buy.

Intel's results lifted AMD to a new all-time high and pulled Qualcomm up with it. Such linkage is rare: Intel and AMD are remembered as bitter rivals. When one does well, the other usually loses share. Not this time.

April 24 · U.S. Market Close

AMD

+13.8%

$347.81 — All-Time High

INTEL

+23.38%

Premarket +30%

QUALCOMM

Rallied in tandem

Data Center CPU Revival

SOX Index

10,000+

Broke 10,000 for the first time

The entire x86 camp is being repriced—not just one company’s good news

After two years of GPU-only narratives dominating capital markets, CPUs are being valued independently for the first time.

CHAPTER 02

What Was in Intel’s Earnings Report?

The numbers alone were striking: Q1 revenue of $13.58B, versus consensus of $12.36B. Non-GAAP EPS of $0.29, versus consensus of $0.013—a roughly 22-fold beat.
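The scale of the beat is easy to verify from the two pairs of figures above; a quick arithmetic check (all numbers as reported in this article):

```python
# Verify Intel's Q1 beat from reported vs. consensus figures (from the article).
revenue_actual, revenue_consensus = 13.58, 12.36   # revenue, $B
eps_actual, eps_consensus = 0.29, 0.013            # non-GAAP EPS, $

revenue_beat_pct = (revenue_actual / revenue_consensus - 1) * 100
eps_beat_multiple = eps_actual / eps_consensus

print(f"Revenue beat: +{revenue_beat_pct:.1f}%")        # ~ +9.9%
print(f"EPS beat: {eps_beat_multiple:.1f}x consensus")  # ~ 22.3x
```

The asymmetry is the point: a single-digit revenue surprise translated into a twenty-fold EPS surprise, because consensus had priced in near-zero profitability.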

But what lifted the entire sector wasn’t the quarterly result—it was forward guidance.

Intel raised its full-year 2026 server CPU unit shipment forecast from flat to double-digit unit growth. Same management, same fabs, same product lines, yet within six months the full-year shipment outlook shifted from 0% to over 10% growth. A revision of that scale signals a material misreading of demand structure six months ago.

CEO Lip-Bu Tan explained the rationale on the earnings call. His exact words:

"The CPU is reasserting itself as the indispensable foundation of the AI era."

— Lip-Bu Tan, Intel Q1 2026 Earnings Call

Data Center and AI (DCAI) segment revenue for the quarter was $5.1B, up 22% year-on-year. This was Intel's cleanest earnings report in five years. Capital markets responded directly: at Friday's open, all x86 players rallied together.

CHAPTER 03

Why Did CPUs Suddenly Come Back?

To understand this repricing, we must return to an industry metric rarely discussed outside engineering circles: the CPU:GPU ratio.

For the past two years, AI training dominated, and the typical ratio hovered near 1:8—one CPU per seven to eight GPUs. The reason is simple: large-model training relies heavily on GPU matrix multiplication; CPUs merely feed data and orchestrate tasks. CPUs served as GPU assistants.

In inference, the ratio has already tightened to 1:3 to 1:4. Running models—not training them—requires more logic evaluation, task routing, and context management, all of which fall back to the CPU.

Entering the Agentic AI phase, Tan stated on the call that the ratio may compress further, toward 1:1, "even tilting toward the CPU side."

CPU : GPU Ratio Evolution

Training Era

1 : 8

CPU feeds data / orchestrates
GPU runs matrix multiplication

Inference Era

1 : 4

Logic evaluation / routing
returns to CPU

Agentic Era

1 : 1

Orchestration / tool invocation
CPU regains center stage

Dark = CPU · Blue = GPU. From GPU dominance to CPU parity—this is the curve Tan sketched on the call.
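What the ratio compression means for unit demand is worth making concrete. A minimal sketch, using the three ratios cited above; the 8,192-GPU fleet size is a hypothetical illustration, not a figure from the call:

```python
# CPU demand for a fixed GPU fleet as the CPU:GPU ratio compresses.
# Ratios are the ones cited in the article; fleet size is hypothetical.
RATIOS = {"training": 8, "inference": 4, "agentic": 1}  # GPUs per CPU
gpus = 8192  # hypothetical fleet size

cpus_needed = {era: gpus // per_cpu for era, per_cpu in RATIOS.items()}
print(cpus_needed)  # {'training': 1024, 'inference': 2048, 'agentic': 8192}

# Same GPU fleet, 8x the CPUs moving from training-era to agentic-era sizing.
print(cpus_needed["agentic"] // cpus_needed["training"])  # 8
```

Held against a fixed accelerator base, the shift from 1:8 to 1:1 is an eightfold increase in CPU sockets, with no growth in GPUs required at all.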

TrendForce's research confirms the trend numerically: in certain agent workflows, tool invocation and task orchestration latency can account for up to 90.6% of end-to-end total latency, and these are CPU tasks. During high-volume inference, CPU dynamic power consumption can reach 44%.

Omdia's April report spells out the shift even more starkly: frontier large-model scaling has slowed to just 5% per year. What is expanding rapidly are mid- and small-scale models, along with agent systems. The conclusion: AI infrastructure demand is shifting from "models on GPUs" to "models on GPUs, agents on CPUs, long-context workloads on separate hardware."

The logic chain is now clear: Agents aren’t trained models—they’re persistent inference engines performing tool scheduling and context maintenance. These are CPU-native workloads. More agents mean greater CPU scarcity.

Readers familiar with "Hermes and the Real Interface of Agents" and "One-Person AI Companies" will recognize that the industrial shifts described there, such as agent workflows, MCP, and individuals using LLMs to orchestrate full toolchains, map directly onto this hardware-level shift. CPU demand curves are rising because of these applications, and capital markets are only now catching up.

CHAPTER 04

Threefold Supply Tightening

Demand-side repricing is one side of the story—the other is simultaneous threefold tightening on the supply side.

Threefold Supply Tightening · Occurring Concurrently

01

TSMC Advanced Nodes

Prioritized for AI accelerators (NVIDIA, AMD MI series, Google TPU); EPYC wafers from the same node are being squeezed out

+

02

Intel’s In-House Fab Allocation

18A yield stabilized above 60% ahead of schedule; capacity reallocated toward data centers, shrinking consumer PC chip production share

+

03

Meta Pre-Locks Capacity

Signed a 6 GW × 5-year agreement on Feb 24, valued at ~$60B (full-warrant estimate: ~$100B), bundling Venice EPYC CPUs

Three layers overlapping—not isolated stress. Capacity scarcity compounds itself.

Layer One: TSMC. Advanced-node capacity is now prioritized for AI accelerators (NVIDIA, AMD MI series, Google TPU). EPYC wafers from the same nodes are being displaced. Channel inventory for AMD's 4th-gen Genoa and 5th-gen Turin is thinning.

Layer Two: Intel's Own Fabs. Intel's 18A yield stabilized above 60% in early 2026, hitting the target ahead of schedule, which means Xeon 6+ (Clearwater Forest) is ramping. But Intel's total fab capacity is finite, so it made an explicit trade-off: allocation strongly favors data centers. This is Tan's most consequential capacity reallocation since taking over. Xeon shipments come first; consumer PC chips remain in the lineup, but their share of production lines has shrunk.

Layer Three: Major Customers Pre-Locking Capacity. On February 24, Meta signed a 6 GW, 5-year AI infrastructure agreement with AMD, valued by independent analysts at ~$60B (full-warrant estimate: ~$100B). Two details the market grasped only later: first, it bundles MI450 GPUs and sixth-gen EPYC "Venice" CPUs, making Meta the first Venice customer; second, AMD granted Meta performance-based warrants covering 160 million shares of AMD stock, unlocked in tranches tied to GPU shipment milestones.

A significant portion of AMD’s server CPU capacity over the next five years is now pre-locked by Meta. Other hyperscalers have no choice but to follow—capacity scarcity compounds itself.

The impact of this triple squeeze shows up starkly in hard numbers. In February, Intel notified Chinese customers that delivery lead times for select 4th- and 5th-gen Xeon SKUs would extend to six months, with price hikes of over 10%. Intel internally acknowledged implementing "rationing": allocating shipments by customer size. AMD EPYC lead times in China are also lengthening, with channel inventory visibility shrinking to just a few weeks.

Server CPUs · Pricing & Lead Times

Price

+20%

Cumulative increase since March
(TrendForce, Apr 22)

Intel has signaled another hike in H2,
expected at +8-10%

Lead Time

2 weeks → 6 months

Chinese customers, Xeon 4th/5th gen
(Intel notice, Feb)

Intel has implemented internal
rationing

Simultaneous deterioration in pricing and lead times—the two clearest signals of capacity shortage

These figures aren't isolated. They're two sides of the same coin as "U.S. plans to build 12 GW of new data centers by 2026—roughly half of which will be canceled or delayed." Transformers, switchgear, batteries, CPUs, memory—any single bottleneck halts the entire data center. AI infrastructure bottlenecks have shifted from the singular problem of "can't buy GPUs" to the systemic problem of "can't assemble a full stack."
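The "full stack" framing reduces to a min() over components: deliverable data-center capacity is capped by the scarcest input, not the average one. A toy sketch; the per-MW requirements and supply figures below are entirely hypothetical:

```python
# Deliverable data-center capacity is bounded by the scarcest component.
# All requirement and supply numbers are hypothetical illustrations.
required_per_mw = {"transformers": 1, "switchgear": 2, "cpus": 400, "gpus": 1600}
on_hand = {"transformers": 80, "switchgear": 200, "cpus": 24_000, "gpus": 160_000}

# How many megawatts each component alone could support:
mw_by_component = {c: on_hand[c] // required_per_mw[c] for c in required_per_mw}
print(mw_by_component)  # {'transformers': 80, 'switchgear': 100, 'cpus': 60, 'gpus': 100}

buildable_mw = min(mw_by_component.values())
print(buildable_mw)  # 60 -> CPUs are the binding constraint in this toy case
```

In this toy case a site with GPUs for 100 MW still tops out at 60 MW because the CPU supply runs out first; swap in transformers or switchgear as the scarce line and the same logic holds.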

CHAPTER 05

Who Wins This Repricing?

During the two strongest years of GPU narratives, Intel was widely written off by markets as an "AI orphan": weak in-house GPU efforts, Gaudi accelerators lagging NVIDIA, an unprofitable foundry business. Its 2024 share price fell 47%, its worst annual decline in 50 years. This repricing proves the opposite:

Under capacity scarcity, pricing power doesn't always go to the technology leader; it can go to the capacity holder. Intel's advantage lies in owning its fabs and in improving 18A yields. At a time when TSMC's advanced nodes are saturated by AI accelerators, these assets, once seen as liabilities, have become strategically scarce resources.

"Intel is a fundamentally different company today."

— Lip-Bu Tan

Markets responded with a +23% after-hours surge.

AMD occupies a more nuanced position: it lacks its own fabs, yet delivers the strongest x86 product in EPYC, and it benefits from anchor customers like Meta locking in orders early. Markets are repricing AMD as a dual-benefit play: MI450 GPUs sold to Meta, and Venice EPYC CPUs sold to everyone else.

Qualcomm is the second growth vector. This year, it formally relaunched its data center CPU business, deploying Phoenix cores acquired via Nuvia, positioned to complement NVIDIA GPUs. Its first customer is the Saudi Humain joint AI data center. This path represents ARM's deeper penetration into the server market, adding an independent player beyond Amazon Graviton and Microsoft Cobalt.

Looking eastward, domestic CPU vendors are gaining opportunity. As Intel pushes Chinese delivery lead times to six months and raises prices by over 10%, Hygon's next-generation C86 is targeting +15% IPC and aiming to match Xeon 6; Loongson's 3C6000 is slated for 2026 volume production, matching third-gen Xeon Scalable performance, while domestic telecom operators have begun bulk migration. Domestic industry research firms project China's IT innovation hardware market will grow from ~RMB 214.6B in 2022 to ~RMB 788.9B in 2026, a more-than-threefold expansion, driven partly by this supply crunch and the geopolitical overlay.
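The implied growth rate of that market projection is easy to back out from the two endpoints; a quick check using the figures above:

```python
# Implied CAGR of China's IT innovation hardware market, 2022 -> 2026.
start, end, years = 214.6, 788.9, 4  # RMB billions, 4-year span

growth_multiple = end / start
cagr_pct = (growth_multiple ** (1 / years) - 1) * 100

print(f"{growth_multiple:.2f}x over {years} years")  # 3.68x
print(f"Implied CAGR: {cagr_pct:.1f}% per year")     # ~38.5%
```

A sustained compounding rate near 38% a year is an unusual profile for hardware, which is why the article treats the supply crunch and geopolitics as joint drivers rather than a cyclical blip.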

Four Winner Categories in This Repricing

INTEL

Capacity holder rebounds

Own fabs + 18A yield >60% achieved ahead of schedule—reversal from "AI orphan"

AMD

Dual GPU + CPU exposure

Meta locks 6 GW × 5 years; Venice EPYC ramps in parallel

QUALCOMM

ARM server penetration

Phoenix core + first order with Saudi Humain, co-deployed with NVIDIA GPUs

Domestic CPUs

Replacement window opens

Hygon C86, Loongson 3C6000: China’s domestic innovation hardware market to reach RMB 788.9B by 2026

No longer 'NVIDIA alone dominates' — four legs simultaneously capturing the era’s tailwinds

CHAPTER 06

Industrial Implications of the Re-Rating

Viewed over a longer time horizon, this surge corresponds clearly to underlying industrial shifts.

The AI infrastructure story is evolving from 'buying GPUs' into a heterogeneous arms race: 'buying CPUs + GPUs + power + components'. Two years ago, NVIDIA was unavoidable in any discussion of AI investment themes; its CUDA ecosystem, HBM memory, and TSMC's advanced nodes formed layered moats. Today, NVIDIA remains central, but single-stock exposure is no longer sufficient: data centers may stall not for lack of GPUs, but due to transformer shortages, unavailable Xeon CPUs, or insufficient grid capacity.

Capital markets' recognition of this shift has lagged. After GPU-related narratives surged through 2024–2025, valuations for CPU makers, power equipment suppliers, and interconnect players remain anchored at 'traditional industry' levels. That begins to change this week.

From a hardware perspective, the Agent era and the large-model training era demand fundamentally different compute architectures — the former is more distributed, more reliant on orchestration, and more CPU-cycle-intensive; the latter is more centralized, more reliant on parallelism, and more GPU-array-intensive.

This means winners in the Agent era will differ from those in the training era. NVIDIA still dominates the high end, but x86, ARM, and custom silicon camps will each claim portions of the market they missed during the training era.

For practitioners in China, the operational implications may be even more direct. First, on the supply-chain side, the window for domestic CPU import substitution is wider than expected. Second, on the product side, teams building agent applications face a new constraint, CPU resource scarcity, which affects both hardware procurement and cloud service pricing. Third, from an investment perspective, investors tracking AI themes must shift from 'GPU only' to a four-legged framework: 'GPU + CPU + power + networking'.

Friday’s record high — achieved without any company-specific news — does not mark an AMD milestone. It marks the first crack in the GPU-only narrative: capital markets are beginning to acknowledge that the AI compute landscape is more complex than imagined over the past two years.

Primary sources

Intel Newsroom · CNBC · Fortune · Yahoo Finance · TipRanks (earnings & stock price)

TrendForce, April 24, 2026 / April 22, 2026 / February 6, 2026 (CPU:GPU ratios, pricing, lead times)

Omdia, April 2026 (slowing frontier models + Agent expansion)

AMD Press Release, February 24, 2026 · Tom's Hardware (Meta-AMD 6GW agreement details)

DigiTimes · Wccftech · Tom's Hardware (Qualcomm, Hygon, Loongson)