Nvidia just posted the kind of quarter that makes even seasoned Wall Street analysts pause. Record revenue of $68.1 billion for the fiscal fourth quarter ended January 25, 2026 — up 73% from a year ago and 20% from the prior quarter. Earnings and guidance both topped expectations. The stock ticked higher in late trading, extending a rally that has made Nvidia the most consequential company in the global semiconductor industry and, arguably, in the broader technology economy.
The numbers are staggering in absolute terms. But what matters more to industry insiders is what they signal about the trajectory of AI infrastructure spending, the durability of demand for accelerated computing, and whether the biggest technology companies on earth are showing any signs of pulling back. They aren’t.
Data center revenue — the segment that accounts for the vast majority of Nvidia’s business — hit $62.3 billion in the quarter, up 75% year over year and 22% sequentially, according to CNBC’s report on the results. For the full fiscal year 2026, data center revenue reached $193.7 billion, a 68% increase. These are not incremental gains. They reflect a fundamental rewiring of how the world’s largest enterprises and cloud providers allocate capital.
Jensen Huang, Nvidia’s founder and CEO, framed the moment in characteristically bold terms. “Computing demand is growing exponentially — the agentic AI inflection point has arrived,” he said in the company’s earnings release. He pointed to Grace Blackwell with NVLink as “the king of inference today,” delivering what he described as an order-of-magnitude lower cost per token. And he previewed Vera Rubin, the next-generation platform, as the vehicle to extend that lead further still.
That's not just marketing language. The partnerships and product announcements accompanying the results paint a picture of an enterprise AI buildout that is broadening, not narrowing. Nvidia disclosed a multiyear, multigenerational strategic partnership with Meta spanning on-premises, cloud, and AI infrastructure, including millions of Blackwell and Rubin GPUs. It expanded its relationship with Amazon Web Services across interconnect technology, cloud infrastructure, open models, and physical AI. It announced an investment and deep technology partnership with Anthropic, the company behind the Claude models, which is scaling its workloads on Microsoft Azure using Nvidia systems. And it strengthened a collaboration with CoreWeave to accelerate the construction of more than 5 gigawatts of AI factories by 2030.
Five gigawatts. That number alone tells you where this is heading.
Nvidia's guidance for the first quarter of fiscal 2027 came in at $78 billion, plus or minus 2%. That figure beat consensus estimates and, crucially, assumes no data center compute revenue from China. The company has effectively written off the Chinese market from its near-term outlook, a reflection of ongoing U.S. export controls, and is still guiding to double-digit sequential growth. Gross margins are expected to hold near 75%, a level that would be extraordinary for almost any hardware company at this scale.
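For the spreadsheet-inclined, the band translates into dollars easily enough. A quick sketch, using only the guided midpoint and the stated tolerance:

```python
# Back-of-the-envelope: translate the guided midpoint and the +/- 2% band
# into a dollar range. Both inputs come from Nvidia's stated guidance.
midpoint = 78.0  # $ billions, Q1 FY2027 revenue guidance
band = 0.02      # plus or minus 2%

low, high = midpoint * (1 - band), midpoint * (1 + band)
print(f"Implied revenue range: ${low:.1f}B to ${high:.1f}B")
# Implied revenue range: $76.4B to $79.6B
```

Even the low end of that range would be a sequential increase of more than 12% over the record quarter just reported.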
Fortune reported on the signals embedded in Nvidia’s CFO commentary, highlighting the company’s framing that compute equals revenue — a shorthand for the idea that every dollar invested in AI infrastructure generates measurable economic returns for Nvidia’s customers. That equation, if it holds, explains why hyperscalers continue to pour tens of billions into GPU clusters without hesitation. It also explains why Nvidia’s revenue growth has defied the gravitational pull that typically slows companies of this size.
The Rubin platform, unveiled alongside the earnings report, represents Nvidia’s bid to stay ahead of an intensifying competitive field. Comprising six new chips, Rubin promises up to a 10x reduction in inference token cost compared with Blackwell. AWS, Google Cloud, Microsoft Azure, and Oracle Cloud Infrastructure will be among the first to deploy Vera Rubin-based instances. That lineup of launch partners is notable — it’s essentially every major cloud provider committing to Nvidia’s next architecture before it ships.
Inference is the story now. Training large models consumed the first wave of GPU demand. But inference — running those models at scale, in production, for hundreds of millions of users — is where the volume economics kick in. Nvidia’s data show that leading inference providers including Baseten, DeepInfra, Fireworks AI, and Together AI have cut AI costs by up to 10x using open-source models on Blackwell hardware. Blackwell Ultra, the company says, delivers up to 50x better performance and 35x lower cost for agentic AI compared with the Hopper platform, based on SemiAnalysis InferenceX benchmark results.
Those benchmarks matter because they address the central question hanging over the AI infrastructure boom: Is this spending sustainable? If each new generation of hardware delivers a 10x or 50x improvement in cost-per-inference, the economic case for continued investment strengthens rather than weakens. Enterprises don’t buy GPUs for the sake of buying GPUs. They buy them because the math works.
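It's worth making that logic concrete. The sketch below is illustrative only: the baseline cost per million tokens and the inference budget are assumed figures, not Nvidia disclosures. The only input taken from the company's claims is the roughly 10x per-generation cost reduction.

```python
# Illustrative only: the baseline cost and the budget are assumptions.
# The ~10x per-generation cost reduction is the company's claim.
hopper_cost = 1.00   # hypothetical $ per million tokens on the prior generation
gen_gain = 10        # claimed cost-per-token reduction per generation

blackwell_cost = hopper_cost / gen_gain   # $0.10 per million tokens
rubin_cost = blackwell_cost / gen_gain    # $0.01 per million tokens

# At a fixed inference budget, each generation serves ~10x more tokens.
budget = 1_000_000   # hypothetical annual inference budget, $
for name, cost in [("Hopper", hopper_cost),
                   ("Blackwell", blackwell_cost),
                   ("Rubin", rubin_cost)]:
    tokens_m = budget / cost  # millions of tokens served for the same spend
    print(f"{name:9s}: ${cost:.2f}/M tokens -> {tokens_m:,.0f}M tokens for ${budget:,}")
```

The budget is constant across rows; only the cost per token falls. That is the "compute equals revenue" logic in miniature: cheaper inference doesn't shrink spending, it expands what the same spending can serve.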
And the math is working across an expanding set of industries. Nvidia announced a co-innovation AI lab with Eli Lilly to reinvent drug discovery. It expanded BioNeMo, its open development platform for AI-driven biology. It joined the U.S. Department of Energy’s Genesis Mission as a private industry partner. It launched Earth-2, a family of open models for AI weather prediction. In India, global systems integrators Infosys, Persistent, Tech Mahindra, and Wipro are building enterprise agents on Nvidia AI. Industrial software leaders Cadence, Siemens, and Synopsys are partnering with Nvidia to drive manufacturing applications.
This is no longer a story about a few hyperscalers buying chips for chatbots. It’s a story about accelerated computing becoming the default infrastructure layer for scientific research, drug development, autonomous vehicles, robotics, weather forecasting, and industrial design.
The automotive and robotics segment, while still small relative to data center, posted full-year revenue of $2.3 billion, up 39%. Nvidia unveiled the Alpamayo family of open AI models and simulation tools for autonomous vehicle development. It partnered with Mercedes-Benz on the new CLA, which features Level 2 driver assistance powered by Nvidia DRIVE AV software. The DRIVE Hyperion platform expanded to include Tier 1 suppliers and sensor partners like Bosch, Magna, Sony, and ZF Group. And in robotics, companies from Boston Dynamics to Caterpillar to LG Electronics are building on Nvidia's Isaac GR00T stack.
Gaming, once Nvidia’s core business, generated $3.7 billion in the quarter — up 47% year over year but down 13% sequentially as channel inventory normalized after a strong holiday season. Full-year gaming revenue hit a record $16 billion, up 41%. Professional visualization revenue surged 159% year over year to $1.3 billion, driven by what the company called “exceptional demand for Blackwell.” The launch of the RTX PRO 5000 72GB Blackwell GPU for larger models and agentic workflows signals Nvidia’s intent to push workstation-class AI computing further into enterprise environments.
Capital allocation tells its own story. During fiscal 2026, Nvidia returned $41.1 billion to shareholders through buybacks and dividends. It still has $58.5 billion remaining under its share repurchase authorization. The company is generating cash at a pace that allows it to invest aggressively in R&D, forge partnerships across every major industry vertical, and still return enormous sums to investors. GAAP earnings per diluted share for the full year came in at $4.90.
One accounting change worth flagging: beginning in the first quarter of fiscal 2027, Nvidia will include stock-based compensation expense in its non-GAAP financial measures. The company described stock-based compensation as “a foundational component of NVIDIA’s compensation program to attract and retain world-class talent.” This is a meaningful shift in reporting methodology. It will bring Nvidia’s non-GAAP results closer to economic reality, though it will also compress reported non-GAAP margins slightly. The Q1 guidance already reflects this, with approximately $1.9 billion of stock-based compensation included in the $7.5 billion non-GAAP operating expense forecast.
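The mechanics are easy to approximate. The sketch below applies the guided ~75% gross margin to the $78 billion revenue midpoint, a simplification, and uses the disclosed $7.5 billion opex and $1.9 billion stock-based compensation figures:

```python
# Sketch of how folding stock-based compensation (SBC) into non-GAAP opex
# moves the reported numbers. Revenue, opex, SBC, and gross margin are from
# the guidance; applying the margin to the midpoint is a simplification.
revenue = 78.0       # $B, Q1 FY2027 guidance midpoint
opex_with_sbc = 7.5  # $B, guided non-GAAP operating expenses, SBC now included
sbc = 1.9            # $B, stock-based compensation inside that figure
gross_margin = 0.75  # guided ~75% gross margin

gross_profit = revenue * gross_margin                         # 58.5
new_basis = (gross_profit - opex_with_sbc) / revenue          # SBC included
old_basis = (gross_profit - (opex_with_sbc - sbc)) / revenue  # old methodology

print(f"Non-GAAP operating margin, new basis: {new_basis:.1%}")  # 65.4%
print(f"Same quarter on the old basis:        {old_basis:.1%}")  # 67.8%
```

On these assumptions, the change shaves roughly 2.4 points off reported non-GAAP operating margin while changing nothing about the underlying economics.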
So where does this leave the competitive picture? AMD continues to push its MI300 series and has made inroads with select hyperscalers. Custom silicon efforts from Google (TPUs), Amazon (Trainium), and Microsoft (Maia) are progressing. Groq, which focuses on inference-specific hardware, just entered into a non-exclusive licensing agreement with Nvidia — an unusual move that suggests even alternative chip architectures may end up orbiting Nvidia’s software and platform gravity. Broadcom and Marvell are building custom AI accelerators for specific cloud customers. But none of these efforts have dented Nvidia’s growth rate in any visible way. Not yet.
The $78 billion Q1 guidance implies Nvidia’s annualized revenue run rate is approaching $312 billion. For context, Intel’s total revenue for its most recent fiscal year was roughly $54 billion. Nvidia is now generating more data center revenue in a single quarter than Intel generates across its entire business in a year. The competitive dynamics in semiconductors haven’t shifted this dramatically since the rise of the mobile processor upended the PC-centric chip industry more than a decade ago.
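The run-rate arithmetic is flat-line annualization, which assumes zero sequential growth beyond Q1 and so, if anything, understates the trajectory. A quick sketch (the Intel figure is an approximation from public reporting):

```python
# Annualize the Q1 guidance and set it against Intel's full-year revenue.
# Flat-line annualization assumes no sequential growth after Q1.
q1_guidance = 78.0   # $B, Q1 FY2027 revenue guidance
annualized = q1_guidance * 4
print(f"Annualized run rate: ${annualized:.0f}B")  # $312B

intel_full_year = 54.0    # $B, approximate Intel full-year revenue
nvidia_dc_quarter = 62.3  # $B, Nvidia data center revenue in one quarter
print(f"Nvidia data center, one quarter: ${nvidia_dc_quarter}B")
print(f"Intel, full year (approx.):      ${intel_full_year}B")
```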
Nvidia’s tax rate guidance for fiscal 2027 sits between 17% and 19%, excluding discrete items. Gross margins are expected to remain near 75%. These are the financial characteristics of a company with extraordinary pricing power and limited near-term competitive pressure on its core products.
But risks exist. Export controls could tighten further, closing off not just China but potentially other markets. A slowdown in hyperscaler capital expenditure — driven by macroeconomic conditions, rising interest rates, or a reassessment of AI’s near-term return on investment — would hit Nvidia disproportionately. The concentration of revenue among a handful of massive customers creates dependency risk. And the history of the semiconductor industry is littered with companies that dominated one computing era only to stumble in the transition to the next.
Nvidia’s answer to that last risk is to keep moving. Blackwell. Blackwell Ultra. Vera Rubin. Each generation arriving faster, each promising another order-of-magnitude improvement in performance per dollar. The company is simultaneously pushing into networking with BlueField-4, into storage with its new Inference Context Memory Storage Platform, into software with Nemotron open models and Cosmos simulation tools, and into every vertical from healthcare to energy to defense.
Jensen Huang called it “the AI industrial revolution.” The earnings report suggests that’s not hyperbole. It’s a description of what’s actually happening in Nvidia’s order book. The factories powering this transformation are being built at a pace measured in gigawatts, funded by companies that see AI compute not as a discretionary expense but as the primary driver of their future revenue. Nvidia sits at the center of that spending cycle, selling the picks and shovels — and increasingly, the blueprints for the mines themselves.
The question for investors and industry participants alike is no longer whether AI infrastructure spending is real. It’s whether it can continue compounding at this rate. Nvidia’s $78 billion guidance for next quarter is its answer. The market, for now, is inclined to believe it.