OmahaLine
AVGO · Broadcom Inc. · Nasdaq
$422.76 (+0.00%) · 52-week range $184.02–$429.31 · as of 8:00 PM UTC
Generated Apr 24, 2026

AVGO — BROADCOM INC.

Broadcom has assembled two franchises that are, in their respective markets, genuinely irreplaceable: custom AI silicon that the world's largest hyperscalers need and cannot meaningfully source elsewhere, and enterprise virtualization software embedded so deeply in data-center architecture that migration represents months of engineering risk few technology executives are willing to accept. At roughly 31 times next-twelve-month earnings on a business compounding earnings at 50% annually with $73 billion in contracted AI backlog providing 18 months of revenue certainty, the multiple is high but the growth makes it defensible. Compelling at the current price.


The current debate about artificial intelligence has cleaved the semiconductor industry into two distinct worlds. In the first, companies making general-purpose compute chips are selling into a feeding frenzy — constrained supply, hyperscaler queuing, gross margins expanding as fast as data halls can be built to house the hardware. In the second, quieter world, a more durable transformation is underway: the hyperscalers themselves, having spent two years paying premium prices for a single vendor's GPU architecture, are investing seriously in custom silicon designed for their specific workloads. General-purpose chips serve every customer reasonably well. A custom chip, co-designed over two to three years to a single customer's inference stack, serves that customer extraordinarily well and nobody else at all. That insularity is the moat — and it is the animating logic of Broadcom's AI business.

The commercial case for custom silicon is straightforward at the scale hyperscalers operate. A chip optimized for transformer-model inference delivers roughly 30–40% better power efficiency than a general-purpose GPU for that specific task. At fleets of hundreds of thousands of chips, those efficiency gains compound into billions of dollars of annual savings on electricity and cooling. The incentive to design custom silicon exists for every hyperscaler with enough volume to justify the multi-year engineering program. The limiting factor is not the desire but the capability: advanced packaging, chiplet integration, and the software toolchains needed to bring a bespoke accelerator from architectural specification to production silicon at TSMC's 2nm node require engineering talent and manufacturing relationships that only a handful of companies in the world have assembled. Broadcom is the dominant external partner for that work.
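The order of magnitude can be sanity-checked with a back-of-envelope calculation. Every input below is an illustrative assumption (fleet size, per-chip draw, PUE, electricity rate), not a figure from Broadcom or its customers:

```python
# Back-of-envelope: annual power-and-cooling savings from a ~35% efficiency
# gain across a one-million-XPU fleet. All inputs are illustrative
# assumptions, not disclosed figures.
FLEET_SIZE = 1_000_000        # accelerators in the fleet (assumed)
POWER_KW = 1.2                # assumed draw per general-purpose GPU, kW
PUE = 1.5                     # assumed data-center power usage effectiveness
PRICE_PER_KWH = 0.08          # assumed industrial electricity rate, $/kWh
EFFICIENCY_GAIN = 0.35        # midpoint of the 30-40% range cited above
HOURS_PER_YEAR = 24 * 365

baseline_cost = FLEET_SIZE * POWER_KW * PUE * HOURS_PER_YEAR * PRICE_PER_KWH
savings = baseline_cost * EFFICIENCY_GAIN
print(f"Baseline fleet power bill: ${baseline_cost / 1e9:.2f}B/yr")
print(f"Savings at 35% gain:       ${savings / 1e9:.2f}B/yr")
```

At these inputs the electricity line alone runs to several hundred million dollars per fleet per year; the larger multi-billion figures cited would additionally fold in avoided hardware and multiple fleets over multiple years.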

The custom AI application-specific integrated circuit market has grown from a niche engineering exercise into a multi-hundred-billion-dollar strategic priority. Google has been designing custom silicon for AI workloads since the original Tensor Processing Unit in 2015. Meta's MTIA accelerator program is in its second generation with a third in active co-development. ByteDance, Amazon, Microsoft, and OpenAI have all pursued proprietary silicon strategies at various stages of maturity. The transition from training-focused compute toward inference-focused architecture — where custom chips deliver their greatest efficiency advantages — is accelerating this demand. Management estimates that inference workloads will represent approximately 70% of AI compute by 2027, precisely the application where Broadcom's co-designed ASICs outperform general-purpose alternatives most dramatically.

Broadcom's only meaningful competitor in custom AI silicon is Marvell Technology. Marvell's AI revenue reached approximately $1.2 billion in fiscal 2024 — roughly one-sixth of what Broadcom generated from AI alone in that year. The asymmetry compounds through attachment revenue: Broadcom is simultaneously the dominant supplier of the networking silicon that connects AI chips within data-center clusters. The customer that buys a Broadcom custom accelerator almost always buys Broadcom's Ethernet switches and optical interconnects alongside it. Marvell cannot offer this end-to-end infrastructure proposition. No other competitor is positioned to.

The enterprise virtualization market — the second pillar of Broadcom's business — has a different character. It is mature, measured in tens of billions of dollars annually, and dominated by a single franchise that Broadcom acquired in November 2023. VMware's hypervisor and private cloud stack runs beneath a substantial fraction of enterprise computing globally. Organizations run mission-critical workloads on VMware not because it is the cheapest option but because re-platforming a portfolio of enterprise applications requires months of engineering time, operational risk, and service disruption that most technology executives are unwilling to accept without compelling cause. This reluctance is the software's moat — not innovation, but inertia made rational by risk.

Broadcom is a company Hock Tan built through acquisition. The business that exists today bears little resemblance to the wireless component supplier Tan controlled in 2006. Through a disciplined sequence of transactions — Brocade, CA Technologies, Symantec's enterprise security division, and finally VMware — Tan assembled a portfolio of durable, cash-generative franchises in semiconductor infrastructure and enterprise software. The model is consistent across all of them: acquire a market-leading asset in a structurally defensible position, restructure the cost base, convert customers to subscription contracts, harvest free cash flow, and redeploy into the next acquisition. The VMware deal, completed for $61 billion with $8 billion in assumed debt, was Broadcom's largest transaction and the most consequential. It doubled the company's revenue and established a software segment now generating $27 billion annually at operating margins approaching 80%.

The combined entity generated $63.9 billion in revenue in fiscal 2025, split between $36.9 billion in semiconductor solutions and $27 billion in infrastructure software, producing $26.9 billion in free cash flow — a 42% conversion rate that reflects both the capital-light nature of fabless semiconductor manufacturing and the near-pure-margin economics of renewal-based enterprise software. In Q1 fiscal 2026, revenue reached $19.3 billion with AI semiconductor revenue of $8.4 billion growing 106% year-over-year, and management guided Q2 to $22 billion with AI revenue projected at $10.7 billion. Annualized, this implies an $88 billion revenue run-rate within two years of a business that generated $52 billion in fiscal 2024.
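The conversion rate and the run-rate claim are both simple arithmetic on the reported figures, and both check out:

```python
# FY2025 FCF conversion and the annualized Q2 FY2026 run-rate,
# using figures reported in the text.
fy2025_rev_b = 63.9           # FY2025 revenue, $B
fy2025_fcf_b = 26.9           # FY2025 free cash flow, $B
q2_guide_b = 22.0             # Q2 FY2026 revenue guidance, $B

fcf_conversion = fy2025_fcf_b / fy2025_rev_b
run_rate_b = q2_guide_b * 4   # simple 4x annualization
print(f"FY2025 FCF conversion: {fcf_conversion:.1%}")
print(f"Annualized run-rate:   ${run_rate_b:.0f}B")
```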

The moat question — whether Broadcom's position in AI silicon is durable or merely a function of being early and large — is the most important analytical question for the investment case. The mechanism that excludes competitors in custom silicon is the co-design cycle. Developing an advanced AI accelerator for a hyperscaler customer is not a transaction; it is a multi-year joint engineering program. Broadcom engineers embed in the customer's technical teams during the architecture phase, iterating on chip floor plans and memory hierarchy designs before a production specification is finalized. The resulting chip contains IP that is neither entirely Broadcom's nor entirely the customer's — it is the product of a collaborative process that, once completed through a product generation, creates an information advantage for the incumbent in designing the next generation. A potential competitor entering the relationship cold must re-derive architectural knowledge the incumbent already possesses. Combined with multi-generational contractual commitments — Google's TPU partnership extends to 2031, Meta's MTIA program commits through 2029 — the switching cost is structural.

In networking silicon the moat is simpler and older: Broadcom's Tomahawk and Jericho product families dominate data-center switching with over 80% share in high-performance Ethernet infrastructure. The Tomahawk 6 delivers 102.4 terabits per second of switching capacity in a single chip. No competitor is shipping an equivalent product. The Jericho4, which enables distributed AI computing across multiple data centers with 3.2 terabits-per-second of HyperPort connectivity, is currently the only commercially available solution capable of connecting a million-XPU cluster across geographically distributed facilities. This dominance did not emerge from better marketing; it emerged from a decade of silicon investment in a market most chip companies decided was insufficiently large to pursue. The AI infrastructure buildout grew around Broadcom's installed position rather than Broadcom growing into the AI market.

The VMware moat is real but under observable pressure. The pricing decisions made post-acquisition — some customers report cost increases ranging from 800% to 1,500% — have set a migration clock for a meaningful portion of the customer base. Nutanix, the primary commercial beneficiary of VMware dissatisfaction, reports having migrated over 30,000 customers from VMware through April 2026. Gartner projects VMware's enterprise virtualization market share declining from roughly 70% today to approximately 40% by 2029. These numbers are genuine, and the infrastructure software segment's revenue growth — which looks impressive at 26% year-over-year in fiscal 2025 — should be understood in context: it is substantially a function of aggressive price increases on a customer base that is simultaneously contracting. At the largest enterprise level, where 90-plus percent of the top 10,000 customers have already converted to VMware Cloud Foundation subscription contracts, the calculation favors staying. The churn is concentrated among smaller customers for whom Broadcom was willing to accept the loss.

Company              Gross Margin   Custom AI ASIC Share   Software Op. Margin   FCF Margin
Broadcom             77%            ~70%                   78%                   ~42%
Marvell Technology   ~50%           ~20%                   N/A                   ~20%
Intel                ~39%           Negligible             N/A                   Negative

The financial profile is exceptional across every dimension that matters. Adjusted EBITDA margins have expanded from approximately 62% in fiscal 2024 to 68% in fiscal 2025 and held there consistently through the most recent reported quarter. Free cash flow reached $26.9 billion in fiscal 2025 — up 39% year-over-year — against capital expenditures of only $623 million, roughly 1% of revenue. The business manufactures nothing: all silicon production occurs at TSMC, and the engineering investment that drives competitive differentiation flows through the income statement as R&D rather than capital spending. This is free cash flow that is real, recurring, and structurally growing.

The debt load warrants acknowledgment. Broadcom carries approximately $63.8 billion in long-term debt, the legacy of financing the VMware acquisition, against $14.2 billion in cash. At a free cash flow run-rate approaching $35–40 billion annually, the net leverage is manageable and declining. The VMware acquisition has already justified its cost: infrastructure software revenues of $27 billion at 78% operating margins represent an annual operating contribution of approximately $21 billion from an asset acquired for $69 billion including assumed debt, implying a payback period on acquisition cost, measured in operating income, inside four years. The debt is worth contextualizing against the asset it purchased.
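The leverage and payback arithmetic above can be reproduced directly from the figures in the text (the FCF run-rate is taken at the midpoint of the stated $35–40 billion range):

```python
# Net leverage and VMware payback, using figures from the text.
long_term_debt_b = 63.8
cash_b = 14.2
fcf_run_rate_b = 37.5               # midpoint of the $35-40B range
net_debt_b = long_term_debt_b - cash_b
leverage = net_debt_b / fcf_run_rate_b

acquisition_cost_b = 69.0           # $61B deal value + $8B assumed debt
software_rev_b = 27.0               # infrastructure software revenue, $B
op_margin = 0.78                    # software operating margin
annual_op_income_b = software_rev_b * op_margin
payback_years = acquisition_cost_b / annual_op_income_b

print(f"Net debt: ${net_debt_b:.1f}B ({leverage:.1f}x FCF run-rate)")
print(f"VMware payback on operating income: {payback_years:.1f} years")
```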

GAAP and non-GAAP figures diverge materially. GAAP diluted EPS for fiscal 2025 was $4.77, while non-GAAP adjusted EPS strips out primarily stock-based compensation and acquisition-related intangibles amortization. Annual stock-based compensation runs approximately $3–4 billion. Amortization of intangibles from the VMware acquisition adds several billion more annually. Investors relying solely on non-GAAP figures should understand that stock-based compensation is real economic dilution — it is the cost of retaining the engineers who build the moat, and it does not disappear because accountants are permitted to exclude it from adjusted metrics.

Hock Tan's capital allocation record is one of the best in technology over the past twenty years. The thesis has been consistent: identify businesses with durable market positions in essential infrastructure, acquire them at prices the market has undervalued, consolidate product portfolios, shift customers to recurring contracts, and extract free cash flow at rates the original management never considered achievable. The 5-year total shareholder return through fiscal 2025 exceeded 1,000%. The VMware integration exemplifies the pattern: Broadcom consolidated VMware's product catalog from approximately 170 offerings to two principal products, converted 90% of the top 10,000 enterprise customers to subscription contracts, and expanded infrastructure software operating margins from the mid-20% range under independent VMware to 78% today — in two years.

The regulatory environment introduces an underappreciated risk to the software margin structure. The CISPE coalition — 46 European cloud infrastructure companies — has filed suit in the EU General Court seeking to annul the original regulatory approval of the VMware acquisition, citing alleged abuse of dominance and pricing increases in the 800–1,500% range. A separate EU antitrust probe opened in February 2026 focuses on VMware licensing restrictions. These proceedings could take one to two years to resolve. The worst-case outcome — forced restoration of pre-acquisition licensing terms for European customers or mandated softening of the subscription-only model — would create a material headwind to the infrastructure software margin assumptions embedded in the bull case. This is not a theoretical risk; it is an active legal process with a credible coalition of complainants.

Management compensation is high and linked to the right outcomes. Tan's fiscal 2025 total compensation of $205 million consists of 98% in stock awards, with long-term grants tied to achieving $120 billion in cumulative AI product sales by 2030. The structure aligns Tan's personal wealth directly with the AI revenue outcome the bull case depends on.

The growth runway is the core of the investment case, and the numbers are remarkable enough to warrant careful examination rather than dismissal as AI hype.

Period                     Total Revenue ($B)   AI Revenue ($B)   AI % of Revenue   Infra. Software ($B)   Adj. EBITDA Margin   FCF ($B)
FY2024                     51.6                 ~8                ~15%              21.4                   ~62%                 ~19
FY2025                     63.9                 20                31%               27.0                   ~68%                 26.9
Q1 FY2026 (ann.)           ~77                  ~34               ~44%              ~27                    ~68%                 ~32
Q2 FY2026 guided (ann.)    ~88                  ~43               ~49%              —                      ~68%                 ~40
FY2027E (mgmt. SAM)        —                    60–90             —                 —                      —                    —

AI revenue has grown from approximately $8 billion in fiscal 2024 to $20 billion in fiscal 2025, and the fiscal 2026 quarterly trajectory ($8.4 billion reported in Q1, $10.7 billion guided for Q2) implies an annualized pace above $40 billion within a year of the fiscal 2025 close. This growth is not the product of adding new customers at the margin; it is the product of three hyperscaler programs scaling from initial deployment toward full-fleet installation. The $73 billion AI backlog as of Q1 fiscal 2026 represents contracted future revenue extending roughly 18 months forward: not a pipeline estimate or analyst projection, but purchase commitments already signed.
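The roughly 18-month coverage figure is consistent with a simple ramp model: the backlog consumed by quarterly AI revenue starting at the Q2 guide. The sequential growth rate below is an assumption for illustration; the backlog and guidance figures are from the text:

```python
# Backlog coverage under a simple ramp: $73B consumed by quarterly AI
# revenue starting at the Q2 FY2026 guide and growing at an assumed
# ~10% sequential rate.
backlog_b = 73.0
quarterly_rev_b = 10.7         # Q2 FY2026 AI revenue guidance, $B
growth_qoq = 0.10              # assumed sequential growth (illustrative)

consumed, quarters = 0.0, 0
while consumed < backlog_b:
    consumed += quarterly_rev_b
    quarterly_rev_b *= 1 + growth_qoq
    quarters += 1
print(f"Backlog consumed in ~{quarters} quarters (~{quarters * 3} months)")
```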

The structural driver of AI revenue acceleration is the transition of hyperscaler AI infrastructure from GPU-dense training clusters toward inference-optimized architectures. Model training, the use case where NVIDIA has historically dominated, is a one-time or infrequent activity per model generation. Inference — running trained models in production to serve user requests — is continuous and scales with user traffic. Custom ASICs deliver their greatest efficiency advantages specifically in inference. As the installed base of large language models shifts from development to deployment at scale, the economics of custom silicon improve relative to general-purpose GPUs, and Broadcom's revenue follows.

The penetration argument is more precise than the headline share numbers suggest. Broadcom currently generates its $20 billion AI revenue base primarily from three identified core customers: Google, Meta, and ByteDance, each in the process of deploying one-million-XPU clusters by 2027. Three additional customers (Amazon, Microsoft, and OpenAI) are in various stages of custom silicon engagement that have not yet reached full-scale production revenue, and the company is in active engagement with four further hyperscalers not yet publicly identified. Broadcom's $60–90 billion serviceable addressable market estimate for fiscal 2027 is derived exclusively from the three core customers' deployment plans; it assumes no material contribution from the seven other programs. In other words, the company has captured substantial revenue from three of roughly ten active hyperscaler AI programs. If two or three of the additional engagements reach scale on the same trajectory as the first three, the SAM estimate is conservative.

At approximately $420 per share, Broadcom carries a market capitalization of roughly $2 trillion and an enterprise value of approximately $2.05 trillion. The company's Q2 fiscal 2026 revenue guidance of $22 billion, annualized, implies a forward revenue run-rate of approximately $88 billion. At 68% adjusted EBITDA margins, this implies roughly $60 billion in annual EBITDA — an EV/EBITDA multiple of approximately 34 times on a forward basis. On a free cash flow basis — assuming 45% FCF conversion on $88 billion in annualized revenue, or approximately $40 billion — the stock trades at roughly 51 times forward free cash flow. On next-twelve-month earnings, the forward P/E is approximately 31 times.
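The forward multiples follow mechanically from the figures above (the 45% FCF conversion is the text's own assumption):

```python
# Forward valuation multiples, using figures from the text.
ev_b = 2050.0                        # enterprise value, $B
run_rate_rev_b = 22.0 * 4            # Q2 FY2026 guide, annualized
ebitda_b = run_rate_rev_b * 0.68     # 68% adjusted EBITDA margin
fcf_b = run_rate_rev_b * 0.45        # assumed 45% FCF conversion

print(f"Forward EV/EBITDA: {ev_b / ebitda_b:.1f}x")
print(f"Forward EV/FCF:    {ev_b / fcf_b:.1f}x")
```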

These multiples are high in absolute terms. The defense of them rests on three observations. First, the $73 billion AI backlog provides 18 months of forward revenue certainty — contracted demand, not analyst extrapolation, that materially reduces near-term execution risk. Second, the earnings growth rate makes the forward multiple look less severe than the headline figures suggest. If AI revenue approaches $60–70 billion in fiscal 2027 as the company's own SAM target implies, and infrastructure software holds near current levels, total revenues could approach $90–100 billion with EBITDA margins likely expanding toward 70%. Free cash flow in that scenario approaches $55–65 billion, implying an EV/FCF of 31–37 times on the out-year — reasonable for a business with these structural characteristics. Third, the combination of genuine irreplaceability in both core positions — no competitor has Broadcom's ASIC co-design depth, no competitor has VMware's enterprise penetration — warrants a premium to commodity semiconductor multiples that the history of this business suggests is sustainable.
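The out-year multiple range in the second observation is just the $55–65 billion FCF scenario divided into the current enterprise value:

```python
# Out-year EV/FCF range: the FY2027 FCF scenario from the text ($55-65B)
# against the current ~$2.05T enterprise value.
ev_b = 2050.0
fcf_low_b, fcf_high_b = 55.0, 65.0
ev_fcf_low = ev_b / fcf_high_b       # multiple if FCF lands at the high end
ev_fcf_high = ev_b / fcf_low_b       # multiple if FCF lands at the low end
print(f"Out-year EV/FCF: {ev_fcf_low:.1f}x to {ev_fcf_high:.1f}x")
```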

The bear case worth taking seriously is the cyclicality argument. Hyperscaler capital expenditure booms have historically been followed by inventory digestion cycles during which demand decelerates sharply. The $73 billion backlog provides confidence through approximately mid-2027; beyond that, the thesis depends on whether the next generation of hyperscaler AI infrastructure buildout arrives on schedule and whether Broadcom has won those next-generation socket designs. Losing even one of the three core customers to in-house silicon development or a competitor would remove $7–8 billion in annual AI revenue at current run-rates. The VMware erosion risk — Gartner's projection of market share declining from 70% to 40% by 2029 — adds a second headwind that would suppress infrastructure software growth even as AI revenues expand.

The intelligent bear argues that both risks materialize simultaneously: AI capex moderates after the initial deployment wave, and VMware churn accelerates as EU regulatory pressure forces pricing concessions. In that scenario, revenue growth decelerates sharply from the current 30% pace toward single digits, and the 34x forward EV/EBITDA multiple re-rates violently. The answer to this bear case is the backlog: $73 billion in signed purchase commitments means the first part of this scenario — AI deceleration — cannot arrive before mid-2027 at the earliest, and the hyperscaler guidance for 2026 capital expenditure (Google, Meta, Amazon, and Microsoft collectively guiding to over $300 billion combined for the year) suggests the next replenishment cycle is already underway before the current backlog is consumed.

A position in Broadcom at current prices is a claim on two things: that the AI infrastructure buildout is structural rather than cyclical — that hyperscalers are not over-building capacity in a way that produces a multi-year digestion cycle — and that Hock Tan continues to execute the same playbook that has compounded shareholder value at over 1,000% in five years. Both are reasonable assumptions. Neither is a certainty. What the $73 billion backlog provides is the rarest thing in investing: not a certainty, but a legitimate claim to 18 months of visibility in a business otherwise susceptible to demand volatility. What would change the conclusion is either a meaningful reduction in hyperscaler AI capex guidance beyond fiscal 2026 — which would signal that the backlog replenishment cycle is weaker than current commitments imply — or a forced EU remedy that materially alters VMware's subscription economics.

The cash is real. The backlog is real. The moat in both businesses is real. At 31 times next-twelve-month earnings for a business growing earnings north of 50% annually with 18 months of contracted revenue and a CEO who has never been wrong about the value of the businesses he buys, this is one of the rare cases where paying a premium multiple is the honest thing to do.
