Intel's Data Center Demise Is Overstated

We think customers will look past the nanometer headlines.

The primary issue is that Intel’s first 10-nanometer server products, code-named Ice Lake, won’t hit the market until early 2020. Consequently, the market has assumed that AMD’s 7-nanometer Epyc server central processing unit, code-named Rome and set for a 2019 launch, will be immediately superior to Intel’s offerings at the time of launch. However, we don’t necessarily think this is the case: we consider Taiwan Semiconductor Manufacturing Co.’s TSM 7-nm process roughly equivalent to Intel’s 10-nm process, rather than a sign that AMD and TSMC have leapt past the semiconductor titan. Historically, the marketing and engineering definitions of the number attached to a process node were in agreement; more recently, the two have diverged to the point of no return. We consider TSMC’s 10-nm process to be more comparable to Intel’s 14-nm, as TSMC’s 16-nm was just a repurposing of its 20-nm process with FinFET (3D transistors), while Intel pioneered FinFET with its 22-nm process.

While AMD will enjoy a headline process technology advantage, we doubt it will be able to equal Intel’s expansive custom CPU business tailored to the cloud or its recent work in artificial intelligence. Competition is certainly a positive in the x86 server space and will be well received by cloud and enterprise customers alike, particularly concerning pricing.

However, AMD remains a fraction of the size of Intel, and we don’t believe it has the scale or bandwidth to manage the customization that major cloud vendors demand. This fact is reflected by our wide moat rating for Intel and no-moat rating for AMD. Meanwhile, head-to-head comparisons of public offerings from Intel and AMD don’t tell the whole story, as an increasing percentage of Intel’s Xeon volume sold to cloud vendors is customized, with product specifications kept under wraps.

As such, we hesitate to prematurely conclude that AMD’s theoretical 7-nm server chip performance will be superior to that of Intel’s 14-nm Cooper Lake server chips launching in 2019. Overall, we expect AMD to capture some share (about 5%-10%), but we do not think this will come from the high-value areas that Intel has been catering to and aggressively targeting (cloud and AI), or that AMD will gain sufficient market share in the server space to validate its current stock price.

Today, Intel produces low-volume, high-end PC chips at the 10-nm node. Historically, higher-volume PC processor products would lead initial process node transitions, with Intel improving product yields to a healthy threshold (85%-plus) to reduce the overall manufacturing cost component. Higher-margin server CPUs would then be ported to the new process once it reached cost parity with the prior node. Given that 10-nm yields continue to lag historical norms at this stage of the transition, Intel has repeatedly delayed high-volume manufacturing on its 10-nm node. On the basis of recent management commentary, it appears Intel’s PC chips will ramp on the 10-nm process over the course of 2019, with server chips a quick follow-on in early 2020. Intel has run early-ship programs in the past to offer key customers early access to its Xeon CPUs; it did this with Google for its highly anticipated Skylake server CPUs in 2017, before the official launch date. We think similar arrangements will recur as Intel caters to its highly valued cloud customers with early access to its latest and greatest.

Despite its 10-nm delays, we think Intel’s data center group prospects are bright. During Intel’s Data-Centric Innovation Summit in August, management highlighted the chip titan’s numerous efforts in the data center group. Navin Shenoy, head of DCG, conveyed how Intel has moved beyond being a component supplier of CPUs to the PC and server markets to become a data-centric chip titan with a broad product portfolio for data processing, storage, and transfer. Intel defines its data-centric markets as any non-PC space, loosely comprising the data center, nonvolatile memory, Internet of Things/advanced driver-assistance systems, and field-programmable gate arrays. In 2013, only one third of Intel’s revenue was data-centric. Today, nearly half of the company’s sales are derived from data-centric businesses that are growing in the double digits. Shenoy further noted that “90% of the world’s data has been created in the last two years. ... Only about 1% of that data is being utilized to create any sort of meaningful business value.” As such, we think Intel’s 2022 total addressable market opportunity of $200 billion-plus is not out of the realm of possibility, particularly as it looks to help customers monetize the vast array of data currently sitting idle.

Cloud Exposure Will Remain Strong Thanks to Custom Work

One of Intel’s primary vessels for targeting this total addressable market is the shift to the cloud. We think Intel’s increasingly deep and intimate relationships with cloud service providers, helping them optimize workloads, will make it difficult for competitors such as AMD to capture meaningful server CPU share, at least in most of the growing cloud environment.

Major cloud vendors have created massive public clouds that provide elasticity for their own workloads as well as a customer base that subsidizes their infrastructure research and development and data center build-outs. As enterprises realize that operating on-premises IT infrastructure is inefficient, they will turn to hyperscale titans such as Amazon, Microsoft, and Alphabet (Google). Beyond traditional enterprise workloads moving to the cloud, Intel has also benefited from the market expansion created by new consumer services that leverage the cloud (Facebook, Twitter, Netflix, online gaming) as well as enterprises seeking new use cases, such as AI. Shenoy noted that enterprise workloads offloaded to the cloud represent only one third of Intel’s current cloud business.

As a consequence of these trends, Intel has been forced to adjust its DCG strategy over the past decade. Instead of selling a limited range of chips to original-equipment manufacturers that would in turn sell completed servers to enterprises, Intel is now dealing directly with a smaller array of customers--composed of hyperscale cloud vendors like Amazon, Google, and Microsoft--to meet a wide array of performance, energy consumption, and other metrics. Customized processors are becoming the norm as these vendors juggle a diverse set of workloads for both themselves and myriad customers that rely on their cloud platforms for unique applications. Fifty percent of the Xeon server processors sold to the 10 major cloud service providers in 2017 were custom-designed for the individual customer, up from 18% in 2013.

Raejeanne Skillern, head of Intel’s cloud service provider platform group, detailed the accelerated growth Intel has enjoyed so far in cloud-related sales this year. Cloud revenue, which accounted for 23% of DCG sales in 2014, rose at a 30% CAGR from 2014 to 2017. In the first half of 2018, cloud accounted for 43% of DCG revenue, up 43% over the first half of 2017. This level of acceleration can be predominantly attributed to the 2017 launch of the Xeon scalable server products (code-named Skylake) and probably won’t persist to this extent. However, the expansive addressable market in the cloud bodes well for DCG, in our view, led by digital retail, advertising, video and media, and broader cloud services. Skillern estimates that these four markets will reach $4.9 trillion, $400 billion, $120 billion, and $300 billion, respectively, by 2021. These opportunities fuel our assumption that cloud will rise at a CAGR of 24% over the next five years, accounting for about 60% of DCG revenue by 2022.
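
As a rough sanity check on what these compounding assumptions imply, a 24% CAGR sustained for five years nearly triples a revenue base. The arithmetic can be sketched as follows (the helper functions and figures are illustrative, not from the report):

```python
def cagr(start, end, years):
    """Compound annual growth rate implied by two endpoint values."""
    return (end / start) ** (1.0 / years) - 1.0

def grow(base, rate, years):
    """Value of `base` after compounding at `rate` for `years` years."""
    return base * (1.0 + rate) ** years

# A 24% CAGR sustained over five years multiplies the base ~2.93x.
multiplier = grow(1.0, 0.24, 5)

# Inverting the calculation recovers the 24% rate from the endpoints.
implied_rate = cagr(100.0, 100.0 * multiplier, 5)
```

The same inverse formula is how a reported "30% CAGR from 2014 to 2017" is derived from endpoint revenue figures.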

Each of these markets has a different set of requirements for workloads that are often at massive scale. Outside of Intel’s standard product road map, much of the optimization work that Intel does for cloud service providers is unique to that customer. This typically involves tweaking processor specifications to better suit the workload run at a company like Google, Amazon, or Facebook. However, going forward we expect Intel to be able to integrate intellectual property from the customer itself, third-party IP, and Intel IP to better cater to these needs, which are continuously diverging from general-purpose computing. In addition to the processor itself, Intel’s work in memory, storage, connectivity, silicon photonics, and accelerators leads to a platform approach in lieu of a component one.

At the summit, Shenoy brought Bart Sano, vice president of platforms at Google, on stage to illustrate the extent of the two companies’ collaboration. Google was the first customer to receive early shipments of the Skylake server CPUs in 2017, enabling its cloud customers to deploy workloads on the architecture sooner than traditional server refresh cycles would allow. This culminated in Google honoring Intel as its infrastructure partner of the year. Furthermore, Intel’s Optane persistent memory (a new form of memory that sits between DRAM and NAND and can better handle large data sets for database and AI applications) will soon be deployed in Google’s cloud. We expect this new class of memory to solve many bottlenecks faced by data-intensive workloads. Sano cited SAP HANA as a major database solution that will deploy Optane persistent memory for its customers. While Intel’s memory endeavors remain in their infancy, we believe the increasing integration of its computing and memory (3D NAND and Optane) products will be crucial for DCG’s prospects as the line between discrete processing and memory blurs.

Further solidifying Intel’s position in the cloud is its Intel Select Solutions program, through which the company assists customers in deploying complex solutions. Intel verifies workload-optimized configurations by engineering, validating, and testing those solutions with customers at the hardware and software levels. This helps accelerate the time to market for these systems by modernizing the entire infrastructure as opposed to simply selling a Xeon CPU in isolation. We believe this program creates a virtuous cycle for Intel to add increasingly more value to its customers and simultaneously increase customer switching costs.

AI Accounted for $1 Billion in Xeon Sales in 2017, With More to Come

One of the applications accelerated by the shift to the cloud is the explosive artificial intelligence phenomenon. This involves collecting large swaths of data, including machine signals, audio, video, speech, and text, then developing algorithms that produce conclusions in the same manner as (or better than) humans. Training and inferencing are the two main components of deep learning. The former involves teaching a computer to complete a task, while inferencing shows the trained computer a brand-new instance on which to perform the task. While most of these processes occur in the data center today, we suspect the inferencing function will move to the edge over time, particularly in smartphones, vehicles, and smart homes for applications that require lower latency.
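
To make the training/inferencing distinction concrete, consider a deliberately tiny example: "training" fits a model's parameter from known data, and "inference" applies the fitted model to an input it has never seen. This one-parameter least-squares toy is entirely our own illustration, not anything Intel ships:

```python
def train(xs, ys):
    """'Training': fit the slope w of y = w * x by least squares."""
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

def infer(w, x_new):
    """'Inference': apply the trained parameter to an unseen input."""
    return w * x_new

w = train([1, 2, 3], [2, 4, 6])   # learns w = 2.0 from known data
prediction = infer(w, 10)         # applies the model to a new instance
```

Training is compute-intensive and episodic; inference runs continuously at scale, which is why it dominates data center (and, increasingly, edge) deployments.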

Shenoy estimated that the AI data center market was about $2.5 billion in 2017, with Intel capturing about $1 billion (in Xeon CPUs only, not FPGAs). We assume Nvidia NVDA captured most of the remainder, reflecting its first-mover advantage. Although we believe Nvidia is well positioned to continue benefiting from the burgeoning AI opportunity, we think Intel also stands to gain significantly, an opinion not shared by the stock market.

Nvidia’s graphics processing units have garnered considerable attention from the market, especially in the training phase of AI. While the raw performance of non-CPU variants (GPUs, FPGAs, and application-specific integrated circuits) in training and inferencing is noteworthy, other considerations, such as the utilization, total cost of ownership, and homogeneity of data centers, must be contemplated.

The common thread that links the data centers of cloud vendors is scale. A global data center footprint requires immense capital intensity, along with a large set of fungible resources that can be allocated to a broad set of constantly evolving workloads. This prerequisite enabled general-purpose CPUs from Intel to dominate the server market, owing to the practical economics and ease of maintenance accompanied by a homogeneous infrastructure. A company such as Google could run the few applications that required special hardware for free, using the excess capacity of its large data centers. The scalability and flexibility of general-purpose CPUs have led to most of today’s inference workloads running on Xeon processors.

According to Bratin Saha, vice president of machine learning platforms at Amazon AI, “Machine learning is a big part of our heritage. It works on GPUs today, but it also works on instances powered by highly customized Intel Xeon processors.” Shenoy pointed out that Intel helped Amazon optimize its deep-learning frameworks on the highly customized Xeon processor, delivering a 7 times performance improvement. Kim Hazelwood, head of the Facebook AI Infrastructure Foundation, echoed this sentiment: “Inference is one thing we do, but we do lots more. That’s why flexibility is really essential.” Consequently, Facebook runs most of its inference workloads on Xeon processors.

Beyond traditional process technology transitions, Intel has sought innovative ways to improve the performance of its Xeon chips for AI. Albeit off a very low base, Intel’s Xeon Scalable chips, launched in July 2017, improved inference and training performance by 277 and 240 times, respectively, over the 2014 Haswell-based Xeon processor. These gains include software and framework optimizations as well as new hardware features such as AVX-512. Intel AVX-512 is a set of CPU instructions that accelerates workloads such as scientific simulations, financial analytics, deep learning, 3D modeling and analysis, image/video processing, and data compression. The feature widens the processor’s vector registers, meaning twice the number of data elements can be processed per instruction relative to AVX-512’s predecessor (AVX). Since the Xeon Scalable launch, the company has improved inference performance a further 5.4 times, primarily through software optimizations.
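
The throughput doubling follows directly from register width: AVX-512 registers are 512 bits wide versus 256 bits for AVX, so each instruction operates on twice as many packed elements. A back-of-the-envelope sketch, assuming 32-bit single-precision elements:

```python
def elements_per_register(register_bits, element_bits):
    """How many packed data elements fit in one SIMD register."""
    return register_bits // element_bits

avx_lanes    = elements_per_register(256, 32)   # AVX: 8 single-precision floats
avx512_lanes = elements_per_register(512, 32)   # AVX-512: 16 single-precision floats
ratio = avx512_lanes // avx_lanes               # the 2x data-element claim
```

The same ratio holds for 64-bit doubles (4 vs. 8 per register), which is why the gain applies across the scientific and analytics workloads listed above.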

Intel’s next-generation Xeon chip, Cascade Lake, will ship in the fourth quarter. Though it will once again be manufactured on Intel’s 14-nm process, new features are expected to improve inference performance by 11 times relative to the July 2017 launch. These features include Deep Learning Boost and a new integrated memory controller for faster data transfer to and from the processor, in addition to common improvements (higher frequencies, new instructions, optimized caching). Deep Learning Boost extends AVX-512, handling certain AI-related tasks with fewer instructions and thereby boosting performance. The key takeaway, in our view, is that Intel has many enhancements and levers to pull to satiate cloud customers’ voracious appetite for greater performance, especially in AI, beyond simple process node transitions.

As AI workloads evolve, we foresee a diverse set of applications that span the data center all the way to the edge. One feather in Intel’s cap is its broad product portfolio for all performance and energy consumption requirements. In contrast to Nvidia, which boasts expertise solely in GPUs, Intel has augmented its core CPU business with FPGAs and ASICs mainly by acquiring Altera, Mobileye, Nervana, and Movidius. For example, in the data center, flexibility to run different workloads is prioritized. In contrast, power constraints are more likely to be critical at the edge in IoT-related devices.

The company is also working to build the software stack that ties each of these solutions together--a tall order, but one we think Intel is on the right track to achieve. Naveen Rao, vice president of Intel’s artificial intelligence products group, explained that about half of his group’s resources are dedicated to software development. The company’s nGraph project will standardize the building blocks of neural networks, translating them to optimized libraries or code for a diverse set of hardware solutions. Today, nGraph connects the popular TensorFlow framework to Xeon, making it easier for application developers to benefit from low-level software improvements created by Intel engineers. Meanwhile, Intel’s OpenVINO is an open-source toolkit that allows computer vision workloads to be optimized across different Intel processors (including its Movidius or FPGA offerings).

Rao also walked through the development cycle for deploying an AI solution. The main takeaway, in our view, was that the training, or model parameter fitting, accounted for only 30% of the process, with the remainder currently running predominantly on Intel Xeon CPUs. Rao went on to summarize the strides Intel has made in optimizing the software for its Xeon products to better handle AI-related workloads, with the aforementioned AVX-512, DL Boost, and other upcoming enhancements.

Rao was fairly tight-lipped on Intel’s efforts to develop ASICs for AI applications, though the company will launch its own variant in 2019. Dedicated silicon is inherently more expensive and takes longer to design and manufacture in high volume, creating the risk that a chip is outdated upon arrival. To justify the significant up-front costs of a cutting-edge ASIC, a substantial performance boost is necessary. Google offers the highest-profile example of a successful ASIC with its tensor processing unit, which accelerates inferencing workloads in Google’s data centers; newer versions of the TPU can also run training workloads. As these algorithms mature and become more mainstream, ASICs will carve out a more meaningful share of the AI chip market.

Intel’s Valuation Compelling

Our fair value estimate for Intel is $65 per share. As the PC market continues to decline, we see server processor sales offsetting the erosion in PC processor sales, ultimately leading to overall revenue growth in the midsingle digits through 2022.

In the near term, we expect Intel’s PC-derived revenue to decline in the low single digits. However, the proliferation of cloud computing and burgeoning Big Data and artificial intelligence trends should provide tailwinds for the data center group, which we see growing at an 11% CAGR through 2022. By then, we believe the PC and data center groups will converge in percentage of total revenue, accounting for roughly 40% each. The company’s auxiliary businesses (Internet of Things, nonvolatile memory, programmable solutions, and automotive) will also drive growth, though these subsegments remain a small portion of total revenue at this juncture.

Beyond our explicit five-year horizon, we foresee the automotive segment spearheading revenue growth. Mobileye’s incumbency in countless advanced driver-assistance systems and its robust pipeline of design wins, coupled with Intel’s technological and financial resources, give us confidence that Intel will be a formidable player in the race to self-driving cars. We estimate a $7 billion 2025 opportunity for Intel’s and Nvidia’s autonomous platform solutions and expect both entities to capture meaningful portions of it ($3.7 billion for Intel and $3.3 billion for Nvidia in self-driving platform revenue).

Intel’s lead in process technology benefits from sizable R&D outlay (21% of revenue on average in recent years), which is critical to the company’s ability to sustain its advantage. Going forward, we believe increasing unit sales of server chips, which as a segment has above-corporate-average gross margins, will partially offset greater costs associated with cutting-edge process technologies. The company’s foray into 3D NAND manufacturing to support its SSD business for servers, however, will depress gross margins. Consequently, we see gross margins tracking around 62% over the next few years. Nonetheless, we think the company can drive operating leverage with more-focused research and development spending toward data center and automotive end markets, while shifting resources away from the declining PC space, leading to operating margins in the low 30s.

About the Author

Abhinav Davuluri

Strategist

Abhinav Davuluri, CFA, is a strategist for Morningstar Research Services LLC, a wholly owned subsidiary of Morningstar, Inc. He covers microprocessors, wafer manufacturing equipment, and other companies in the semiconductor space.

Before joining Morningstar in 2015, Davuluri spent two years as a process engineer for Intel.

Davuluri holds a bachelor’s degree in chemical engineering from the University of Michigan. He also holds the Chartered Financial Analyst® designation.