Chris Cole

The ugly truth behind Intel's woes

Updated: Aug 11, 2020

Since its Q2 earnings call a few weeks ago, Intel Corporation (INTC) shares have plummeted 20% on the announcement of problems with its next-generation 10nm and 7nm manufacturing processes. The collapse has drawn widespread attention from investors, but the situation has been years in the making for those who've been paying attention. Today I'd like to look at some of the technical decisions Intel made, why they've caused problems, and what they imply for the company's future.


 

Lithography techniques


Lithography is an incredibly complicated process, and mastering it confers an enormous competitive advantage. In simple terms, you place a template of the circuit design (a photomask) over a silicon base (a wafer), shine a powerful laser through it [1], and the result looks something like this…


Over time, people tried to fit more transistors into the same area – this leads to increased performance, lower power consumption and the various other benefits outlined in Dennard Scaling [2]. It also becomes progressively more difficult, as you're trying to cram transistors into areas thousands of times smaller than the width of a hair. The industry ran into a particularly tricky wall around the 20nm mark: the wavelength of the light used to 'print' the circuit design became so large relative to the features that it could no longer reliably resolve the complicated patterns needed for all the transistors. Two schools of thought developed to address this problem – multi-patterning (using more than one photomask, each with a simpler pattern, and exposing the wafer with each of these templates separately), and EUV (extreme ultraviolet, using light with a much smaller wavelength than traditional deep ultraviolet). Intel saw success with dual patterning (two templates) on its 22nm and 14nm processes, and chose to go a step further and pursue quad patterning on its 10nm process. [3] Meanwhile, its competitors TSMC and Samsung chose EUV. [4] For reference, Intel themselves have also chosen to pursue EUV for their 7nm process. That might give you a hint as to which was the right choice…
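As a rough, back-of-the-envelope illustration of why that ~20nm wall exists (my own sketch, not anything from Intel's documentation), the textbook Rayleigh criterion relates the smallest printable half-pitch to the wavelength of the light and the numerical aperture (NA) of the optics. The k1, wavelength and NA figures below are typical published values, not process-specific ones.

```python
# Rayleigh criterion: smallest resolvable half-pitch ~= k1 * wavelength / NA.
# Values are typical textbook figures, not any foundry's actual parameters.

def min_half_pitch(wavelength_nm: float, na: float, k1: float = 0.28) -> float:
    """Approximate smallest printable half-pitch (nm) for a single exposure."""
    return k1 * wavelength_nm / na

# Deep ultraviolet (ArF immersion): 193 nm light, NA around 1.35
duv = min_half_pitch(193.0, 1.35)
# Extreme ultraviolet: 13.5 nm light, NA around 0.33
euv = min_half_pitch(13.5, 0.33)

print(f"DUV, single exposure: ~{duv:.0f} nm half-pitch")  # roughly 40 nm
print(f"EUV, single exposure: ~{euv:.0f} nm half-pitch")  # roughly 11 nm
```

The densest layers of a 10nm-class process need half-pitches well below the roughly 40nm a single DUV exposure can manage, so you either split the pattern across several masks (multi-patterning) or switch to the far shorter EUV wavelength.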

Other terms I'll refer to in this piece are yield (how much of a wafer is actually usable) and monolithic (the whole CPU is cut from the wafer as a single piece of silicon) versus chiplets (the CPU is assembled from several smaller pieces of silicon stuck together).

 

The problems with 10nm

Back in 2013, Intel was in its prime. It dominated the CPU market with >90% market share, and was pursuing a tick-tock strategy with its chips – every two years a die shrink ('tick'), with a microarchitecture change ('tock') in the years in between. Intel's roadmaps planned for the next 'tick', 10nm, to arrive in 2016. The 'tock' – the Skylake architecture – came, but the 'tick' never did. Even today, four years after it was supposed to arrive, 10nm still isn't really here. On paper it launched with Cannon Lake in 2018, but total unit numbers were in the thousands, if not the hundreds. On paper, the 'mass-market' Ice Lake generation launched in late 2019, but supply is incredibly limited and performance is inferior to Intel's own 14nm offerings. [5] The latest update is that desktop and datacentre chips will come in the second half of 2021 – but for reasons we shall soon see, it is my opinion that these will yet again be flops. In fact, it is my opinion that 10nm is a total write-off, and that design decisions taken at a very early stage have doomed it to failure.

When you use lithographic techniques, you are bound to have some defects in your wafer – creating billions of devices tens of atoms in size is never going to be perfect. Multi-patterning inherently has a higher defect rate than single exposure: you're running the wafer through the same process multiple times, and every extra pass is another chance to introduce a defect. As I mentioned earlier, Intel is using quad patterning on 10nm, so its defect rates are going to be sky high. At the same time, its use of a monolithic die compounds the problem for high-performance, high-core-count CPU models. As you can see from the blue wafer below, it's difficult to draw large rectangles (high-core-count dies) that are free of defects. In comparison, the red wafer shows AMD's chiplet approach, built on TSMC's less defect-prone EUV process.




Since you can stitch together multiple small dies into one bigger CPU, you use a far greater percentage of the wafer, cutting costs and letting you build however many high-core-count chips you want.
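To put rough numbers on that intuition, here's a back-of-the-envelope sketch using the simple Poisson yield model (yield = e^(−defect density × die area)). The defect densities and die sizes below are made-up illustrative values, not Intel's or TSMC's actual figures, which are closely guarded.

```python
import math

def die_yield(defects_per_cm2: float, die_area_cm2: float) -> float:
    """Poisson yield model: fraction of dice that carry zero defects."""
    return math.exp(-defects_per_cm2 * die_area_cm2)

# Illustrative defect densities only -- the real numbers are trade secrets.
D_QUAD = 0.5   # defects/cm^2: extra exposure passes mean extra defect chances
D_EUV = 0.2    # defects/cm^2: a less defect-prone single-exposure process

# A hypothetical 28-core server CPU: one big die, or four quarter-size chiplets.
monolithic_area = 6.0               # cm^2
chiplet_area = monolithic_area / 4  # cm^2 per chiplet

print(f"Monolithic die (quad patterning): {die_yield(D_QUAD, monolithic_area):.1%} good")  # ~5%
print(f"Single chiplet (EUV process):     {die_yield(D_EUV, chiplet_area):.1%} good")      # ~74%
```

Because each chiplet is tested before packaging, only known-good dies go into the finished CPU, so in this toy example roughly three-quarters of the chiplet wafer ends up in sellable products versus a few percent of the monolithic one. The exact numbers are invented, but the shape of the problem isn't: defects punish big dies disproportionately, and quad patterning makes defects more likely in the first place.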

Of course, it's impossible for anyone outside Intel to know the exact defect rates, yields and unit costs for 10nm. No doubt they are improving as time goes on, as they always do as a process matures. However, I can say with certainty that:

1) They are currently not yielding at rates that would let Intel release high-core-count server chips in any volume, EVEN AT A LOSS.

2) The margins on 10nm will NEVER reach the heights Intel has traditionally enjoyed. Intel has posted gross margins above 60% for the last decade; in my opinion, if Intel were to replace their whole product stack with 10nm, gross margin would never rise above 30%. The maximum price they can charge is capped not only by AMD's offerings but, more importantly, by their own legacy parts. If Intel priced 10nm at a level that gave them healthy margins, the entire lineup would be outcompeted by their own 5-year-old 14nm chips on a price/performance basis, customers would have no reason to upgrade, and revenues would be decimated (a rough sketch of the mechanics follows below).
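To show the mechanics of that squeeze, here is a sketch with entirely invented numbers – this is not a forecast, and these are not Intel's actual wafer costs, prices or yields. The point is only that the selling price is capped from above (by 14nm and AMD) while low yield pushes the cost of each good die up from below.

```python
# Entirely hypothetical numbers, chosen only to show the mechanics of the
# argument -- not Intel's actual wafer costs, prices or yields.

WAFER_COST = 10_000.0   # assumed cost to process one 300 mm wafer ($)
CANDIDATE_DICE = 60     # assumed large server dice per wafer
PRICE_CAP = 2_000.0     # assumed ceiling set by 14nm and AMD price/perf ($)

for yield_rate in (0.60, 0.30, 0.10, 0.05):
    good_dice = CANDIDATE_DICE * yield_rate
    cost_per_good_die = WAFER_COST / good_dice
    gross_margin = 1.0 - cost_per_good_die / PRICE_CAP
    print(f"yield {yield_rate:>4.0%}: cost per good die ${cost_per_good_die:>6,.0f}, "
          f"gross margin {gross_margin:>5.0%}")
```

Because the price can't rise to compensate, every point of yield lost comes straight out of gross margin – the yield problem and the margin problem are really the same problem.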


These are bold statements but I believe Intel’s actions over the past few years, and their planned actions over the next few, support this view.

When you release a new generation of processors, you always want it to be 'better' than the previous generation. This may seem incredibly obvious; the only exception is when the design has such big inherent flaws that you physically can't achieve it. For instance, the Bulldozer architecture AMD released in 2011 performed worse than AMD's own previous-generation Phenom II architecture [6], nearly bankrupting the company. The design maximised core counts on the belief that multi-threaded performance was the future, yet had pairs of cores share caches and FPUs, which crippled the very multi-threaded performance it was betting on.

Intel find themselves in a similar situation today. The design choices they made back in 2013 mean it is impossible to mass-produce high-core-count chips on 10nm. This would have been fine if their monopoly had continued and the mainstream had stayed at 4-core, 8-thread CPUs – indeed, they are producing Ice Lake laptop CPUs today that have 4 cores. However, the resurgence of AMD with its high-core-count Zen architecture forced Intel to raise their own core counts to compete. Core counts have doubled across the entire product stack, which is manageable on 14nm with its double patterning, but not on 10nm. The result is that, at the same price point, Intel's current-generation 14nm chips massively outperform their 10nm chips, with the higher core counts outweighing any density improvements 10nm brings. Similarly, leaks for the upcoming 10nm Alder Lake desktop and Ice Lake Xeon chips suggest that the maximum core count on 10nm, 28, will be 33–50% lower than on 14nm [7] – not to mention AMD's offerings, which top out at 2.3x the core count at half the price. [8] The persistent lack of 10nm chips that can outperform their predecessors, despite us now technically being on '10nm+++', suggests there is a fundamental barrier in the technology that no amount of delays and extra engineering can get past. 10nm is rotten from the very first steps taken.


 

7nm and beyond

Now that we've established just how much of a disaster Intel's 10nm process is, what about 7nm? It should be better, right? After all, it's built on the superior EUV rather than SAQP (self-aligned quad patterning). The market clearly expects it to be Intel's saviour, given that the recent drop in Intel's share price was widely attributed to the '6 month delay' in the 7nm rollout. While I don't have nearly as much solid information to go on as with 10nm, I want to note a few things. The exact words Bob Swan used in the Q2 call were: 'we are seeing a 6 month shift in 7nm… 12 months behind our internal target… we have identified a defect mode that resulted in yield degradation'.

There's quite a lot to break down here. Many people, including analysts on the call, were confused as to how 7nm could be both 6 and 12 months behind target at the same time. Have Intel achieved quantum tunnelling of time? The truth is that Bob's explanation of a 'buffer in the planning process', while technically true, is incredibly misleading. In any typical bring-up of a new process node, you spend the first few months getting up to speed: running the foundry through the whole flow, troubleshooting, and sending early chips to OEM partners as prototypes to design products around. You don't sell those chips to anyone. Industry convention is to treat this as the post-tape-out ramp, not the launch of a new process – the launch is when you produce chips that you actually sell to people. Translated, Bob's comment means the process is delayed by 12 months, but Intel are going to break with convention and 'launch' 7nm when the first fabs start spinning up, 6 months before they have chips in any volume. Sound ridiculous? Well, Intel did exactly the same thing with 10nm. Faced with mounting pressure over the constant delays, Intel 'launched' Cannon Lake in May 2018. There was one CPU in the whole generation: a dual-core processor clocked at 2.2GHz that was slower than the i3-3250 released in 2013 for $20 less than the 10nm part. Not to mention it was nigh on impossible to actually buy one. [9] Cannon Lake was an obvious paper launch, released to appease investors at a time when Intel had only just started up their fabs. Ice Lake, the first 10nm architecture you could actually buy (in limited quantities), shipped in September 2019, more than a year after Cannon Lake 'launched'. This '6-month' delay is nothing more than an attempt to sugarcoat a 12-month delay (assuming no further delays).


The second part of the comment, relating to a 'defect mode', is just as interesting as the first. Intel are attempting to use GAAFET technology for their 7nm process, though there's conflicting information suggesting they might move away from it if it proves too difficult. [10] GAAFET, or gate-all-around field-effect transistor, is a new and unproven transistor technology that should overcome the difficulties current transistor designs face at ever smaller sizes. Unlike a normal process shrink, this is a move to a completely new type of transistor, and we have only one comparable event in history – the transition to 3D FinFET technology a few years ago. With FinFET, the journey from a 'working prototype' demonstrating commercialisation potential to actual production took 8 years. [11] Meanwhile, the equivalent demonstration for GAAFET took place only 3 years ago. [12]

While FinFET and GAAFET are different beasts, it is undeniable that the plans from Intel, and indeed all the other foundries, are incredibly ambitious. The latest leaks suggest that the 'defect mode' Intel have run into relates to their GAAFET implementation. If that is true, you could easily see 7nm being just as much of a disaster as 10nm.


Beyond 7nm, there are some positives to be found. As transistors get even smaller, it will be necessary to combine EUV with multi-patterning, and Intel's hard-won experience with multi-patterning on 10nm may well give them an advantage over competitors there. At the same time, they are actively exploring chiplet-based designs. They might have been late in realising the benefits, but they've finally come around with their EMIB, Foveros and big.LITTLE-style hybrid technologies, all of which I'll explore in a future blog post.

 

Conclusion

I’ll leave it to you to decide what the financial implications of these deductions are for Intel, but suffice it to say the baseline scenario is far worse than what many people envision. There is no doubt that Intel will recover from this fiasco, but at what cost? Will it require yet another management reshuffle? Following in the footsteps of AMD, outsourcing production fully and writing off its own fabs? Acknowledgement that they will no longer be able to extract incredible margins from their monopolistic position?




 


References

