Intel's "five nodes in four years" plan is extremely aggressive and ambitious. They are trying to pull off something that I believe is not feasible - too many moving parts in parallel.
I worked in a fab for a year and the complexity is mind-blowing. I don't see how they can execute on building those nodes and get the yields under control in such a short timeframe.
Yes. I think they can get the node out with a little bit of delay. But I don't think they'll be very profitable for a while.
However, they do have good timing because the world is about to enter a period where chip designers will be desperate to get new AI chips manufactured.
There is only one cutting-edge fab operator, which is TSMC. Chip designers desperately want a second supplier to create some competition.
I think 20A/18A will only be used in products customers can buy in mid-2025 - not late 2024 like they said.
You can see here that Intel has already delayed everything by 1-2 quarters from their 2022 roadmap:
There is an AI chip frenzy for sure. However, consumers aren't buying those chips - data centers are - the ones owned by Microsoft, Google, Facebook, Tesla and Apple. I therefore question the sustainability.
What happens if AI is a bubble and those companies don't get an ROI? What if they can't find a way to monetize AI because consumers just see it as a commodity? What if after ramp up AI chip purchase dies down because models are already trained and inference doesn't need that much compute?
AI is going to drive huge demand for new silicon, mostly for inference but also for training. Every device will need new processors optimized for running ML. For example, once new iPhones integrate LLMs directly on-device, you'll need to keep buying new iPhones because Apple will drastically increase the size of the Neural Engine every generation.
AI is going to cause a huge uplift in chip demand from clouds to small devices.
Hmm. For iPhones and Android phones, Apple and Samsung already have that covered. Their AI chips already exist, and production must already be planned for the coming years. NVIDIA isn't going to contribute there.
I've heard predictions of a surge in AI PC sales, but I don't buy it. That remains to be proven.
So we are left with data centers. Isn't that right?
They would need to drastically increase the size of the NPU every year to keep up. We are a long way from running GPT-4 on an iPhone.
Every chip will eventually need to be replaced with one that can do inference for a massive AI model. And we will need more chips to put in more places.
Best of luck to them.