Intel's Lynnfield: More Than a CPU
Written by Michael Schuette   
Aug 29, 2009 at 09:00 PM


One thing generally underappreciated by the press and analyst community is the effort that goes into the release of a new product, and by “new product” I mean exactly that, not just a re-spin or re-badge of an existing one. As long as we can talk bad about it …

Seriously, it is hard to imagine the amount of work that goes into a new piece of silicon; the man-hours run into the hundreds of thousands, even if existing libraries of design modules are reused, and when the first wafers come off the line, there is that brief moment of anxiety over whether everything works, followed by about five minutes of celebration. And then it’s back to business as usual. This is just something that needed to be said, a small tribute to everybody (including Dan) who has been involved in getting modern CPUs to where they are now.

Today, it is Intel’s turn to come out with something new and exciting: a brand new family of CPUs called Lynnfield, bringing a completely new system architecture to the table and hopefully doing away once and for all with the legacy of the front side bus, even if that term has been a misnomer for over ten years now.

The AGTL Host Bus a.k.a. Front Side Bus Legacy

Let’s start with this "host" bus because it is the key to understanding the improvements that set Lynnfield apart from its predecessors. With the Pentium II, Intel introduced the FSB, a bi-directional 64-bit bus interconnecting the CPU with the system’s core logic a.k.a. chipset. Over the years, this bus evolved to run at quad data rate to cope with the CPU’s need for data from system memory. Quad data rate or not, the primary limitation of the FSB is that it can only transfer data in one direction at a time, which creates a major bottleneck for any access to system memory, since the memory controller is part of the NorthBridge. These traffic collisions hurt particularly badly when the cache is dirty and data have to be written back from the CPU to system memory before any subsequent access can proceed. Other problems, particularly with high-speed peripherals, are caused by snoop delays, since a bus master has to snoop the CPU caches before it can execute a direct memory access (DMA).
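To put a rough number on that shared bottleneck, the little sketch below estimates peak FSB bandwidth; the 1333 MT/s quad-pumped figure is just an illustrative late-model FSB speed of my own choosing, not a value taken from this article.

# Back-of-the-envelope FSB bandwidth estimate (illustrative numbers only).
bus_width_bytes = 8          # 64-bit data bus
transfer_rate_mt_s = 1333    # quad-pumped 333 MHz base clock, e.g. "FSB 1333" (assumed example)

peak_gb_s = bus_width_bytes * transfer_rate_mt_s / 1000   # roughly 10.7 GB/s

# The catch: this peak is shared between reads and writes because the bus is
# half-duplex, so a dirty-cache writeback stalls the very read that triggered
# it instead of overlapping with it.
print(f"Peak FSB bandwidth: {peak_gb_s:.1f} GB/s, one direction at a time")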

Reprise: Intel's X58 (Nehalem) Platform

AMD, with HyperTransport, was the first to commercialize a full-duplex system interface and to integrate the memory controller into the CPU. Intel followed suit with its Nehalem architecture. Since this article covers Intel's new CPU, we will set AMD's architecture aside for the moment and concentrate on the X58 platform and Nehalem as a primer for things to come.

The two important details of the X58 platform layout are the triple-channel memory interface hanging off the "uncore" as part of the CPU, and the QuickPath Interconnect (QPI) bus serving as the system interconnect. The obvious question has to be why Intel even bothered with a QPI interface that branches out to the PCIe infrastructure only after hitting a second logic building block, i.e. the X58 IO Hub. A more efficient solution would integrate the highest-bandwidth links (PCIe x16 or x8) directly into the uncore and then simplify the platform by supporting low-bandwidth peripherals over a DMI interface between the uncore and the rest of the system.
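For a sense of scale, here are approximate peak figures for a top-bin Bloomfield/X58 configuration; the numbers are rounded, vary by SKU, and are my own illustration rather than anything quoted in the article.

# Rough peak-bandwidth figures for the X58/Nehalem layout (illustrative, top-bin parts).
ddr3_1066_per_channel_gb_s = 8.53                # one DDR3-1066 channel
memory_bw_gb_s = 3 * ddr3_1066_per_channel_gb_s  # triple channel on the uncore: ~25.6 GB/s

qpi_per_direction_gb_s = 12.8     # 6.4 GT/s QPI link, 2 bytes of payload per transfer
pcie2_x16_per_direction_gb_s = 8  # PCIe 2.0 x16 graphics slot

# Every byte a PCIe device moves to or from memory crosses QPI and the X58 IOH
# before it reaches the memory controller on the CPU.
print(f"Memory bandwidth at the uncore: {memory_bw_gb_s:.1f} GB/s")
print(f"QPI link to the IOH:            {qpi_per_direction_gb_s:.1f} GB/s per direction")
print(f"PCIe 2.0 x16 behind the IOH:    {pcie2_x16_per_direction_gb_s:.1f} GB/s per direction")

The point is not that QPI lacks the throughput for a single x16 slot; it is that every PCIe transaction takes an extra hop through the IOH, which is exactly the detour an integrated PCIe controller on the uncore would remove.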


