Core i7 Power Plays
Written by Michael Schuette   
Dec 04, 2008 at 07:36 PM

In our initial article about the Core i7, codename Nehalem, we were stunned by the power efficiency of Intel’s new CPU, particularly since, as we stated, the measured processor power consumption appeared to include that of the memory controller – a system-level saving somewhere on the order of 15-20W under load. In the course of numerous discussions, however, it became obvious that the numbers we measured did not quite add up to the thermal load. After the embargo on the Core i7 was lifted, data sheets became available that proved our assumption wrong: the memory controller was NOT part of the power we measured through the VRMs. At the same time, CanardPC and several other websites such as HardTecs4U posted additional information regarding the overall power configuration of the Nehalem CPU, which differs somewhat from what we have come to know from past CPUs offered by Intel or AMD.

CPU Power Primer

As a primer, a quick recap of how modern CPUs are usually powered: With the Pentium processor, Intel introduced the split-voltage design, that is, a separation between core and I/O voltage (Vcore and Vi/o), both supplied through the AT or ATX power connector on the motherboard using 3.3V, 5V or 12V as the initial voltage. Higher CPU core frequencies and transistor counts necessarily increased the power consumption and thermal dissipation of the CPUs and created a need for additional power. The actual power delivered to the CPU can be calculated as the product of Volts * Amperes, with the latter limited by the gauge of the wiring and the Molex connector; the transition to 12V as the main supply source was therefore a logical choice. Starting with the Pentium 4, the CPU core power supply was uncoupled from the rest of the motherboard power circuitry and relegated to a dedicated 4-pin auxiliary power connector added to the ATX 2.03 specification. With two 12V wires, each carrying a 5A maximum load, this auxiliary power circuitry could support up to 120W going into the voltage regulator module feeding the CPU, which - at an estimated efficiency of roughly 75% - limited the actual CPU power to some 90W.
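The arithmetic above can be sketched in a few lines (a minimal illustration; the function names are ours, and the 75% VRM efficiency is the rough estimate quoted in the text, not a measured figure):

```python
def connector_limit_watts(volts, amps_per_wire, wires):
    """Maximum power the connector can deliver into the VRM: P = V * I."""
    return volts * amps_per_wire * wires

def cpu_power_limit(input_watts, vrm_efficiency):
    """Power actually available to the CPU after VRM conversion losses."""
    return input_watts * vrm_efficiency

# Two 12V wires at 5A each on the 4-pin ATX auxiliary connector
into_vrm = connector_limit_watts(volts=12.0, amps_per_wire=5.0, wires=2)

# Roughly 75% VRM efficiency, as estimated in the text
at_cpu = cpu_power_limit(into_vrm, vrm_efficiency=0.75)

print(f"{into_vrm:.0f} W into the VRM")         # prints "120 W into the VRM"
print(f"{at_cpu:.0f} W available to the CPU")   # prints "90 W available to the CPU"
```

The same two functions make it easy to see why later connector revisions (more 12V wires, higher per-wire current) were needed as CPU power budgets grew.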

TDP: From Design Guide to Marketing Hype

The actual power draw of the CPU became a design specification called TDP, which, depending on whose numbers were posted, stood for typical design power or thermal design power. Semantics aside, in the single-core processor environment dominating at the time, the TDP was usually considered the absolute maximum power consumption that any CPU could face under worst-case conditions. Suffice it to say that in an overwhelming number of cases, it was not possible to even get close to these numbers using commercially available software. In short, there were two reasons why a TDP rating was created in the first place: the tendency to cut corners on the motherboard (remember those dreadful single-phase VRMs used by MSI on a number of boards?) and, likewise, on OEM heatsink solutions. In other words, as soon as there was a standard, the entire infrastructure could be tested and approved against it, with the major benefit of improved motherboard and heatsink designs in the PC space.

The big change came with the increased awareness of global warming. Suddenly, what was originally conceived to force third-party manufacturers to build some headroom into their designs became a negative attribute. In short, the conventional wisdom did not differentiate between the maximum power consumption under worst-case conditions and the typical power consumption. Hence, a CPU labeled with a high TDP, many times set high to force enthusiast-segment motherboard manufacturers to provide the overhead necessary for even some insane overclocking, was branded a power hog, even if under normal operating conditions a power consumption anywhere close to the TDP could never be reached. At this point, TDP became a marketing tool; the lower, the better.

A Lack of Industry Standard

We need to consider a few things in this context: First, because of this historical legacy, there is no industry standard for how TDP should be defined or measured; Intel and AMD have used completely different approaches with varying overhead. Each company used to have its own secret recipe in the form of specialized software to generate a maximum load beyond anything reachable with commercially available software, in order to arrive at maximum thermal dissipation requirements. Please pay attention to the term “commercially available”.

Inclusion and (non-)Exclusion

Second, aside from the different load scenarios reachable, the next question is what should be included in the TDP. The short answer is: every bit of circuitry that runs on the CPU, including the execution units of the core(s), the caches, the clocks or phase-locked loops (PLLs), the delay-locked loops (DLLs) and the bus interface including the I/O section. Each component draws power and dissipates heat and, therefore, all of them need to be taken into consideration for thermal load and power density / power consumption.
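The bookkeeping implied above is simply a sum over every on-die block. As a purely illustrative sketch, the component names follow the list in the text, but the wattages are made-up placeholders, not figures for any real CPU:

```python
# Hypothetical per-component power draw in watts (placeholder values only)
component_power_watts = {
    "execution_cores":  65.0,  # placeholder
    "caches":           10.0,  # placeholder
    "clocks_pll_dll":    3.0,  # placeholder
    "bus_interface_io":  7.0,  # placeholder
}

# Every block draws power and dissipates heat, so the thermal budget
# must account for all of them, not just the execution cores.
total_dissipation = sum(component_power_watts.values())
print(f"{total_dissipation:.0f} W total")  # prints "85 W total"
```

The point of the sketch is only that omitting any one entry (say, the I/O section) understates the thermal load the cooling solution must handle.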

Last Updated ( Jan 23, 2009 at 02:59 AM )