# Someone explain about PSU amps..



## Enigma8750

Your statement reminds me of when I was installing car stereos for my buddies in the eighties. They used to buy those 120 watt RMS amps, which were usually the biggest pieces of trash in the heap. Then I saw a friend's McIntosh audio amp and asked him what wattage it was. He told me he had paid over $1,500 for it, and it was 20 watts.
I thought the man was crazy. Then he turned it up to 3 watts and my ears were about to bleed.
It's all sales crap.


----------



## Show4Pro

If you were measuring the resistance of the components in your PC with a multimeter, then you're completely wrong. A GPU can actually use 200-300 watts of power, which corresponds to roughly 15-25 amps at 12 V, not milliamps. That's part of the reason it gets so damn hot. The effective resistance of the power input changes when it's operating. If you don't believe me, cut the 12 volt line to your video card and splice in your multimeter for a current measurement; I guarantee it's not going to be in the milliamp range. So when a power supply company rates their power supply to handle a 20 amp draw on 12 volts, they don't mean 20 milliamps. That would be less power than a friggin watch battery.


----------



## DuckieHo

Quote:


Originally Posted by *Artas1984* 
I really don't get all the BS about recommended "amps" written on power supplies, because it makes no sense.

The thing is that the real amperes on the standard DC 12 V, 5 V, and 3.3 V rails are really low, because of the very high resistance of the PSU. Inside devices like the VGA or CPU, it gets even bigger. For example, a GPU needs a 1.4 V input; before the current reaches the GPU core, powerful resistors block the way with resistances of 600 ohms or more.. That obviously means the amps on the current leading to the GPU core are very low, ~1.4/600 A. So what is this BS with 20+ A, when the real amperes on the rails, based on the resistance of the PSU, are MILLIAMPERES!!

Somebody better explain this, because it pisses me off..

What I want to say is, for example, it's written on my PSU "22 A" on the DC output.
Yeah right.. If it had been 22 A, the PSU would be fried! Based on logic, it's more like 22 mA..
So once again, WTH?

Amperes are amperes. There should not be any BS regarding them if you know how to read a label.

Ampere rating has nothing to do with resistance. Also, PSU wiring has very low resistance, just like any other copper wire.

No video card ever needs a 1.4v input from the PSU. However, the GPU chip itself does run at voltages around 1.4v.

I think you are missing the fact that there are components which stepdown the voltage.

Here's how things work (simplified):
Electric power equation (a companion to Ohm's Law): Power (W) = Voltage (V) * Current (A)

The PSU is capable of supplying 18A@12v (216w) to the video card. Say the video card needs 100w to run; it therefore draws 8.3A (100w/12v) from the PSU. With wire resistance and voltage fluctuations, the video card actually gets about 9A@11.8v (good enough). The video card has onboard circuitry that converts this power to what is needed. For example, it takes the 9A@11.8v (106.2w) and converts part of it down to something like 10A@2.0v (20w) to power the memory. Then it takes the remaining 7.3A@11.8v and converts it to roughly 61.6A@1.4v to power the GPU.
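The arithmetic above can be sketched in a few lines. The numbers here are illustrative, assuming an 18A 12v rail, a 100w card, a sagged delivery of 9A at 11.8v, and a 20w memory slice, with conversion losses ignored:

```python
# Power bookkeeping for a hypothetical video card (P = V * I throughout).

def current(power_w, voltage_v):
    """I = P / V, rearranged from P = V * I."""
    return power_w / voltage_v

psu_capacity_w = 18 * 12.0        # PSU can supply 18A at 12V -> 216W
card_draw_a = current(100, 12.0)  # a 100W card draws ~8.3A from the 12V rail

# Onboard converters split the incoming power (losses ignored):
incoming_w = 9 * 11.8             # ~106.2W actually delivered after voltage sag
memory_w = 20                     # slice converted down for the memory
gpu_w = incoming_w - memory_w     # remainder converted down for the GPU core
gpu_current_a = current(gpu_w, 1.4)  # low voltage -> large current (~61.6A)

print(f"{card_draw_a:.1f}A from PSU, {gpu_current_a:.1f}A into the GPU core")
```

Note how the same ~86 watts that arrives as 7-ish amps at 11.8v becomes 60+ amps at 1.4v: power stays (roughly) constant through a converter, so current scales up as voltage scales down.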

Another example: your CPU. It draws power from only the +12v rail of your PSU, but it needs that voltage converted down to around +1.3v. The conversion is done by the motherboard's PWM-controlled voltage regulators (the VRM).

The reason for all this converting is to simplify design and improve efficiency. Imagine a PSU with 10 different voltage outputs, which would still have to be adjusted for each CPU anyway.

Read this article for more info on the history of PSUs: http://www.playtool.com/pages/psurailhistory/rails.html

Quote:

As technology improved, the transistors in the chips continued to shrink and they needed to run off of voltages lower than 3.3. It just wasn't practical to continue to run all the chips directly off of a voltage provided by the PSU because they would have to add more and more lower voltage rails as time passed. On top of that, you had to deal with CPUs which needed different voltages depending on which CPU was plugged into the motherboard. They temporarily avoided the problem by providing motherboard voltage regulators which dropped 5 or 3.3 volts down to a lower voltage by discarding the extra voltage as heat. As power requirements increased, that solution quickly became impractical.

That's when the PC's power distribution fundamentally changed. The older PCs powered their chips by connecting them directly to voltage rails provided by the PSU. But the newer PCs started putting DC/DC converters onto the motherboard which took a voltage provided by the PSU and efficiently converted it into the lower voltage needed by the chips.

Many of the early DC/DC converters converted 5 volts into the lower voltage. Presumably this was because the power supplies of the time delivered most of their power on 5 volts. But converting 12 volts instead of 5 volts makes the wiring much simpler because a higher voltage delivers the same amount of power by using a smaller current. Smaller current lets you use fewer wires and connectors to deliver the same power. Power distribution is much easier at higher voltages.

The highest voltage provided by a PC PSU is 12 volts, so that became the most common input voltage used by the biggest DC/DC converters. A modern CPU has its own converter on the motherboard which converts 12 volts to whatever voltage the CPU requires. Modern video cards also have their own converters on the card which convert 12 volts into the desired voltages. The CPU and video card tend to be the biggest consumers of power when fully loaded, so a modern PSU has to provide most of its power at 12 volts.

So in the old days you had a bunch of chips directly connected to 3.3 or 5 volts, and that's where a PSU provided most of its wattage. But in a new computer the PSU provides most of its power at 12 volts, and then various DC/DC converters throughout the computer convert it to whatever voltage is needed by that particular set of chips. The table below is a more modern 480 watt PSU. The maximum power available on 3.3 and 5 volts has increased a little, but the bulk of the expanded wattage is provided on the 12 volt rail.


----------



## Asus Mobile

Thanks Duckie, even your simple explanation took a little thought to wrap my head around. I got stuck thinking of amps as a volume, more fixed than volts, but I got past it after some thought. I knew voltage could be stepped up because of "stun guns". While you're here, could you tell us what component increases voltage, and what lowers voltage but increases amps? As a non-electrical person I just want to know, and Wiki isn't easily researchable when one doesn't have the background. I know it's a little off topic, but it's exactly part of the question. Thanks in advance.


----------



## Artas1984

Quote:


Originally Posted by *DuckieHo* 
Ohm Law: Power(w) = Voltage(v) * current(A)

Looks like I forgot that simple thing! Perhaps that's because in our world, where the mains voltage is 220 V, a simple 100 W light bulb needs less than 0.5 A, and to think that such little devices inside a PC need a current of 20+ A.. Man.

But it's BS anyway. If the HD 2900 XT draws about 200 W under load, it doesn't need more than 20 A on the 12 V rail, so why would such a powerful PSU be recommended anyway?
I think it's BS advertising...


----------



## DuckieHo

Quote:


Originally Posted by *Asus Mobile* 
Thanks Duckie, even your simple explanation took a little thought to wrap my head around. I got stuck thinking of amps as a volume, more fixed than volts, but I got past it after some thought. I knew voltage could be stepped up because of "stun guns". While you're here, could you tell us what component increases voltage, and what lowers voltage but increases amps? As a non-electrical person I just want to know, and Wiki isn't easily researchable when one doesn't have the background. I know it's a little off topic, but it's exactly part of the question. Thanks in advance.

Just remember conservation of energy and Ohm's Law.

There are a few electrical circuits that can change voltage:
http://en.wikipedia.org/wiki/Voltage_regulator_module
http://en.wikipedia.org/wiki/Buck_converter
http://en.wikipedia.org/wiki/Transformer
http://en.wikipedia.org/wiki/Voltage_divider
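Of the circuits above, the buck converter is the usual answer to "what lowers voltage but increases amps" in a PC. A minimal idealized sketch (the 0.1083 duty cycle and 100w load are made-up figures; real converters have switching losses):

```python
# Ideal buck converter: output voltage is set by the duty cycle D
# (fraction of time the high-side switch is on). An ideal converter
# conserves power, so current scales up as voltage scales down.

def buck_ideal(v_in, duty, p_load):
    v_out = v_in * duty       # V_out = D * V_in (ideal, continuous conduction)
    i_in = p_load / v_in      # modest current drawn from the 12V rail
    i_out = p_load / v_out    # much larger current at the low output voltage
    return v_out, i_in, i_out

# Step 12V down to ~1.3V for a hypothetical 100W load:
v_out, i_in, i_out = buck_ideal(12.0, 0.1083, 100.0)
print(f"{v_out:.2f}V out, {i_in:.2f}A in, {i_out:.1f}A out")
```

A transformer does the same trade for AC (as in a stun gun, in the step-up direction); a buck converter is essentially the DC equivalent for stepping down.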

Quote:


Originally Posted by *Artas1984* 
Looks like I forgot that simple thing! Perhaps that's because in our world, where the mains voltage is 220 V, a simple 100 W light bulb needs less than 0.5 A, and to think that such little devices inside a PC need a current of 20+ A.. Man.

But it's BS anyway. If the HD 2900 XT draws about 200 W under load, it doesn't need more than 20 A on the 12 V rail, so why would such a powerful PSU be recommended anyway?
I think it's BS advertising...

The card itself needs 200w... but what about all the other components? CPUs need 50-200w. Hard drives need about 20w at spin-up. Chipsets need 10-60w.

In addition, you generally don't want to run your PSU at 100% load; it should run at about 40-90%. Many people also want some leeway for future upgrades. On top of that, cheaper PSUs exaggerate their specs.
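The sizing logic above can be put into numbers. This is a rough sketch; the per-component wattages are the illustrative ranges from the post, not measurements, and the 80% target is one reasonable point inside the 40-90% band:

```python
# Rough PSU sizing: sum worst-case loads, then add headroom so the PSU
# runs well inside its comfortable load band even at peak draw.

loads_w = {
    "video card": 200,    # e.g. an HD 2900 XT class card under load
    "CPU": 130,           # somewhere in the 50-200W range
    "chipset": 40,        # somewhere in the 10-60W range
    "drives/fans": 40,    # spin-up surges, fans, misc.
}

total_w = sum(loads_w.values())
recommended_w = total_w / 0.8   # aim for ~80% PSU load at worst case

print(f"estimated draw: {total_w}W, recommended PSU: ~{recommended_w:.0f}W")
```

So a system whose card "only" needs 200w can still sensibly call for a 500w-class unit once the rest of the machine and the headroom are counted, before even allowing for exaggerated labels.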


----------



## grishkathefool

I am wondering about the implication of Ohm's law on chip heat. If you lower the voltage, then the chip will want more amperes to meet its wattage.

So, if a 65w chip normally runs at 1.2v, then it draws 54.2 amps; if you lower that to 1.152v, for instance, then won't it want to draw 56.4 amps? If it does, does it really matter?

On a nanoscopic scale, 2 amps is a big deal, I think? I don't really know; although I am an Electrician, I am not an Engineer.

Keeping in mind that an amp is the amount of current needed to raise one cubic centimeter of water one degree Celsius.... and we are talking about a scale much tinier than a cm3.

So, what I am getting at is this: how come when I lower my voltage, my Vcore temp drops? Shouldn't it rise?

BTW, the converse should be true, too: if you raise volts, then amps diminish, and thus you'd have less heat... yet when you raise your Vcore, the temp rises...

Are there any Engineers who can illuminate this for me?


----------



## Ionimplant

Hello Grishkathefool,

You ask a good question. There are more variables than what you have described. I will try to simplify the answer but, even simplified, it is quite involved.

Assume the following:
-The chip is CMOS and there is zero (0) shoot-through current. That means all current goes to charging and discharging the capacitance at every internal node of the chip. This is called "displacement current." This is a pretty good assumption.

-There are no other "continuous" current paths. Examples might be current in on-board regulators or PLLs--analog stuff.

-Every internal node is toggling at the same frequency.

Given these assumptions then the average current supplied to the chip is:
I = C*V*F
where:
C = total capacitance on the chip (summing up the capacitance at the output of every gate on the chip)
V = supply voltage
F = frequency at which all of the gates are toggling.

This last term, F, is a real SWAG because not every gate switches at the onboard clock frequency. Often we de-rate it by some factor for a better approximation, but the real number is very hard to arrive at even by simulation (given the many variables).

Regardless, the CVF equation provides very intuitive insight into the question you ask.

If, in your question, C is constant and F is constant, then as you reduce V, the current will go down.

You made a blunder when you decided to keep "wattage" constant. Wattage is the "dependent variable" (to use algebraic terminology!); it is not constant. Power is C*V^2*F, which is derived from P = I*V (substituting C*V*F for I). So when V goes down, I goes down, and as a result P goes down.

When P goes down, temperature goes down!

Voila!

Understanding this simple formula helps explain why semiconductors are pushed to finer and finer linewidths (130nm -> 90nm -> 65nm -> on and on). As the geometry shrinks, C goes down. As C goes down (holding F constant), I goes down and thus overall power goes down. Now, if a given package technology can dissipate XX Watts of power, then we can increase F as C goes down, holding the power constant but getting greater speed. Reducing linewidth does require a reduction in V, which slows things down a bit.

This is an oversimplification but it is useful for a qualitative understanding.
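The CVF reasoning can be checked numerically. The total switched capacitance and frequency below are made-up figures chosen to land near a 65w-class chip; the point is the scaling, not the absolute numbers:

```python
# Dynamic CMOS power: P = C * V^2 * F, where C is total switched
# capacitance, V the supply voltage, and F the toggle frequency.

def dynamic_power(c_farads, v_volts, f_hz):
    return c_farads * v_volts**2 * f_hz

C = 20e-9      # 20 nF total switched capacitance (illustrative)
F = 3.0e9      # 3 GHz

p_stock = dynamic_power(C, 1.20, F)       # stock voltage
p_undervolt = dynamic_power(C, 1.152, F)  # same C and F, 4% lower V

# A 4% drop in V cuts power by ~7.8% (ratio (1.152/1.20)^2 = 0.9216):
# current drops along with voltage, it does not rise to hold wattage.
print(f"{p_stock:.1f}W -> {p_undervolt:.1f}W")
```

This is exactly the resolution of the original question: since power scales with V squared at fixed C and F, undervolting lowers both current and heat.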

Regarding your understanding of an "amp." I think you crossed some wires with the definition of "calorie" although you are not quite right on that either. An Ampere is 1 coulomb per second flowing in a conductor. A coulomb is a measure of charge. An electron has 1.602 * 10^-19 coulombs of charge.

I hope this helps!!


----------



## grishkathefool

Thanks, that did help. Unlike the big world, where wattage is fixed by the function of a device, a CPU has its own set of rules.

I thought that a coulomb was 6.02... x 10^18 electrons, i.e., a unit of quantity, thus making an amp a unit of flow. (On a side note, I always thought it was interesting, the quantitative similarity between a coulomb and a mole.)

Again, thanks for the explanation, though, of the mystery of heat and voltage on my CPU...


----------



## Ionimplant

Since q(e) = 1.602 x 10^-19 coulombs/electron,
flip it over to get electrons per coulomb: 1/q(e) = 6.24 x 10^18 electrons/coulomb.
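The reciprocal is easy to confirm, and it ties back to the thread's topic since an ampere is one coulomb per second:

```python
# Charge per electron vs electrons per coulomb are reciprocals.

q_e = 1.602e-19                  # coulombs per electron
electrons_per_coulomb = 1 / q_e  # ~6.24e18

# One ampere = one coulomb per second, so a 1A current moves
# ~6.24e18 electrons past a point every second; a PSU's 20A rail
# moves twenty times that.
print(f"{electrons_per_coulomb:.3e} electrons per coulomb")
```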


----------



## grishkathefool

Hence the reason that I am an Electrician and not an Engineer.


----------

