Don't listen to Vincent; your CPU will always be waiting no matter how much you spend.
What? That's not what I said and it's not true. It's just that a slightly slower model, plus spending the saved money on faster RAM, can mean better performance on non-repetitive OS tasks (for the rest there are caching mechanisms all over the place).
16 GB is also a waste.
90s: "Sir... A ten MEGABYTE hard drive! You'll never fill it up in a lifetime!" And also: software caching.
Credentials: I upgrade and overclock biannually
I do technical computing (embedded systems) and learn shit like CPU design, OS architecture and assembly stuff. LOL. I know what I'm talking about... Besides: benchmarks are fuck all when it comes to the overall complexity of modern OS design, and there's a lot more to a responsive system and overall performance than just linear (albeit parallel) tasks.
PS: So as not to come across as arrogant, I'll give you an example: Battlefield 3. That shit's extremely GPU intensive, right? Wrong! Although GPUs these days are faster than a console's when you program them directly (nearly bypassing the drivers), there are like ten layers of abstraction to plow through. So how does that work? Remember Rage, for example. It had shitty performance, right? Wrong! Graphics API drivers (like OpenGL and DirectX) are so shittily programmed that a per-pixel texture lookup actually does *load entire tile -> decompress -> look up pixel (not in a blitting way) -> return pixel value -> process*. That's all across the board.
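To make that concrete, here's a toy C++ sketch. The tile format, sizes and function names are all made up by me (this is not any real driver's code); it just shows the difference between the per-pixel path I described and a blit-style path that decompresses the tile once:

    // toy_tile_lookup.cpp - illustrative only; the tile format and costs are invented.
    #include <cstdint>
    #include <cstdio>
    #include <vector>

    constexpr int TILE = 64;                        // 64x64 texel tile (assumption)

    struct CompressedTile {
        std::vector<uint8_t> bytes;                 // pretend-compressed texel data
    };

    // Pretend decompression: in a real driver this is the expensive, cache-hostile part.
    static std::vector<uint32_t> decompress(const CompressedTile& t) {
        std::vector<uint32_t> texels(TILE * TILE);
        for (int i = 0; i < TILE * TILE; ++i)
            texels[i] = t.bytes[i % t.bytes.size()] * 0x01010101u;
        return texels;
    }

    // Naive per-pixel path: load tile -> decompress -> look up ONE pixel -> throw it all away.
    static uint32_t lookupPerPixel(const CompressedTile& t, int x, int y) {
        std::vector<uint32_t> texels = decompress(t);   // whole tile, every single call
        return texels[y * TILE + x];
    }

    // Blit-style path: decompress once, then read as many pixels as you want.
    static void blitTile(const CompressedTile& t, std::vector<uint32_t>& out) {
        out = decompress(t);                            // whole tile, once
    }

    int main() {
        CompressedTile tile{std::vector<uint8_t>(512, 0xAB)};

        // Per-pixel: TILE*TILE lookups => TILE*TILE full decompressions.
        uint64_t sumA = 0;
        for (int y = 0; y < TILE; ++y)
            for (int x = 0; x < TILE; ++x)
                sumA += lookupPerPixel(tile, x, y);

        // Blit: one decompression, then plain array reads.
        std::vector<uint32_t> texels;
        blitTile(tile, texels);
        uint64_t sumB = 0;
        for (uint32_t v : texels) sumB += v;

        std::printf("per-pixel sum=%llu, blit sum=%llu\n",
                    (unsigned long long)sumA, (unsigned long long)sumB);
    }

Same result either way, but the per-pixel path burns TILE*TILE full decompressions, and all of that is CPU time and RAM traffic.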
So how do modern GPU API drivers work? Well, there is only one driver out there that really talks to the GPU. What it gets fed is an intermediate representation layer: an abstract descriptive language that the CPU is constantly JIT compiling and sending to the actual GPU. All the other drivers like CUDA, OpenCL, OpenGL, OpenGL ES, Direct3D (and so forth) shit out API-to-IR (Intermediate Representation) abstraction language that is then JIT compiled by the CPU.
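Very roughly, and with a completely invented IR and "compile" step (real driver stacks are far messier than this), here's where that CPU work lands:

    // toy_ir_jit.cpp - invented IR, just to show the CPU-side churn between API and GPU.
    #include <cstdint>
    #include <cstdio>
    #include <vector>

    // The API layer (GL/D3D/CL/...) emits commands in an abstract intermediate representation.
    enum class IrOp : uint8_t { BindTexture, SetUniform, Draw };

    struct IrCmd {
        IrOp     op;
        uint32_t arg;
    };

    // The CPU-side "JIT": translate IR into the packet format the hardware actually eats.
    // Every API call you make ends up being chewed through something like this loop.
    static std::vector<uint32_t> jitCompile(const std::vector<IrCmd>& ir) {
        std::vector<uint32_t> gpuPackets;
        gpuPackets.reserve(ir.size() * 2);
        for (const IrCmd& c : ir) {
            gpuPackets.push_back(static_cast<uint32_t>(c.op) << 24);  // fake opcode encoding
            gpuPackets.push_back(c.arg);                              // fake payload
        }
        return gpuPackets;
    }

    int main() {
        // What one "simple" draw call might fan out into at the IR level (made-up numbers).
        std::vector<IrCmd> ir = {
            {IrOp::BindTexture, 7},
            {IrOp::SetUniform,  42},
            {IrOp::Draw,        3 * 1024},   // index count
        };

        std::vector<uint32_t> packets = jitCompile(ir);   // pure CPU + RAM work
        std::printf("%zu IR commands -> %zu GPU packet words\n", ir.size(), packets.size());
    }

None of that loop is GPU work; it's the CPU walking data structures in RAM before the GPU even hears about your draw call.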
Guess how much this hurts when CPU caching can't keep up? Guess what determines the performance? Mostly RAM. But we're not out of the woods yet.
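(Quick detour: if you want to feel what "caching can't keep up" costs, here's a quick-and-dirty toy. Nothing scientific; the exact numbers are whatever your machine gives you. The sequential walk stays cache- and prefetcher-friendly, the shuffled walk pays RAM latency on nearly every access:)

    // cache_vs_ram.cpp - crude demo, not a rigorous benchmark.
    #include <algorithm>
    #include <chrono>
    #include <cstdint>
    #include <cstdio>
    #include <numeric>
    #include <random>
    #include <utility>
    #include <vector>

    int main() {
        const size_t N = 1 << 25;                  // ~32M elements (~128 MB), well past L3
        std::vector<uint32_t> data(N, 1);

        // Two visit orders: sequential (cache friendly) vs shuffled (RAM latency bound).
        std::vector<uint32_t> idx(N);
        std::iota(idx.begin(), idx.end(), 0u);
        std::vector<uint32_t> shuffled = idx;
        std::shuffle(shuffled.begin(), shuffled.end(), std::mt19937{123});

        auto walk = [&](const std::vector<uint32_t>& order) {
            uint64_t sum = 0;
            auto t0 = std::chrono::steady_clock::now();
            for (uint32_t i : order) sum += data[i];
            auto t1 = std::chrono::steady_clock::now();
            double ms = std::chrono::duration<double, std::milli>(t1 - t0).count();
            return std::pair<double, uint64_t>(ms, sum);
        };

        auto seq = walk(idx);
        auto rnd = walk(shuffled);
        std::printf("sequential: %.1f ms, random: %.1f ms (sums %llu/%llu)\n",
                    seq.first, rnd.first,
                    (unsigned long long)seq.second, (unsigned long long)rnd.second);
    }

The shuffled walk defeats the prefetcher, so almost every access goes all the way out to RAM; that's roughly the regime all this driver bookkeeping pushes the CPU into.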
The GPU's memory management is done by the CPU's queueing and memory management driver (which is why there's more than one GPU driver in both the Windows NT and Gallium3D (Linux, mostly) architectures). This, too, is all handled by the CPU and once again limited by RAM speed.
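In toy form (all of these structures are invented by me; the real thing is the kernel-side scheduler/memory manager, not this), the CPU-side bookkeeping looks roughly like this:

    // toy_gpu_queue.cpp - sketch of CPU-side GPU memory + command queue bookkeeping (invented).
    #include <cstdint>
    #include <cstdio>
    #include <deque>
    #include <unordered_map>

    struct GpuAllocation {
        uint64_t gpuAddress;   // where the kernel driver decided to place it in GPU address space
        size_t   size;
    };

    struct CommandBuffer {
        uint64_t fenceValue;   // used later to know when the GPU finished it
    };

    class KernelGpuDriver {
    public:
        // The CPU decides where every buffer/texture lives; the GPU just gets addresses.
        uint64_t allocate(uint32_t handle, size_t size) {
            uint64_t addr = nextAddress_;
            nextAddress_ += size;
            allocations_[handle] = {addr, size};
            return addr;
        }

        // The user-mode driver hands over a finished command buffer; the CPU queues and paces it.
        uint64_t submit(CommandBuffer cb) {
            cb.fenceValue = ++lastFence_;
            queue_.push_back(cb);           // all of this is CPU + RAM traffic, not GPU work
            return cb.fenceValue;
        }

        void pumpQueue() {
            while (!queue_.empty()) {
                // Pretend to kick the GPU; in reality: a register write, then an interrupt later.
                std::printf("kick command buffer, fence %llu\n",
                            (unsigned long long)queue_.front().fenceValue);
                queue_.pop_front();
            }
        }

    private:
        std::unordered_map<uint32_t, GpuAllocation> allocations_;
        std::deque<CommandBuffer> queue_;
        uint64_t nextAddress_ = 0x100000;   // made-up base of the GPU address space
        uint64_t lastFence_   = 0;
    };

    int main() {
        KernelGpuDriver drv;
        drv.allocate(1, 4 << 20);           // a 4 MB texture, placed by the CPU
        drv.submit(CommandBuffer{});
        drv.submit(CommandBuffer{});
        drv.pumpQueue();
    }

Every allocation and every submission is the CPU chasing pointers and filling queues in system RAM before the GPU gets to do anything.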
And we're not there yet, because the user-level drivers like the graphics API drivers all need to constantly switch between kernel calls and user-level calls. If you're interested in this terror: in Windows 7, open the Task Manager, go to Performance -> View -> click "Show Kernel Times" and see how half the CPU load is red almost all the time: that's (for example Win32) API->kernel calls. Oh yes, we have to dig through that shit too.
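If you want to poke at that user/kernel ping-pong yourself, here's a crude sketch. Windows assumed, and I'm using Sleep(0) merely as a stand-in for "any call that has to drop into the kernel and come back"; the numbers are whatever your box produces:

    // usermode_vs_kernel.cpp - crude demo of user/kernel round-trip overhead (Windows assumed).
    #include <windows.h>
    #include <chrono>
    #include <cstdio>

    int main() {
        const int N = 100000;

        // Pure user-mode work: never leaves ring 3.
        volatile unsigned acc = 0;
        auto t0 = std::chrono::steady_clock::now();
        for (int i = 0; i < N; ++i) acc += i;
        auto t1 = std::chrono::steady_clock::now();

        // Each Sleep(0) drops into the kernel (and may yield the timeslice), then comes back up.
        auto t2 = std::chrono::steady_clock::now();
        for (int i = 0; i < N; ++i) Sleep(0);
        auto t3 = std::chrono::steady_clock::now();

        auto us = [](auto a, auto b) {
            return std::chrono::duration<double, std::micro>(b - a).count();
        };
        std::printf("user-mode loop: %.0f us, %d kernel round-trips: %.0f us (acc=%u)\n",
                    us(t0, t1), N, us(t2, t3), acc);
    }

That gap, multiplied by the thousands of API calls a frame makes, is a big chunk of the red you see in the Task Manager graph.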
I hope you'll understand now why your $99999 rig is just a little faster than a medieval-age Xbox 360 that gets programmed directly, without all these layers of abstraction and constant CPU<->RAM operations. No, those console ports are not shitty.