You mean like the 60°C you pulled out of your hat?
This is an operating temperature that has been industry-standard since the first PCs were developed back in the 1980s. New components are introduced using new materials all the time, but nothing has managed to defeat the link between high temperatures and lifespan degradation yet.
Not all components in the PC will be made from these state-of-the-art materials. There's no point in your CPU safely running at 80°C if it melts the housing around it, is there?
Not really sure there's Tupperware in my computer.
This comment is really thick. I've mentioned several times that plastics are used in virtually all computer components. Tupperware is still one of the most durable and heat-resistant plastics ever created, often far beyond the grade used in computer tech.
But... but I thought I was supposed to keep it below 60°C at all times?
You're obviously eloquent and knowledgeable, but you still make generalising statements and talk about camel sweat. Intel's specification lists 73°C as the recommended maximum operating temperature:
http://ark.intel.com/de/products/808...up-to-3_90-GHz
He is using a stock cooler, which is designed to target this specification.
Yes, it's never a perfect world. I also like to use my hardware and run games maxed-out if I can. Occasionally, a program may cause the processor to jump up a bit and start really pulling the voltage. I'm not worried about 2-3 seconds, here and there. When I see 68°C, however, I take note and take steps to avoid that scenario again.
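If you want to keep an eye on this yourself, here's a rough sketch of the kind of watcher I mean. It assumes a Linux box with sysfs thermal zones (paths and zone names vary by machine, so check /sys/class/thermal on yours), and the 60°C ceiling is the one argued in this thread, not a vendor number:

```python
# Minimal temperature-watch sketch. Assumptions: Linux with sysfs thermal
# zones; the 60 C threshold is this thread's conservative ceiling, not a spec.
import glob
import time

THRESHOLD_C = 60.0

def read_temps_c():
    """Read every sysfs thermal zone, in degrees Celsius."""
    temps = {}
    for zone in glob.glob("/sys/class/thermal/thermal_zone*/temp"):
        try:
            with open(zone) as f:
                temps[zone] = int(f.read().strip()) / 1000.0  # millidegrees
        except (OSError, ValueError):
            pass  # zone may be unreadable on some hardware; skip it
    return temps

if __name__ == "__main__":
    while True:
        for zone, temp in read_temps_c().items():
            if temp > THRESHOLD_C:
                print(f"WARNING: {zone} at {temp:.1f} C (> {THRESHOLD_C} C)")
        time.sleep(5)
```

Nothing fancy; it just polls every few seconds and complains when a zone crosses the line, which is enough to catch those "2-3 seconds here and there" turning into minutes.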
And you are using words like "heat specification", "stock", "designed for", etc. I do not think you understood my argument at all. Let me explain even more clearly. Not just for you; for everyone who wishes to read:
Businesses lie to make money. Happens all the time. One of the most common practices, which became rampant in the 1970s and '80s with automobiles and then started cropping up all over US industries, is intentionally using cheap parts and manufacturing techniques to shorten a product's lifespan (planned obsolescence, in other words). This ensures more units sold over time. Early personal computers back in the 1980s and 1990s would run indefinitely. You didn't have parts "burning out", parts "failing" after heavy use, or motherboards "going on the fritz". (Sure, it happened, but far less often than it does today.) The Commodore 64 that we purchased in 1983 (and that had been stored in our attic since 1987) still turned on and ran just fine when my nephew dug it out a few years ago. Why do modern computers only last, like, 3-4 years before their video card "dies"? Because parts are now built to be replaced.
As new tech is introduced, especially for gaming, the industry wants to ensure sales through rollover, upgrades, and claims that the next generation of hardware is capable of so much more. "Overclocking" and "cooling systems" began as hacking ventures, and originally, companies fought them fiercely. In the early 1990s (when computer components used the exact same materials as today), companies across the board issued formal warnings that any sort of "overclocking" would damage your hardware and void your warranty. That is not what the parts were made for, and no one has introduced a standard titanium composite to replace silicon wafers yet.

Then private companies started doing it: your friendly corner "PC Repair Shop" would overclock your rig and add a few fans, and they would even offer their own warranty. Whoa, nelly! The lawyers ran into work the next day! Lawsuits followed, basically claiming that purposefully degrading hardware was an attack on the brand name of the computer company that originally built it. Those same computer companies then started seeing how much money could be made with this "gaming PC" thing, and they opened businesses focused on developing PCs that ran cool and provided performance without the need for overclocking or extra cooling, because pushing hardware like that would damage it. But people continued to spend money, tweak, and overclock. So gaming companies sort of caved in and started offering "professional overclocking and cooling solutions". Eventually, the modern, accepted "Gaming PC" was born: Alienware, Falcon-Northwest, CyberPower, etc.
But the hardware never changed. It's still made from the same materials that the pioneers of the PC industry built with their own two hands back in the '70s. And they claimed you could push it only so far without damaging the hardware. One of the biggest rules was that you wanted to keep internal temperatures (core temperatures, in modern-speak) for your hardware below 60°C. That was a feat that was even more difficult back then.
Now that we finally have the tech to run cool and safe, businesses are encouraging people (or at least not discouraging them) to stress their hardware to the limit, because it drastically increases sales. Even better, add "accepted" or "professional" overclocking, and you cut the lifespan down even more! Money-money-money!!!
Again, it's not "wrong" to do that, really. It's business. But I know for a fact that most people who spend a lot of money on an expensive gaming system or upgrade do not want to intentionally decrease the lifespan of their system (and heat-stressing parts can shave years off that lifespan). So when people misinterpret things like "maximum safe operating temperature", it bugs me when businesses simply allow that misinterpretation to fester and feed off the ignorance of their consumers.
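To put a rough number on "years off the lifespan": reliability engineers often use the Arrhenius model to estimate how much faster heat ages silicon. Here's a back-of-the-envelope sketch; the 0.7 eV activation energy is an assumed, typical textbook value for silicon wear-out mechanisms, not something from any vendor datasheet:

```python
# Back-of-the-envelope Arrhenius acceleration factor: a common (if crude)
# model for how temperature speeds up electronics aging.
import math

BOLTZMANN_EV = 8.617e-5  # Boltzmann constant in eV/K
E_A = 0.7                # ASSUMED activation energy in eV (textbook-ish value)

def acceleration_factor(t_cool_c: float, t_hot_c: float) -> float:
    """How many times faster thermally driven wear proceeds at t_hot_c
    than at t_cool_c, per the Arrhenius model."""
    t_cool_k = t_cool_c + 273.15
    t_hot_k = t_hot_c + 273.15
    return math.exp((E_A / BOLTZMANN_EV) * (1 / t_cool_k - 1 / t_hot_k))

# Running at 80 C instead of 60 C:
print(f"{acceleration_factor(60, 80):.1f}x faster aging")  # roughly 4x
```

Under those assumptions, sustained 80°C ages the silicon roughly four times as fast as 60°C, which is why "years off the lifespan" stops sounding like an exaggeration.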
(I do take situations like this to heart, as I've had to deal with it through my job for over 20 years. Most people who read this [and that number will be small enough already!] will likely look at it and say: "Who really cares? Everyone's systems run hotter now -- it's just the way it is, gramps. Get with the times!" All well and good, unless you're the person who winds up frying your CPU 6 months from now with absolutely no money for a replacement. Then all of a sudden it makes sense. If it helps even one person, I'm satisfied.)
I'm not saying he shouldn't get a better cooler to improve his temps, but the way you are arguing, it might seem to him that his computer will fail or die because of this, which is generally not the case, provided temps do not rise any further.
Well, yeah. There's no guarantee of anything in life. I've owned and overclocked my own systems -- that's how I know most of this. Back in the mid '90s, I did a lot of work on (and killed) two systems while I was in university. Learned a ton about how it all works and just how deadly heat can be. Then I learned how to do it right when I purchased a Falcon-NW Mach V in 2004 and had a big talk with the technicians about the nature of overclocking and how my system just wouldn't need it. About how they all recommended against it if you wanted the best stability. And about how running cool would be better in the long run. That PC is an AMD Athlon 64 with a GeForce 6000 nForce series, and it still runs today. (Once again, my nephew claimed it.)