# How on earth did we get to this point of hardware development stagnation?



## SmOgER

In the past, GPUs and especially CPUs had lots of power and thermal margin. New hardware focused on improving the architecture rather than pushing clocks.
Nowadays chips are pushed to their limits, both thermally and in clock speed. It got to the point where Apple developed its own architecture, which started as an ARM chip for mobile devices and is now competing with desktop CPUs via the M-series silicon. Just think about the difference in development progress between Apple and Intel.
I mean, that's sad.


----------



## o1dschoo1

SmOgER said:


> In the past, GPUs and especially CPUs had lots of power and thermal margin. New hardware focused on improving the architecture rather than pushing clocks.
> Nowadays chips are pushed to their limits, both thermally and in clock speed. It got to the point where Apple developed its own architecture, which started as an ARM chip for mobile devices and is now competing with desktop CPUs via the M-series silicon. Just think about the difference in development progress between Apple and Intel.
> I mean, that's sad.


Because 5 GHz has been the limit on clocks for a long, long time. Thermal limits plus electromigration issues cause this. Above 4.5 GHz, any modern chip starts to require huge jumps in voltage. It's better to increase IPC, since higher clocks mean more heat plus more voltage, which makes electromigration more of an issue. On top of this, high all-core speeds make it insanely hard to keep TDPs low: 5 GHz all-core on a 7900X is around a 350 W load, while a 10900K at the same clock is more like 250 W depending on voltage. Intel really found out the limits when they tried to push the Pentium 4 past 5 GHz at stock.
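To put rough numbers on that voltage/clock tradeoff: CMOS dynamic power scales roughly as capacitance × voltage² × frequency, so when the last few hundred MHz demand a big voltage bump, power explodes. The capacitance constant and the per-clock voltages below are purely illustrative assumptions, not measurements of any real chip:

```python
# Rough CMOS dynamic power model: P ~ C * V^2 * f.
# The capacitance and the voltage each clock "needs" are assumptions
# for illustration, not real chip measurements.

def dynamic_power(cap, volts, freq_ghz):
    """Dynamic switching power, in arbitrary units."""
    return cap * volts ** 2 * freq_ghz

CAP = 1.0  # assumed effective switched capacitance (arbitrary units)

# Assumed (frequency, stable voltage) pairs; voltage climbs steeply
# past ~4.5 GHz, as described above.
operating_points = [(4.0, 1.10), (4.5, 1.20), (5.0, 1.40)]

baseline = dynamic_power(CAP, 1.10, 4.0)
for freq, volts in operating_points:
    ratio = dynamic_power(CAP, volts, freq) / baseline
    print(f"{freq:.1f} GHz @ {volts:.2f} V -> {ratio:.2f}x power")
```

With these assumed voltages, going from 4.0 to 5.0 GHz is a 25% clock gain for roughly double the power, which is exactly why raising IPC is the better lever.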


----------



## SmOgER

And? That's exactly what I mean. They are pushing the architecture to its limits rather than developing the architecture itself.
Now, at the same time, Apple releases the M1, which scores around 8000 in Cinebench R23 MT at about 14 W. I mean... what the **** was Intel doing all this time while Apple was developing ARM?


----------



## Peanuts4

This is nothing; hop back a few years to when we were stuck on DX9 for over a decade. At least buying more than 4 cores is normal now.


----------



## o1dschoo1

SmOgER said:


> And? That's exactly what I mean. They are pushing the architecture to its limits rather than developing the architecture itself.
> Now, at the same time, Apple releases the M1, which scores around 8000 in Cinebench R23 MT at about 14 W. I mean... what the **** was Intel doing all this time while Apple was developing ARM?


ARM can't run x86 software, so it's a totally different ball game.


----------



## Paradigm Shifter

A combination of factors. Physics and backwards compatibility being the primary two, really.

I for one don't really want to go back to the days when buying a system from vendor X meant that you could only run software provided by that vendor, or compiling absolutely everything from source, often with lots of platform specific fixes/patches. Leave that to consoles - and even they have moved to the same basic architecture with the last two generations!

As well as Rosetta 2 performs on Apple's M1 silicon, it's little better than a bandage - I really do doubt Apple will support it long term... although perhaps I am doing them a disservice.

Intel had a fully 64-bit architecture long before AMD. It was called Itanium, and while it had many advantages, the lack of decent backwards compatibility and high cost meant adoption was low. Apple is actually working from a very interesting position for this: their devices are fashion accessories, as well as being functional. Even if the M1 silicon hadn't been so great (and really, only time will tell how well it scales, which is part of the problem for x86) I think Apple would have recouped their costs regardless from those who buy Apple because it's Apple, and no matter the cost. That said, Apple also have the ability to see what someone else has done (a little too early) and polish the idea for sale. See: MP3 players, tablets, "smart" watches, "smart" phones...

There are lots of other architectures still out there, not just ARM... I quite liked SPARC myself... but Intel (and to a lesser extent AMD, as they are the only surviving silicon manufacturer with an x86 license) have had a stranglehold on the market. Performance, support, unscrupulous business practices, poor strategy from competitors (CELL should have been way more popular)... all have had an effect to a greater or lesser degree.

TL;DR: physics imposes hard limits, but the "soft" limits of backwards compatibility are actually just as hard to overcome.


----------



## 113802

They were focusing on storage, 5G, mobile SoCs. When Skylake was released they mentioned they were switching their focus to mobile. Then 4 years later we see Lakefield. 






Future Intel chip tech will sacrifice speed gains for power efficiency: "To keep up with the demands of a connected world, future Intel chips may focus on energy efficiency over absolute computing power." (www.pcworld.com)


----------



## Paradigm Shifter

WannaBeOCer said:


> They were focusing on storage, 5G, mobile SoCs. When Skylake was released they mentioned they were switching their focus to mobile. Then 4 years later we see Lakefield.
> 
> Future Intel chip tech will sacrifice speed gains for power efficiency (www.pcworld.com)


And they've sold their storage division to SK Hynix, abandoned 5G, abandoned mobile SoCs... so basically just poor business decisions. They also pumped money into the Compute Stick (abandoned), Edison (abandoned), Itanium (abandoned)...

I just hope oneAPI sorts its act out and stays around for long enough to become a half decent competitor for CUDA, as AMD can't seem to maintain focus on that front either: OpenCL, HCA, HSA, ROCm... ROCm is _still_ a mess, which isn't helping things at all.

It says something about how powerful Intel is, that they can make that many major missteps and not go bankrupt - Hell, still make a profit!


----------



## ILoveHighDPI

o1dschoo1 said:


> Arm can't run x86 software so it's a totally different ball game.


I think the general point is that we might have reached a fork in computer tech that thoroughly obsoletes x86.


----------



## o1dschoo1

ILoveHighDPI said:


> I think the general point is we might have reached a fork in computer tech that thoroughly obsoletes X86.


And rewrite every single bit of Windows, lmao...


----------



## MaxHughes

SmOgER said:


> In the past the GPUs and especially CPUs had lots of power and thermal margin. New hardware was focused on improving architecture rather than pushing/clocking chips.
> Nowadays the chips are pushed to it's limits both thermally and in regards to clock speeds. It got to the point where Apple developed it's own architecture which started as ARM chip for mobile devices and is now competing with desktop CPUs using silicon M chips. Just think about it. That development progress difference between Apple and Intel.
> I mean, that's sad.


Dual-channel RAM to quad, and quad to 8-channel, sounds like an improvement to me. Core Duo to 64 cores sounds like an improvement to me. 500/500 MB/s SSDs to Gen4 7000/6600 MB/s NVMe drives sounds like an improvement to me. A 64 GB RAM disk doing 8000/8000 on a Gen3 motherboard sounds like an improvement to me. 40 CU to 60 CU to 80 CU to 160 CU on the next-gen 7000-series Radeon GPUs sounds like an improvement to me. Going from a few guys overclocking a stock chip to anybody being able to buy an X model with the snot clocked out of it out of the box sounds like an improvement to me.


----------



## Asmodian

SmOgER said:


> Now, at the same time, Apple releases the M1, which scores around 8000 in Cinebench R23 MT at about 14 W. I mean... what the **** was Intel doing all this time while Apple was developing ARM?


Patents, backward compatibility, and money.

ARM was developed as a very power-efficient architecture. It is easier to scale it up to a higher-performance part while keeping power low than it is to scale x86 down to lower power while keeping the performance.

If creating an entirely new architecture did not require redoing a crazy amount of software, we could have much better CPUs today; unfortunately, that isn't how it works. We have to build on what already exists for it to be useful. x86 is good today because you can run software designed for x86 on it, not because it is a great design for a modern architecture.

This is one of the reasons CPU startups and in-house projects are getting more popular today. We (humans, i.e. Intel and AMD) have pushed x86 really far past its original design, so there is a lot of room for improvement if we drop backward compatibility, and there is huge demand for compute performance at lower power levels.


----------



## Imglidinhere

SmOgER said:


> In the past, GPUs and especially CPUs had lots of power and thermal margin. New hardware focused on improving the architecture rather than pushing clocks.
> Nowadays chips are pushed to their limits, both thermally and in clock speed. It got to the point where Apple developed its own architecture, which started as an ARM chip for mobile devices and is now competing with desktop CPUs via the M-series silicon. Just think about the difference in development progress between Apple and Intel.
> I mean, that's sad.


What hardware stagnation are you referring to, exactly? True, Intel has hit a few snags, and 5 GHz seems to be their literal limit for frequency, but that's absurdly high regardless of the technology. It's worth noting that AMD has made stellar strides generation over generation. I have been consistently floored by what I've witnessed: generational gains nearing what Qualcomm was managing, but on x86 CPUs. I upgraded from the Ryzen 5 2600 to the 3600 and saw a marked 30% uplift in multi-core performance. I was expecting SOMETHING, but nothing to this degree, and that was back in January last year!

I can understand your point about stagnation if it's applied to Intel. I guess I can, at least. It's more that they'd lost the talent necessary to make substantial improvements. For starters, they had Jim Keller for a few years before he jumped ship, and they've recently rehired the engineer who designed Nehalem and started them on their path to glory. I'd say give it two years and we'll probably have Intel truly back on top again with AMD nipping at their heels... though it IS worth noting that Lisa Su brought AMD back from the brink and has turned the company into something else entirely, and she's made it abundantly clear that they will continue to strive for higher-performance CPUs with each generation. They can't afford to slow down.

Lastly, Apple's M1 CPU is POWERFUL. It's so good that even with x86 emulation it's handily matching mid-range offerings from AMD and Intel, which is nothing short of impressive, not to mention its single-core IPC is through the roof. All from an ARM-based CPU. While, yes, it IS geared toward iOS and such, it's still exceedingly powerful outside of its own environment, and the iGPU in that thing is nothing short of *potent*.


So in conclusion, if you thought CPUs were going to somehow break the laws of physics and reach ever higher clock speeds, you'd be mistaken. Architectural improvements are how you gain performance, and you HAVE to improve IPC as you shrink to a smaller fabrication node; after a point, clock speed will have to decrease due to the laws of physics. I wouldn't be surprised if 4 GHz becomes the new super high end in the coming three to four years. In any case, I guarantee that any CPU you buy right now will happily smash your X58 setup at 4 GHz, so the CPU options are plenty viable. (I mean that in a good way!)


----------



## Imglidinhere

WannaBeOCer said:


> They were focusing on storage, 5G, mobile SoCs. When Skylake was released they mentioned they were switching their focus to mobile. Then 4 years later we see Lakefield.
> 
> Future Intel chip tech will sacrifice speed gains for power efficiency (www.pcworld.com)


I was using laptops at the time, and I remember people looking at Kaby Lake when it first launched and freaking out about how Intel wasn't improving at all. In the mobile department, you bet your butt they'd made substantial improvements to power management! Heck, the improvements were substantial enough that with Skylake they were offering real quad cores for the first time in their mobile i5 lineup, and better-optimized versions of the same chips with Kaby Lake! So yeah... Intel made some good choices over those years. I'm sure they saw they couldn't readily improve 14nm much further and were betting that 10nm was right around the corner anyhow, so why not improve efficiency in the meantime? I mean... what did they have as competition at that point? Piledriver? Please... xD


----------



## bwana

It's not just hardware that has been a boondoggle.









Classic Programmer Paintings (Twitter: @progpaintings): "Painters and Hackers: nothing in common whatsoever, but this is software..." (classicprogrammerpaintings.com)


----------



## Mad Pistol

Apple has always kept their products in a walled garden where they control 98% of the dev process. With their shift to ARM across all of their products, they now control 100% of it. It's this level of control over their platform that allows them to reinvent themselves from time to time, which means that if they find a better tech to power their products, they can go after it with very few drawbacks for the consumers using them.

The problem with all x86-based products is as has been explained earlier in this thread: each time a new product is launched, it also has to support all previous versions of said product. This creates insane complexity and horrible power efficiency relative to performance. That's the reason Intel/AMD/Nvidia have always relied on node shrinks to keep power in check for newer products. Unfortunately, we're now bumping up against the laws of physics when developing new nodes, so we need to go back to the drawing board as far as transistor design goes. There are several technologies coming that may give us more performance without the drawbacks of silicon-based transistors, but as of right now the technology is pushed to the ragged edge because there's nothing else that can be done; the current SOI designs are reaching their limits.

So where do we go from here? I guess we'll find out soon.


----------



## o1dschoo1

Mad Pistol said:


> Apple has always kept their products in a walled garden where they control 98% of the dev process. With their shift to ARM across all of their products, they now control 100% of it. It's this level of control over their platform that allows them to reinvent themselves from time to time, which means that if they find a better tech to power their products, they can go after it with very few drawbacks for the consumers using them.
> 
> The problem with all x86-based products is as has been explained earlier in this thread: each time a new product is launched, it also has to support all previous versions of said product. This creates insane complexity and horrible power efficiency relative to performance. That's the reason Intel/AMD/Nvidia have always relied on node shrinks to keep power in check for newer products. Unfortunately, we're now bumping up against the laws of physics when developing new nodes, so we need to go back to the drawing board as far as transistor design goes. There are several technologies coming that may give us more performance without the drawbacks of silicon-based transistors, but as of right now the technology is pushed to the ragged edge because there's nothing else that can be done; the current SOI designs are reaching their limits.
> 
> So where do we go from here? I guess we'll find out soon.


Where do we go from here? Actually start coding software for multi-core usage and make use of all these cores. Improve cache speeds and core-to-core latency.
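A minimal sketch of that idea: splitting an embarrassingly parallel job across cores instead of waiting for higher clocks. The workload and chunk count are made-up illustration, not a benchmark:

```python
# Split an independent workload into chunks and run them on separate
# cores via worker processes (sidestepping Python's GIL).
from concurrent.futures import ProcessPoolExecutor

def partial_sum(bounds):
    """Sum of squares over a half-open range; one chunk of the job."""
    lo, hi = bounds
    return sum(i * i for i in range(lo, hi))

def parallel_sum_squares(n, workers=4):
    """Split [0, n) into chunks and farm them out to worker processes."""
    step = max(1, n // workers)
    chunks = [(i * step, min((i + 1) * step, n)) for i in range(workers)]
    chunks[-1] = (chunks[-1][0], n)  # last chunk absorbs the remainder
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(partial_sum, chunks))

if __name__ == "__main__":
    print(parallel_sum_squares(1_000_000))
```

The structure matters more than the language: work only spreads across cores when it's decomposed into independent chunks like this, which is exactly what a lot of desktop software still doesn't do.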


----------



## Digitalwolf

SmOgER said:


> It got to the point where Apple developed its own architecture, which started as an ARM chip for mobile devices and is now competing with desktop CPUs via the M-series silicon.


I'm not sure why this is supposed to be a "thing". It wasn't all that long ago that Intel had an amazing desktop CPU because of mobile development. Then there was this amazing diversity at the consumer level because that chip existed... oh wait.

Stagnation, I guess, is subjective. The only stagnation I personally see is companies trying to push sales and some segment buying "new" stuff they never fully take advantage of. Just like the auto industry... etc etc

Or are we all just bored waiting for widely available consumer quantum computers, where we can turn one bug state into two at the same time!


----------



## o1dschoo1

What people don't realize is that processors are insanely powerful as is right now. If you think about it, Intel is about to drop a bomb anyway. Rocket Lake dropped to 8 cores for a reason, and that reason is a new HEDT platform. I'm willing to bet the lowest processor is going to be a 10-core, as they didn't want mainstream stepping on HEDT sales for workstations and super-high-end rigs.


----------



## ILoveHighDPI

o1dschoo1 said:


> and rewrite every single bit of windows lmao....


That's not a bad thing, though.
They're already trying to switch to UWP; it's just that right now there are practically no benefits to consumers. If they can show that a new hardware fork performs much better, it might actually gain some traction.
Whether right now is actually the best time to develop a new PC ecosystem is another question. Give it 10 years and people will have some pent-up demand from looking at Apple devices.


----------



## o1dschoo1

ILoveHighDPI said:


> That's not a bad thing, though.
> They're already trying to switch to UWP; it's just that right now there are practically no benefits to consumers. If they can show that a new hardware fork performs much better, it might actually gain some traction.
> Whether right now is actually the best time to develop a new PC ecosystem is another question. Give it 10 years and people will have some pent-up demand from looking at Apple devices.


Rewrite every bit of software from the past 20 years, too. No old games would work, nothing. You're talking about starting from scratch.


----------



## Ashura

o1dschoo1 said:


> Where we go from here? Actually start coding stuff for multi core usage and make use of all these cores. Improve cache speeds core latency.


This^
Hardware is plenty powerful. It's the software side that needs optimization.


----------



## warpuck

I think a Qualcomm Snapdragon would work fine as the base for a notebook or a desktop, BUT you would need Linux for it to be useful. The 820 and later 64-bit chips can run Win 10.
The downside: not many games or consumer commercial software go the Linux way.
Linux is a bit slimmer and more efficient, but open source does pose one problem for commercial development.
That has a history going all the way back to MS-DOS and Windows 3.1: there was encrypted code in MS-DOS that needed to be there for Windows 3.1 and 3.11 to boot, and after booting Windows it had no other function.


----------



## o1dschoo1

warpuck said:


> I think a Qualcomm Snapdragon would work fine as the base for a notebook or a desktop, BUT you would need Linux for it to be useful. The 820 and later 64-bit chips can run Win 10.
> The downside: not many games or consumer commercial software go the Linux way.
> Linux is a bit slimmer and more efficient, but open source does pose one problem for commercial development.
> That has a history going all the way back to MS-DOS and Windows 3.1: there was encrypted code in MS-DOS that needed to be there for Windows 3.1 and 3.11 to boot, and after booting Windows it had no other function.


There's Wine and all that jazz, but I've found 3D performance suffers under Linux.


----------



## kairi_zeroblade

When is 128-bit computing coming?


----------



## o1dschoo1

kairi_zeroblade said:


> when is 128 bit computing coming?


Never


----------



## Dogzilla07

kairi_zeroblade said:


> when is 128 bit computing coming?


No sooner than a thousand years from now, give or take a bit.


----------



## 113802

kairi_zeroblade said:


> when is 128 bit computing coming?


We've had 128-bit computing since the early 2000s; it just doesn't make sense for general-purpose computing. Even if it were beneficial, AMD and Microsoft would delay adoption just like they did with 64-bit. The main reason we're now seeing 64-bit apps is that developers didn't want to maintain two versions after Apple dropped 32-bit support with Catalina. We'll soon see the same if Apple's ARM chips succeed.

Backwards compatibility is the cause of stagnation.
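For context on the "we've had it" part: SSE2 brought 128-bit XMM registers to x86 around 2000 (for SIMD work, not general-purpose integers), and wider-than-64-bit math has always been possible in software anyway. A toy sketch of 128-bit wraparound arithmetic in plain Python, illustrative only and not tied to any particular hardware:

```python
# Emulate an unsigned 128-bit register with Python's big ints.
U128_MASK = (1 << 128) - 1  # 2**128 - 1, the max 128-bit value

def add_u128(a, b):
    """Add modulo 2**128, i.e. with 128-bit wraparound semantics."""
    return (a + b) & U128_MASK

# Overflow wraps around, just like a fixed-width hardware register.
print(hex(add_u128(U128_MASK, 1)))  # wraps to 0x0
```

The point stands: 64-bit pointers already address far more memory than any machine ships with, so general-purpose 128-bit registers would solve a problem almost nobody has.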


----------



## ILoveHighDPI

o1dschoo1 said:


> Rewrite every bit of software from the past 20 years, too. No old games would work, nothing. You're talking about starting from scratch.


That's what Apple is doing.
If the performance is there the market will have to follow eventually.


----------



## kairi_zeroblade

WannaBeOCer said:


> We've had 128-bit computing since the early 2000s; it just doesn't make sense for general-purpose computing. Even if it were beneficial, AMD and Microsoft would delay adoption just like they did with 64-bit. The main reason we're now seeing 64-bit apps is that developers didn't want to maintain two versions after Apple dropped 32-bit support with Catalina. We'll soon see the same if Apple's ARM chips succeed.
> 
> Backwards compatibility is the cause of stagnation.


Why is Apple the standard.. lol..


----------



## Digitalwolf

kairi_zeroblade said:


> Why is Apple being the standard..lol..


Can you imagine if Apple really was the standard? I would probably retire again and stay out of the PC market. Clients convinced me to semi-unretire... but Apple? Yeah, no thanks. Yes, I'm biased... can you imagine this site if Apple was THE standard?


----------



## o1dschoo1

ILoveHighDPI said:


> That's what Apple is doing.
> If the performance is there the market will have to follow eventually.


That's even more reason not to. Apple = garbage.


----------



## 113802

Digitalwolf said:


> Can you imagine if Apple really was the standard? I would probably retire again and stay out of the PC market. Clients convinced me to semi-unretire... but Apple? Yeah, no thanks. Yes, I'm biased... can you imagine this site if Apple was THE standard?


Depends on what market you're referring to, because Macs are definitely the industry standard in many markets I manage, and to be honest I prefer managing them over Windows PCs.

People on here mainly prefer to build and tweak their own computers, which is why many don't discuss their prebuilt Macs here. When I had my Vega 64/Radeon VII I was running macOS on this desktop. I would still be using macOS on my desktop if I hadn't upgraded to a Titan RTX, which doesn't have any official driver support, which is why I switched to Solus. Even if macOS became a standard, there wouldn't be much to discuss once Apple retires x86, unless they bring back an affordable Mac Pro for home use. I doubt we'll be able to port ARM macOS to other ARM hardware.

Now if we look at the gaming market, we have to consider the strategic move Apple made by bringing practically their entire iOS app library (with approval) to macOS. They also made their Metal API developer tools available for Windows, which will open the door for current Windows game developers to develop for iOS/macOS.









Metal Overview - Apple Developer: "Metal powers hardware-accelerated graphics on Apple platforms by providing a low-overhead API, rich shading language, tight integration between graphics and compute, and an unparalleled suite of GPU profiling and debugging tools." (developer.apple.com)


----------



## o1dschoo1

WannaBeOCer said:


> Depends what market you're referring to because Macs are definitely the industry standard in many markets I manage and to be honest I prefer managing them over Windows PCs.
> 
> People on here mainly prefer to build and tweak their own computers which is why many don't discuss their prebuilt Macs on here. When I had my Vega 64/Radeon VII I was running macOS on this desktop. I would still be using macOS on my desktop if I didn't upgrade to a Titan RTX which doesn't have any official driver support which is why I switched to Solus. Even if macOS became a standard there wouldn't be much to discuss once Apple retires x86 unless they bring back an affordable Mac Pro for home use. I doubt we'll be able to port ARM macOS to other ARM hardware.
> 
> Now if we look at the gaming market, we have to consider the strategic move Apple made by bringing practically their entire iOS app library (with approval) to macOS. They also made their Metal API developer tools available for Windows, which will open the door for current Windows game developers to develop for iOS/macOS.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Metal Overview - Apple Developer (developer.apple.com)


Yup, Hackintosh is gonna be dead fairly soon.


----------

