Unfortunately, the death of Moore's law means they need to get more speed from somewhere to have new products to sell, so I expect this'll continue for high-end products.
OTOH, at least low-end products get dragged upwards too; it's amazing how much CPU you can get in 32 W.
Applications have been growing more and more thread-aware, so we're likely to see core counts continue to increase for some time.
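Scaling with core count is mostly a software problem at this point. A minimal sketch of what "thread-aware" means in practice (the workload here is made up for illustration):

```cpp
// Sketch: split an embarrassingly parallel workload across however many
// hardware threads the CPU reports. The workload itself is a placeholder.
#include <algorithm>
#include <cstdint>
#include <iostream>
#include <numeric>
#include <thread>
#include <vector>

int main() {
    const unsigned n_threads = std::max(1u, std::thread::hardware_concurrency());
    const std::size_t n = 100'000'000;

    std::vector<std::uint64_t> partial(n_threads, 0);
    std::vector<std::thread> workers;

    for (unsigned t = 0; t < n_threads; ++t) {
        workers.emplace_back([&partial, t, n, n_threads] {
            std::uint64_t local = 0;  // accumulate locally to avoid false sharing
            for (std::size_t i = t; i < n; i += n_threads)
                local += i % 7;
            partial[t] = local;
        });
    }
    for (auto& w : workers) w.join();

    std::cout << n_threads << " threads, total = "
              << std::accumulate(partial.begin(), partial.end(), std::uint64_t{0})
              << '\n';
}
```

Code written like this keeps getting faster as core counts climb, with no changes, which is exactly why vendors can keep shipping more cores.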
They might focus more on branch prediction and on reducing the penalty for mispredicts, which won't be as impressive as raw clock speed or IPC gains but could significantly improve performance on real-world workloads. Maybe some form of deep learning or statistical analysis, or even JIT-compiling commonly called routines directly to microcode to skip instruction decoding.
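You can already measure how much mispredicts cost on current hardware with the classic sorted-vs-unsorted trick; a quick sketch (numbers will vary by CPU):

```cpp
// Sketch of why mispredict penalties matter: the same loop over the same
// data runs much faster when the branch is predictable (sorted input).
#include <algorithm>
#include <chrono>
#include <cstdint>
#include <iostream>
#include <random>
#include <vector>

static std::uint64_t sum_over_threshold(const std::vector<int>& v) {
    std::uint64_t sum = 0;
    for (int x : v)
        if (x >= 128)   // hard to predict on random data, trivial on sorted
            sum += x;
    return sum;
}

static double time_ms(const std::vector<int>& v) {
    auto t0 = std::chrono::steady_clock::now();
    volatile std::uint64_t sink = sum_over_threshold(v);
    (void)sink;
    auto t1 = std::chrono::steady_clock::now();
    return std::chrono::duration<double, std::milli>(t1 - t0).count();
}

int main() {
    std::vector<int> data(50'000'000);
    std::mt19937 rng(42);
    std::uniform_int_distribution<int> dist(0, 255);
    for (auto& x : data) x = dist(rng);

    const double unsorted = time_ms(data);
    std::sort(data.begin(), data.end());   // same data, predictable branch
    const double sorted = time_ms(data);

    std::cout << "unsorted: " << unsorted << " ms, sorted: " << sorted << " ms\n";
}
```

Caveat: at higher optimization levels the compiler may turn that branch into a conditional move, which hides the effect entirely; this is an illustration of the penalty, not a rigorous benchmark.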
With enough cores, low-end products might end up ditching the iGPU in favor of a return to software rendering, freeing up room on the die.
We'll probably see more instruction-set extensions that accelerate AI workloads or other commonly used algorithms, like the extensions that already exist for SHA-2 and AES.
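AES-NI is a good template for what those look like: one instruction does an entire round of an algorithm that would otherwise be dozens of shifts and table lookups. A minimal sketch using the x86 intrinsic (compile with -maes; this is a single round with a placeholder key, not a full AES implementation, which would need the key schedule and 10+ rounds):

```cpp
// Sketch: one AES encryption round via the AES-NI intrinsic.
// The round key below is a zeroed placeholder; real AES-128 derives
// round keys via the key schedule and runs 10 rounds.
#include <cstdint>
#include <cstdio>
#include <immintrin.h>   // SSE + AES-NI intrinsics

int main() {
    alignas(16) std::uint8_t block[16] = {0};   // 128-bit plaintext block
    alignas(16) std::uint8_t key[16]   = {0};   // placeholder round key

    __m128i state     = _mm_load_si128(reinterpret_cast<const __m128i*>(block));
    __m128i round_key = _mm_load_si128(reinterpret_cast<const __m128i*>(key));

    // Initial AddRoundKey, then one full middle round (ShiftRows,
    // SubBytes, MixColumns, AddRoundKey) in a single instruction.
    state = _mm_xor_si128(state, round_key);
    state = _mm_aesenc_si128(state, round_key);

    alignas(16) std::uint8_t out[16];
    _mm_store_si128(reinterpret_cast<__m128i*>(out), state);
    for (int i = 0; i < 16; ++i) std::printf("%02x", out[i]);
    std::printf("\n");
}
```

The payoff is the same as with SHA extensions: an order-of-magnitude speedup on a hot loop without raising clocks at all, which is exactly the kind of win left once frequency scaling is gone.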