Apple has revealed most of the major details for its new M2 processor. The reveal was full of the usual Apple hyperbole, including comparisons with PC hardware that failed to disclose exactly what was being tested. Still, the M1 has been a good chip, especially for MacBook laptops, and the M2 looks to improve on the design and take it to the next level. The catch is that Apple has to play by the same rules as every other chip designer, and it can't work miracles.
The M1 was the first 5nm-class processor to hit the market back in 2020. Two years later, TSMC's next-generation 3nm technology isn't quite ready, so Apple has to make do with an optimized N5P node, a "second-generation 5nm process." Transistor density hasn't changed much as a result, so Apple has to use a larger chip to get more transistors and more performance. The M1 had 16 billion transistors, and the M2 bumps that up to 20 billion.
Overall, Apple claims CPU performance will be up to 18% faster than its previous M1 chip, and the GPU will be 35% faster — note that Apple’s not including the M1 Pro, M1 Max, or M1 Ultra in this discussion. Still, I’m more interested in the GPU capabilities, and frankly, they’re underwhelming.
Yes, the M2 will have fast graphics for an integrated solution, but what exactly does that mean, and how does it compare with the best graphics cards? Without hardware in hand for testing, we can’t say exactly how it will perform, but we do have some reasonable comparisons that we can make.
Let’s start with the raw performance figures. Not all teraflops are created equal, as architectural design decisions certainly come into play, but we can still get some reasonable estimates by looking at what we do know.
As an example, Nvidia has a theoretical 9.0 teraflops of single precision performance on its RTX 3050 GPU, while AMD’s RX 6600 has a theoretical 8.9 teraflops. On paper, the two GPUs appear relatively equal, and they even have similar memory bandwidth — 224 GB/s for both cards, courtesy of a 128-bit memory interface with 14Gbps GDDR6. In our GPU benchmarks hierarchy, however, the RX 6600 is 30% faster at 1080p and 22% faster at 1440p. (Note that the RTX 3050 is about 15% faster in our ray tracing test suite.)
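If you want to check that math yourself, the numbers fall straight out of the specs. Here's a quick sketch in Python using only the figures quoted above; the function names are ours, and the benchmark deltas are from our own testing, not something you can compute from paper specs:

```python
# Back-of-the-envelope GPU spec comparison using the figures quoted above.

def gddr_bandwidth_gbps(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Memory bandwidth in GB/s: bus width (bits) x per-pin data rate (Gbps) / 8."""
    return bus_width_bits * data_rate_gbps / 8

# Both cards pair a 128-bit bus with 14 Gbps GDDR6.
print(gddr_bandwidth_gbps(128, 14))  # 224.0 GB/s

# Theoretical single-precision compute, in teraflops.
rtx_3050_tflops = 9.0
rx_6600_tflops = 8.9
print(rx_6600_tflops / rtx_3050_tflops)  # ~0.99 -- near parity on paper,
# yet the RX 6600 is ~30% faster at 1080p in our rasterization benchmarks.
```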
In terms of real-world performance per teraflop, Apple's GPUs behave much like AMD's. The M1, for example, was rated at a theoretical 2.6 teraflops and had 68 GB/s of bandwidth. That's about half the teraflops and one-third the bandwidth of AMD's RX 5500 XT, and in graphics benchmarks the M1 typically runs about half as fast. We don't anticipate any massive architectural updates to the M2 GPU, so it should remain broadly comparable to AMD's RDNA 2 GPUs.
Neither AMD nor Apple has Nvidia's dual FP32 pipelines (one of which also handles INT32 calculations), and AMD's Infinity Cache should serve much the same purpose in practice as Apple's claimed "larger L2 cache." That means we can focus on teraflops and bandwidth and get at least a ballpark estimate of performance (give or take 15%).
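To show the kind of napkin math we mean, here's a toy version of that ballpark estimate. It's a rough heuristic, not a benchmark; the 15% band is the caveat above, and the roughly 5.2-teraflops figure for the RX 5500 XT follows from the "about half" comparison:

```python
# Toy ballpark estimator: scale by the teraflops ratio, then attach the
# +/-15% uncertainty band mentioned above. This only makes sense for
# architecturally similar GPUs (e.g., Apple vs. AMD RDNA 2 here).

def ballpark_relative_perf(tflops_a: float, tflops_b: float, margin: float = 0.15):
    ratio = tflops_a / tflops_b
    return ratio * (1 - margin), ratio, ratio * (1 + margin)

# M1 (2.6 TFLOPS) vs. RX 5500 XT (~5.2 TFLOPS): roughly half as fast,
# which matches what we see in graphics benchmarks.
low, mid, high = ballpark_relative_perf(2.6, 5.2)
print(f"{low:.2f} - {high:.2f} (midpoint {mid:.2f})")  # ~0.43 - 0.58
```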
The M2 GPU is rated at just 3.6 teraflops. That's less than half the theoretical compute of the RX 6600 and RTX 3050, and it also lands below AMD's much-maligned RX 6500 XT (5.8 teraflops and 144 GB/s of bandwidth). That's not the end of the world for gaming, but we don't expect the M2 GPU to power through 1080p at maxed-out settings and 60 fps.
Granted, Apple is doing integrated graphics, and 3.6 teraflops is pretty decent as far as integrated solutions go. The closest comparison would be AMD’s Ryzen 7 6800U with RDNA 2 graphics. That processor has 12 compute units (CUs) and clocks at up to 2.2 GHz, giving it 3.4 teraflops. It also uses shared DDR5 memory on a dual-channel 128-bit bus, so LPDDR5-6400 like that in the Asus Zenbook S 13 OLED will provide 102.4 GB/s of bandwidth.
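For the curious, those Ryzen 7 6800U numbers come straight from the RDNA 2 and LPDDR5 specs. A quick sketch; the 64 shaders per compute unit and two FLOPS per shader per clock are standard RDNA 2 figures:

```python
# How the Ryzen 7 6800U figures above fall out of the specs.
# RDNA 2: 64 shaders per CU, 2 FLOPS per shader per clock (FMA).

def rdna2_tflops(compute_units: int, clock_ghz: float) -> float:
    return compute_units * 64 * 2 * clock_ghz / 1000

def lpddr5_bandwidth_gbps(bus_width_bits: int, transfer_rate_mtps: int) -> float:
    return bus_width_bits / 8 * transfer_rate_mtps / 1000

print(rdna2_tflops(12, 2.2))             # ~3.38 TFLOPS
print(lpddr5_bandwidth_gbps(128, 6400))  # 102.4 GB/s
```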
And that's basically the level of performance we expect from Apple's M2 GPU, again, give or take. It's much faster than Intel's existing integrated graphics solutions, and it totally blows away the 8th Gen Intel Core GPUs used in the last Intel-based MacBooks. But it won't be an awesome gaming solution; adequate is more the target.
One other interesting item of note is that Apple makes no mention of AV1 encode/decode support. AV1 is backed by some major companies, including Amazon, Google, Intel, Microsoft, and Netflix. So far, Intel is the only PC graphics company with AV1 encoding support, while AMD and Nvidia support AV1 decoding on their latest RDNA 2 (except Navi 24) and Ampere GPUs.
Apple also detailed its upcoming MetalFX Upscaling algorithm, which makes perfect sense to include. Apple uses high-resolution Retina displays on all of its products, and there's no way a 3.6-teraflops GPU with 100 GB/s of bandwidth can handle native 2560 x 1664 gaming without some help. Assuming Apple gets similar scaling to FSR 2.0 or DLSS 2.x, the M2 GPU could use a "Quality" mode and render at 1706 x 1109, upscaled to the native 2560 x 1664, and most people wouldn't really notice the difference. That's fewer pixels than 1920 x 1080, and the M2 should certainly be able to handle that well enough.
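For reference, FSR 2.0's Quality mode renders each axis at 1/1.5 of native resolution; assuming MetalFX Upscaling offers a comparable mode (our assumption, not something Apple has stated), the render target works out like this:

```python
# Render resolution for a 1.5x-per-axis "Quality" upscaling mode, as in
# FSR 2.0; we're assuming MetalFX Upscaling offers something comparable.

def quality_render_resolution(native_w: int, native_h: int, scale: float = 1.5):
    return int(native_w / scale), int(native_h / scale)

print(quality_render_resolution(2560, 1664))  # (1706, 1109)
# 1706 x 1109 is ~1.89 million pixels, fewer than 1920 x 1080 (~2.07 million).
```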
Let's also not forget that this is only the base model M2 announced so far. It's being used in the MacBook Air and MacBook Pro 13, just like the previous M1 variants, but there's a good chance Apple will also make more capable M2 solutions. The M1 Pro had up to 16 GPU cores compared to the base M1's eight. Doubling the M2's GPU core count and bandwidth should boost performance into the 7.2-teraflops range, roughly equivalent to an RX 6600 or RTX 3050 in theory. Doubling that again for an M2 Max with 40 GPU cores and 14.4 teraflops would put Apple in the same realm as the RX 6750 XT or even the RX 6800.
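The scaling behind those speculative numbers is just linear extrapolation from the base M2's 10-core, 3.6-teraflops GPU; the 20- and 40-core configurations are our guesses, not announced parts:

```python
# Linear extrapolation from the base M2 GPU (10 cores, 3.6 TFLOPS).
# The hypothetical "M2 Pro" and "M2 Max" core counts are speculative.

M2_GPU_CORES = 10
M2_TFLOPS = 3.6

def scaled_tflops(gpu_cores: int) -> float:
    return M2_TFLOPS * gpu_cores / M2_GPU_CORES

print(scaled_tflops(20))  # 7.2 TFLOPS -- RX 6600 / RTX 3050 territory, on paper
print(scaled_tflops(40))  # 14.4 TFLOPS -- closer to an RX 6750 XT or RX 6800
```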
For an integrated graphics solution running within a 65W power envelope, that would be very impressive. We still need to see the chips in action before drawing any final conclusions, however, and it’s a safe bet that dedicated graphics solutions will continue to offer substantially more performance.
Bottom Line
Apple’s silicon continues to make inroads against the established players in the CPU and GPU realms, but keep in mind that targeting efficiency first usually means lower performance. Dedicated AMD and Nvidia GPUs might use 300W or more on desktops, but the same chips can go into a laptop and use just 100W while still delivering 70–80% of the performance of their desktop equivalents.
Without hardware in hand and real-world testing, we don’t know precisely how fast Apple’s M2 GPU will be. However, even Apple only claims 35% more performance than the M1 GPU, which means the M2 will be quite a bit slower than the M1 Pro, never mind the M1 Max or M1 Ultra. And that’s fine, as it’s going into laptops that are more about all-day battery life than playing the latest games.
A reasonably performant integrated GPU combined with MetalFX Upscaling also holds promise, and game developers going after the Apple market will certainly want to look into using upscaling. That should deliver at least playable performance at the native display resolution (after upscaling), which is a good starting point. We're also interested in seeing how the passively cooled MacBook Air holds up under a sustained gaming workload.