Revealed on stage during Apple’s iPhone 11 event, the A13 is claimed to have both the fastest CPU and fastest GPU of any smartphone or tablet, improving on the A12 used in the 2018 models. The 64-bit chip’s CPU and GPU are said to be 20 percent faster than their A12 counterparts, with a variety of elements, some relating to machine learning, allowing it to perform over one trillion operations per second.
Power and machine learning
The A13 is made up of many sections, but the main three are the CPU, GPU, and Neural Engine. The CPU consists of two performance cores and four efficiency cores, used depending on the workload. The GPU contains four Metal-optimized cores, while the Neural Engine has eight cores of its own.
Also buried within the CPU region are a pair of “Machine Learning Accelerators,” which are used to perform matrix multiplication, a calculation used constantly in machine learning. Apple says the A13 performs this calculation six times faster than the A12 Bionic. It’s these accelerators that push the CPU to the trillion-operations milestone.
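For a sense of what those accelerators are dedicated to, here is the operation in its plainest form: a minimal, pure-Python matrix multiply. This is purely illustrative, not how the hardware or any Apple API works; the point is that a neural-network layer boils down to exactly this kind of multiply-and-accumulate loop, which dedicated silicon can run far faster than general-purpose code.

```python
# Illustrative only: a plain matrix multiply, the operation the A13's
# machine learning accelerators are built to perform in hardware.

def matmul(a, b):
    """Multiply matrix a (m x n) by matrix b (n x p), returning an m x p result."""
    n = len(b)        # inner dimension
    p = len(b[0])     # output columns
    assert all(len(row) == n for row in a), "inner dimensions must match"
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(p)]
            for i in range(len(a))]

# A tiny neural-network layer is just such a multiply:
# a row of activations (1 x 3) times a weight matrix (3 x 2).
activations = [[1, 2, 3]]
weights = [[1, 2],
           [3, 4],
           [5, 6]]
print(matmul(activations, weights))  # [[22, 28]]
```

Each output value is a sum of products, and a real model repeats this millions of times per inference, which is why a sixfold hardware speedup on this one operation matters so much.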
A selection of areas Apple has improved in the A13 Bionic
Due to load balancing of the Apple-designed Machine Learning Controller, the machine learning models can be scheduled on the CPU, GPU, and Neural Engine depending on which would offer the best performance. The controller also does this while balancing the need to stay as efficient as possible, helping reduce the amount of power used.
Since the controller takes the decision of where to process machine learning models at any given moment out of developers’ hands, it also simplifies development.
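The kind of dispatch decision Apple describes can be sketched as a simple rule-based chooser. Everything below is invented for illustration: Apple has not published how its Machine Learning Controller weighs workloads, and these names, thresholds, and rules are hypothetical stand-ins for the idea of routing work to whichever unit balances performance and efficiency best.

```python
# Hypothetical sketch of controller-style dispatch: route a piece of work to
# the CPU, GPU, or Neural Engine based on its characteristics. All names,
# thresholds, and rules here are made up for illustration.

def pick_engine(op_type, batch_size, power_budget_mw):
    """Return which execution unit a (made-up) controller might schedule on."""
    if op_type == "matmul" and batch_size >= 8:
        return "neural_engine"      # heavy ML math: dedicated cores
    if op_type in ("render", "image_filter"):
        return "gpu"                # graphics-like work: Metal cores
    if power_budget_mw < 100:
        return "cpu_efficiency"     # tight power budget: efficiency cores
    return "cpu_performance"        # everything else: performance cores

print(pick_engine("matmul", 32, 500))  # neural_engine
print(pick_engine("render", 1, 500))   # gpu
print(pick_engine("add", 1, 50))       # cpu_efficiency
```

The appeal for developers is exactly what this sketch suggests: the caller hands over the work and a constraint, and never has to name a target processor.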
Reduced power and better iPhone battery life
As well as providing more processing power than the A12, the A13 Bionic also works to reduce the amount of energy required to perform those calculations in the first place. For this generation, that has helped Apple deliver multiple extra hours of battery life in the iPhone 11 Pro, rather than the typical improvement of around an hour.
Part of the saving comes from changes in how the chips are produced in the first place. Taking advantage of chip partner TSMC’s most recent commercial 7-nanometer process, described as an “advanced improved 2nd generation 7-nanometer transistor,” Apple has tailored each transistor for performance and power.
At the same time, the work has led to Apple squeezing 8.5 billion transistors onto the A13, up from 6.9 billion used in the A12.
Aside from the transistors themselves, and being more selective over what is used to perform calculations, Apple has also worked on improving the architecture.
The CPU, GPU, and Neural Engine are all more powerful, but power efficient
The use of hundreds of voltage domains on the chip gives Apple more control over what power is used, and when. By turning on sections only when they will be used for processing, and leaving unused areas unpowered, the chip brings the energy used in a calculation down to just what is required.
At an even smaller level, hundreds of thousands of smaller domains allow granular control over what gets power, ensuring only the minimum amount of logic in the chip is used for a given process.
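The effect of that per-domain gating can be shown with a toy model: energy is only spent on the domains that are switched on for a task, rather than the whole chip drawing power at once. The domain names and milliwatt figures below are invented for illustration and bear no relation to the A13’s real power figures.

```python
# Toy model of per-domain power gating: a task only "pays" for the voltage
# domains it actually switches on. All names and numbers are invented.

def power_draw(domains, active):
    """Sum the draw (mW) of only the domains a task has powered on."""
    return sum(mw for name, mw in domains.items() if name in active)

domains = {"perf_core_0": 900, "perf_core_1": 900,
           "eff_core_0": 150, "eff_core_1": 150,
           "gpu_cluster": 1200, "neural_engine": 600}

# A light background task powers a single efficiency core...
print(power_draw(domains, {"eff_core_0"}))  # 150
# ...instead of every domain drawing power at once.
print(power_draw(domains, set(domains)))    # 3900
```

Scale that idea down from six coarse blocks to hundreds of thousands of tiny domains and the payoff is the same: idle logic simply costs nothing.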
The work has resulted in vast power savings. The CPU’s two performance cores consume 30% less power, the four efficiency cores save 40%, the four GPU cores also save 40%, and the eight Neural Engine cores are 15% more power efficient.