By Mark Costlow, SWCP President

Adapted and updated from the SWCP Portal, January and February 2021

In a move that went largely unnoticed outside tech circles, Apple recently introduced a new range of laptops built around their own CPU chips, branded Apple Silicon. The current model is designated “M1”.

It’s all in the chips

A computer’s CPU is the very core of its being. The CPU (Central Processing Unit), which runs the Operating System, is the essential heart of any computer, often called its “brain”. Changing to a different, non-compatible CPU affects every part of the system. All of the other hardware must be modified, including memory, disks, graphics accelerators, and screens. Every piece of software must be adapted or rewritten from scratch, from low-level boot code that users never see, to the macOS Operating System, to every app that users run.

What Apple is doing is basically putting it all on a single integrated chip called, logically enough, a System on a Chip, or “SoC”. It’s an enormously complex and expensive undertaking. One must conclude that Apple is either crazy and looking for a way to dump billions of dollars, or they have a very good reason.

This is difficult and expensive enough to do for a personal computer, laptop, or smartphone, but what about the internet datacenters upon which they all heavily rely? That is another market, one where Apple Silicon chips could potentially make an even bigger splash; more on that below.

At first glance, it may seem to be all about costs. The CPU is one of the most expensive single parts of the laptop, so building them in-house will be cheaper than paying Intel for them. And there are follow-on savings from not having to interface with an outside company for such a critical component. They can create a much tighter and faster feedback loop between CPU design and overall computer design, allowing more iterations in each product cycle (which translates to faster innovation over time).

ARM vs. x86

Every CPU is built around an instruction set, the basic vocabulary of commands that all software is ultimately written in. Two architectures dominate the market today: “ARM”, found in most smartphones and tablets, and an older one called “x86”, still used in the majority of laptops and PCs. x86 chips have traditionally delivered more raw computing muscle, while ARM chips use far less juice. Further, since the new M1 chips are based on the same ARM architecture Apple already uses in their phones and tablets, they should eliminate a lot of duplicated effort between those product teams.
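If you are curious which architecture your own machine uses, a quick check is possible from almost any scripting language. Here is a minimal sketch in Python using only the standard library; the exact strings returned (for example “arm64” versus “x86_64”) vary by operating system, so treat the mapping below as an approximation.

```python
import platform

# Report the CPU architecture the running Python interpreter sees.
# On an Apple Silicon Mac this typically prints "arm64"; on an
# Intel-based Mac or most Windows/Linux PCs it prints "x86_64"
# (or "AMD64" on Windows).
arch = platform.machine()
print(f"This machine reports its CPU architecture as: {arch}")

if arch in ("arm64", "aarch64"):
    print("That's an ARM-based processor.")
elif arch in ("x86_64", "AMD64", "i386"):
    print("That's an x86-family processor.")
else:
    print("Unrecognized architecture string.")
```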

I would argue that cost savings is not the reason for this change. It’s a pleasant side-effect, but the real goal is a more tightly integrated computer system that Apple can control from top to bottom. It’s a logical next step in Steve Jobs’ grand unified theory of consumer products. People worried that Jobs’ passing in 2011 would leave Apple rudderless and that the company would stray from his ideals. This move shows that they are following through on his vision and doubling down on his core ideas.

To better explain why this move is an extension of Jobs’ plan, and why it may have influence far outside of Apple, we need to back up a bit to look at the early iPhone and iPad development.

Steve Jobs’ master plan

The first iPhone, introduced in 2007, used a Samsung ARM chip repurposed from TV set-top boxes. It didn’t take long for Apple to determine that no chip on the market could support the products they wanted to build. They needed powerful chips that consume as little electricity as possible, because battery life is such a big selling point for mobile devices.

When the first iPad was released in 2010, it came with news that Apple had designed their own CPU, the A4. They had licensed the ARM architecture and built their own CPU using that instruction set. The iPhone 4, introduced later that year, also used the new A4 chip. Since it still used the ARM architecture, it was backward-compatible with existing iPhone software.

Apple began adding new subsystems to their phones (and now tablets), such as the motion co-processor that tracks movement, the “secure enclave” used by the fingerprint scanner and payment features, and GPUs needed for ever-increasing graphics performance. Building each of these as a discrete subsystem that communicates with the CPU costs precious physical space and battery power. Integrating them, along with memory and storage, with the CPU into a System on a Chip is far more efficient in time, space, and power.

The typical argument against this approach is that this level of integration makes it hard to advance any one subsystem on its own: you can’t just swap a few parts on the assembly line to get new features. You have to commit to an entirely new SoC with the updated subsystems.

Apple overcomes these objections in two ways. First, they WANT to be in control of every aspect of the system, so this plays right into that ethos. Second, iPhones (and later iPads) have always been on a relentless update schedule. Every September we get a new iPhone with a new revision of the CPU and other improvements. They are iterating fast enough that advancements to any one subsystem will not have to wait long for the next product release. The A4 chip introduced in 2010 has been improved every year, and the 2020 version is called the A14.

For a decade, Apple has doggedly increased the power and capability of its phones and tablets, while simultaneously improving battery life. There are three ways to improve battery life:

  1. Better batteries
  2. Bigger batteries
  3. Lower power draw

Batteries continue to improve, but only incrementally. That leaves bigger batteries and lower power draw. SoCs help on both fronts: they preserve as much room as possible inside the phone case for the battery, and they extend runtime by sipping less power.
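The arithmetic behind that trade-off is simple but worth making concrete. Below is a small back-of-the-envelope sketch in Python; the capacity and power-draw numbers are invented for illustration and are not measurements of any real device.

```python
# Back-of-the-envelope battery life: runtime (hours) = capacity (Wh) / draw (W).
# All numbers below are illustrative assumptions, not real measurements.

capacity_wh = 40.0          # hypothetical battery capacity in watt-hours
draw_less_integrated_w = 10.0  # hypothetical average draw of a less-integrated design
draw_soc_w = 5.0               # hypothetical average draw of a tightly integrated SoC

print(f"At {draw_less_integrated_w:.0f} W: {capacity_wh / draw_less_integrated_w:.1f} hours")
print(f"At {draw_soc_w:.0f} W:  {capacity_wh / draw_soc_w:.1f} hours")

# Halving the power draw doubles the runtime -- the same gain you'd get
# from doubling the battery, without giving up any room in the case.
```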

Contrast this with the PC and laptop market. PCs are meant to be plugged into the wall. Laptops are often treated as scaled-down PCs: they have batteries and can roam untethered for a while, but soon enough they have to be plugged back in. That means you can get away with a more power-hungry CPU and less-integrated components. Heck, you can even put fans inside the laptop to keep that burning-hot CPU from melting the case.

Apple has struggled for years to advance the tech in their laptops, always striving for lighter and faster machines with longer battery life. But the power-hungry Intel chips at their core have hindered this progress. The tech press has reported that Apple has been frustrated by this situation for a few years, granted Intel several chances to improve, and finally gave up and decided to go it alone. That narrative may fit the facts, but I think they’ve been planning this split much longer, maybe as far back as 2012, after a couple of iterations of the A4, A5, and A6 chips.

Intel-based laptops generally post high benchmark numbers, but those numbers can be misleading. They are only attained while the chip runs at full speed, which it can’t do for sustained periods. When running very compute-intensive tasks like encoding a long video or full-screen 3D gaming, the fans engage to (noisily) keep the system cool enough to continue. After a little while the heat buildup is too much, and the system automatically scales back its compute speed to avoid physical damage (not to mention burned thighs).

Compare this to the new MacBook Air using the Apple Silicon M1 chip. The first thing to notice is that it does not ship with a fan, meaning passive cooling (i.e., a heat sink) is enough for sustained operation. The systems have only been out a few weeks, but so far all of the benchmarks bear this out. The highly integrated M1 SoC appears to perform as well as all but the most powerful (and expensive) Intel-based MacBook models.

All indications are that this is a huge change in compute efficiency, usually discussed in the somewhat fuzzy metric of “performance per watt”. The idea is that you can certainly buy another laptop that is as fast as the M1 MacBook Air, but it will run hotter and louder, consume more power, and empty its battery much sooner.
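To make that metric a little less fuzzy, here is a toy comparison in Python. The benchmark scores and wattages are invented purely to show how the calculation works; they are not taken from any real review.

```python
# Toy "performance per watt" comparison. Scores and wattages are
# invented for illustration only -- they are not real benchmark results.

laptops = {
    "Hypothetical Intel laptop": {"score": 5000, "avg_watts": 45.0},
    "Hypothetical M1-class laptop": {"score": 5000, "avg_watts": 15.0},
}

for name, spec in laptops.items():
    ppw = spec["score"] / spec["avg_watts"]
    print(f"{name}: {spec['score']} points at {spec['avg_watts']:.0f} W "
          f"-> {ppw:.0f} points per watt")

# Equal scores, but the lower-wattage machine does the same work on a
# third of the power: it runs cooler, quieter, and longer on battery.
```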

Why this is a sea change

Few companies ever make a product that truly disrupts a market segment or whole industry. Apple is one of the very few that has done this several times. Their disruptive products are never the first of their kind, and yet they somehow capture the imagination and dominate the other players in the space, at least for a time. In some cases they create entirely new markets that nobody else believed existed.

  • The iPod, combined with iTunes and the deals Jobs forged with record labels, revolutionized portable music players. Before, the labels hadn’t found a way to sell their music that consumers found palatable, so music piracy was destroying their business. iTunes gave people a way to buy digital music that was easy and legal, and the iPod’s promise of “1,000 songs in your pocket” made them want to do it.
  • The iPhone created the smartphone market. When the iPhone came out in 2007, I had owned what passed for a smartphone for 4 years, and I still didn’t think we would ever browse the web on our phones. The iPhone showed that web browsing was possible, and it could be pleasant, not tedious. They dominated the market completely for a while, and now they share it with Android-based phones. All other players are gone (or at least their smartphone divisions are).
  • In 2010 the iPad introduced tablet computing, which previously existed only in very niche markets. The iPad was an instant hit; millions were sold in the first year and every year since. As with smartphones, they now share that market with Android-based tablets.

So why do I think the new M1 machines represent a sea change? They don’t fundamentally do anything that other laptops don’t do. Right now, they just do those things a bit faster, quieter, cooler, and maybe more cheaply.

But the real significance is that they have demonstrated that these “low-power” CPUs, which we had considered useful only in small battery-powered devices, can in fact be full-fledged computing monsters, if the SoC they are part of is designed to fit the target user’s applications. For instance, Apple claims that Adobe Photoshop runs 50% faster on an M1 MacBook than on an Intel-based MacBook.

The M1 is a big brother to the A14 chip in the latest iPhones and iPads. It is scaled up to take advantage of the larger battery and greater heat dissipation available in a laptop form factor. For example, the A14 has 6 CPU cores, and the M1 has 8. The A14 has 4 GPU cores, and the M1 has 7 or 8 depending on the model. The M1 also supports more RAM (up to 16GB). But these are just matters of scale: the same thing, a little bigger. The M1 instantly leverages all the hard work that has been poured into making the A14 the speedy, low-power chip that it is.

Apple is not the first to produce laptops using ARM-based chips. However, the other offerings in this area have been unsatisfying and clunky. They haven’t achieved the tight integration that helps the M1 perform so well. The M1 also has another ace up its sleeve: emulation assistance.

A huge cost associated with changing the CPU used in an entire product line is the customers’ investment in software that only runs on the older system. That means the new chip must be able to emulate the old one to avoid a user revolt. In fact, it is exactly this kind of clunky emulation layer that has sunk some other ARM-based laptops trying to run software compiled for Intel chips.

Since Apple controls the M1 from top to bottom, they were able to add things to the basic ARM design which help it run Intel-based programs faster and more seamlessly than generic ARM emulators. The details of this get very technical so we will leave them for another venue.
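Without diving into those details, one small, user-visible corner of this is easy to check: macOS exposes a flag that tells a process whether it is running natively or under Apple’s Intel-to-ARM translation layer (Rosetta 2). The sketch below reads that flag from Python by shelling out to sysctl; it assumes a Mac running Big Sur or later, and simply reports “not translated” anywhere else.

```python
import subprocess

def running_under_translation() -> bool:
    """Return True if this process is being translated from x86 to ARM.

    macOS sets the sysctl key 'sysctl.proc_translated' to 1 for processes
    running under Rosetta 2 and 0 for native processes. The key does not
    exist on Intel Macs (or non-Macs), which we treat as "not translated".
    """
    try:
        out = subprocess.run(
            ["sysctl", "-n", "sysctl.proc_translated"],
            capture_output=True, text=True, check=True,
        )
        return out.stdout.strip() == "1"
    except (subprocess.CalledProcessError, OSError):
        return False

if __name__ == "__main__":
    if running_under_translation():
        print("Running as an Intel binary, translated on Apple Silicon.")
    else:
        print("Running natively (or not on an Apple Silicon Mac at all).")
```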

Still, for this “new” approach to affect more than Mac users, it would have to be adopted by Windows PC manufacturers, and that doesn’t seem likely to happen right away. An off-the-shelf ARM design cannot simply be slapped into a PC to run Windows. Apple didn’t produce this SoC overnight; they’ve been working on it for over a decade and have invested billions to build their own in-house chip design teams. That is not something the low-margin PC sellers (HP, Dell, Lenovo) are likely to have the cash or patience to execute without being confident of the result.

One scenario where this might happen is if x86 Windows can be run on M1 Mac laptops. It is not possible today, but there are people working on it now. If that comes to pass, those PC companies might start to see market pressure from Windows users opting to buy Mac hardware, which might push one or more of them to follow the same path.

But there’s another market where they could potentially make a bigger splash: the datacenter.

Revolution by cost reduction

Energy costs are a huge factor in datacenter operation. Every bit of electricity fed into a CPU to do a calculation is converted to heat, and datacenters must then spend more energy to shunt that heat away with air conditioning. Some operators have experimented with putting datacenters in naturally cold environments to lower operating costs. Microsoft even ran an experimental datacenter on the floor of the North Sea, cooled by the water’s constant low temperature.

A few massive companies buy a large share of the world’s server computers. Amazon, Google, and Microsoft are the big three cloud providers, and Intel supplies almost all of the CPUs they use. But at least two of them are already working on ARM-based SoCs not so different from what Apple has done with the M1. In fact, Amazon already has them in production: you can buy AWS services and specify that you want “Graviton2” CPUs instead of Intel.
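In practice, choosing Graviton2 on AWS mostly amounts to picking an ARM-based instance family (such as the m6g types) and an ARM-compatible machine image. The sketch below uses the boto3 library; the AMI ID and region are placeholders you would replace with your own, and it assumes your AWS credentials are already configured.

```python
import boto3

# Launch a single Graviton2 (ARM-based) EC2 instance.
# The AMI ID below is a placeholder -- substitute an arm64 image
# available in your region.
ec2 = boto3.client("ec2", region_name="us-west-2")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder arm64 AMI
    InstanceType="m6g.large",          # the "g" in the family name = Graviton
    MinCount=1,
    MaxCount=1,
)

instance_id = response["Instances"][0]["InstanceId"]
print(f"Launched Graviton2 instance: {instance_id}")
```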

Microsoft isn’t as far along, but they could arguably have a bigger impact. They are in a better position than any other company to make sure ARM-based servers run Microsoft Windows well, and of course they can change Windows itself to make it run better in that environment.

Apple doesn’t have as many datacenters as the other players, but they do have a large and growing cloud presence and are actively scaling up their datacenter holdings. One presumes they will leverage the M1 and its successors to run those datacenters at lower cost. If they can get the same computing power with only 20% of the electricity consumption, that will make a huge difference in datacenter economics.
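To see why that matters, here is a rough back-of-the-envelope estimate in Python. Every number in it (server count, wattage, electricity price, cooling overhead) is an assumption chosen only to show the shape of the calculation, not data from any real facility.

```python
# Rough datacenter power-cost sketch. All inputs are illustrative
# assumptions, not figures from any real facility.

servers = 10_000
hours_per_year = 24 * 365
price_per_kwh = 0.10          # assumed electricity price, USD per kWh
pue = 1.5                     # assumed overhead for cooling, power delivery, etc.

def annual_cost(watts_per_server: float) -> float:
    """Total yearly electricity cost, including cooling overhead."""
    kwh = servers * watts_per_server * hours_per_year / 1000
    return kwh * pue * price_per_kwh

cost_conventional = annual_cost(300.0)  # hypothetical conventional server draw
cost_soc = annual_cost(60.0)            # hypothetical SoC server at ~20% of the power

print(f"Conventional servers: ${cost_conventional:,.0f} per year")
print(f"SoC-based servers:    ${cost_soc:,.0f} per year")
print(f"Savings:              ${cost_conventional - cost_soc:,.0f} per year")
```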

Amazon and Microsoft might never sell these servers on the open market, but it seems clear that this trend could push Intel out of the datacenter, a market they have dominated almost completely for decades.

With Apple leading the way to put non-Intel CPUs on people’s desks and laps, Windows users are likely to start demanding the same cool-to-the-touch long-running machines from Dell, Lenovo, and HP. It’s not a stretch to think they could license CPU technology from Microsoft or Amazon to give these same features to their customers.

Keeping it close

Why will some people resist this change? Because compatibility is king. For decades, home and office computing has benefited from “PC-compatible” architectures: they all use the same type of CPU and a modular structure that lets you add and remove parts at will.

Here’s a short list of parts you can swap in and out of a standard PC but you CANNOT change in a new M1 Mac:

  • The CPU
  • RAM (main memory)
  • Disk drive
  • GPU (Graphics controller)
  • Network adaptor

All of these are integrated parts of the M1 SoC. Many people, including this writer, lament the fact that Apple computers are all but unrepairable. If something goes wrong, there are few practical repair options outside Apple itself. And Apple is not likely to “fix” your computer, just replace it, at possibly high cost to you and the environment.

On the other hand, the history of computing is a series of coalescing black boxes. As a specific technology matures, it is made physically smaller, and eventually subsumed within the black box of the “system” as a whole, no longer a distinct, replaceable, modifiable appendage. This has happened to Math Co-processors, I/O Ports, WiFi, Ethernet, Memory Controllers, and Graphics Controllers.

When this happens, innovation on that device slows to a crawl. The only people working to improve it are the ones now responsible for integrating it into the larger system. The consumer has no choices, and any improvements must wait for an entire system replacement. Generally this coincides with the natural slowing of innovation for that device. For example, nobody has consciously considered the Math Co-processor in their PC since 1995. The trade-off for this reduction in flexibility is increased speed and lower cost.

In a very few cases, there remains a market for separate devices. The most obvious example is the graphics processor. Several niche markets (gaming, cryptocurrency, machine learning) have fueled the continued development of high-end graphics cards that can be swapped in and out of a PC independently of the other system components. People who don’t need the extra capability use the included on-board graphics, while those who do have the flexibility to add it.

Apple’s M1 SoC includes all of these things in a closed system. You can’t add or change any of them. Once you buy the machine, its configuration is frozen. To add memory or disk space or get better graphics, you have to buy a whole new machine.

The truth is that this model works just fine for the majority of users. Even many of those who dislike the idea of it will admit that they don’t actually make any upgrades to their laptop over the course of its natural life. But it still irks the inner nerd to give up the ABILITY to upgrade it.

That is the trade-off that will play out in the marketplace over the next few years. Is the extra speed and efficiency of a highly integrated SoC worth giving up the freedom to add the latest graphics card every year?

Along with inflexibility comes the inability to repair. Together, these might push the other players to become more like Apple: their devices may also become proprietary black boxes that won’t work with anything outside their own ecosystems, and thus become more expensive for users.

Regarding the Internet of Things, I don’t think this will have much effect. IoT devices already use very low-power (and not very computationally powerful) chips, because that’s all they require. After all, they just need to be able to talk to WiFi and turn a few bulbs on and off – just enough to be useful to hackers.

It remains to be seen how all this will affect security. One hopes that ARM devices will not be subject to the same kinds of flaws that Intel has had exposed over the past couple of years, but no one knows yet. A flowering of different manufacturers brings the double-edged sword of a more diverse supply chain: consumers (in this case, the engineers who fill datacenters) will have to decide whether a given product is robust enough, or whether its maker is not yet mature. But by the same token, if those Intel security flaws had been worse, the present monoculture could have led to a spectacular meltdown.

Conclusion

The blow that Apple just dealt Intel may not be immediately fatal, but it could prove mortal in the long run. Apple is not the first to start the move away from Intel, but they are the most visible. They’ve never had a huge share of the home and office PC market, but they have an outsized mind-share. Their success with the M1 SoC will inspire copycats at home, in the office, and in the datacenter, which may add up to a thundering herd running away from Intel toward faster, cooler, cheaper alternatives.

Then again, Intel has been on the ropes before. Perhaps this will sharpen their focus and we’ll see amazing innovations from them in the next couple of years. Let’s wish them luck!