[ARTICLE] ...more ARM, more fun?

Seems that Microsoft is entering the ARM hardware scene too…

It’s interesting to notice that Microsoft is being very prudent here. Unlike Apple, which went all-in with ARM, Microsoft doesn’t seem to completely believe in a future where ARM is the only viable platform. Not for the moment, at least.
Nevertheless, it is a clear sign that the industry is moving in a direction where RISC, and ARM in particular, may play a key role.

Well, I think you misread the situation with Apple. They went ARM because they own the entire platform, not because they think ARM is/isn’t the next big processor platform. They lost faith in Intel. Apple owns the M1 line design and has optimised it to work well with legacy software - something that haunts Microsoft. I hope the chip they are using can handle the x86/AMD64 memory model - otherwise it will be lacklustre performance for anything running on the WOW platform they will undoubtedly ship to make the ARM platform support legacy Intel apps.
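
For what it’s worth, here’s a minimal sketch of what that memory-model gap costs, written with C++ atomics (my own illustration, not anything from Microsoft’s emulator): x86-64’s TSO ordering makes release/acquire essentially free, while on ARM the same ordering needs real barriers - roughly the tax an x86-on-ARM translation layer pays on memory accesses, unless the hardware offers a TSO mode the way Apple’s M1 does.

```cpp
// Sketch only: this is the kind of ordering legacy x86 code relies on implicitly.
#include <atomic>
#include <cstdio>
#include <thread>

std::atomic<int>  data{0};
std::atomic<bool> ready{false};

void producer() {
    data.store(42, std::memory_order_relaxed);
    // On x86-64 this release store compiles to a plain MOV (TSO already orders it);
    // on AArch64 it needs an STLR or an explicit barrier.
    ready.store(true, std::memory_order_release);
}

void consumer() {
    // Likewise a plain MOV on x86-64, but an LDAR (or load + barrier) on AArch64.
    while (!ready.load(std::memory_order_acquire)) {}
    std::printf("data = %d\n", data.load(std::memory_order_relaxed));  // prints 42
}

int main() {
    std::thread t1(producer), t2(consumer);
    t1.join();
    t2.join();
}
```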

Microsoft is a follower here. They tried and failed to make ARM work a number of times and this move is just “me too, I’m relevant!!”

1 Like

With all due respect, I didn’t. I’ve been using Apple hardware forever and I religiously follow everything about Apple. I don’t see how I misread it.
Apple doesn’t own the platform; it’s still ARM based and licensed. They control the production process and the chip design, which is different.
What I mean by all-in is that they had a plan to transition to ARM within a defined timeframe across the whole product line.
This shouldn’t come as a surprise, though.
It happened with the transition from Motorola 68k to PPC and again from PPC to Intel. And to use your wording, they didn’t “own” the platform at that time either. They simply embraced the change and moved forward as fast as possible. No regrets whatsoever. Microsoft hasn’t made any statement about a full transition to ARM-based processors, leaving room for speculation about their true intentions for the future.

2 Likes

Microsoft has always had ARM - they just didn’t market it as heavily as Apple did as the next new thing. A lot of discussion happened when there were more companies competing for the server/desktop market (the m68k/PPC/Alpha era).

There are the Surface Pro X and Surface Studio. Microsoft consistently builds their OSes for ARM processors…

There are a few companies designing ‘open’ ARM-based professional workstations/laptops with high-end graphics solutions. Apple has a large developer team, a broader hardware portfolio, a distribution network, and nicer marketing to push ARM-based products to their clients… :wink:

Apple also built a ton on top of ARM (and has been for over a decade), so it’s more than just ARM designs. This is evident from Apple’s M1 being faster than other ARM chips. I had read somewhere that even some people at ARM themselves were surprised. Apple also acquired PA Semi, adding to Apple’s knowledge of processors and silicon design.

1 Like

Well, I do strongly believe that the real “killer app” of the ARM architecture (i.e. the main reason the big players join) is its flexible licensing approach.

Personally, I think the big players are trying to kill RISC-V before it makes the big time. They use ARM to prop up the closed architectures so a truly open system doesn’t emerge.

I like ARM for its cheap simplicity. Not for heavy lifting of single-threaded workloads, however. CISC still rules that roost. I’d prefer a simpler CISC than x86_64 though.

My understanding is that they use only the instruction set, and not one of ARM’s CPU core designs. This means that, at the moment, they do indeed pay a license to ARM, but only for the architecture, not for a complete core. And later, if they keep modifying the core and drift further away from other ARM CPUs, at some point they may even stop doing that.

In that sense, they own their platform. They can decide if it will continue to be ARM or if they will do something else. They can decide to wait for ARM to introduce new instruction sets, or do it themselves as non-standard extensions.

It makes sense for Apple to work that way; it’s one of their selling points that by designing both the hardware and the software, they can outperform the Windows/PC world, which is a little more diverse in terms of hardware, and they can also throw away the annoying legacy parts of the system a bit quicker.

Seems a strange thing to do? The only loser in RISC-V is ARM itself, since no one will be paying them royalties then. Everyone else has no reason to have a “closed”/patented base instruction set. In the future, if RISC-V works out, surely we will see people designing and licensing CPU core implementations, instruction set extensions, etc. Having the base instruction set open doesn’t prevent that.

ARM is not simple at all. Its platform specifications are more complex than x86’s in some parts. There are at least three ARM instruction sets that are fully incompatible with each other.

2 Likes

That’s 100% correct: Apple licenses the instruction set, but it’s just a matter of defining the term “owning”.
My personal standpoint is that they don’t (completely) own the platform, because at least one link of the value chain is not entirely under Apple’s control. From a business strategy perspective this may lead to different implications, but it was not the key point of my post.
Suffice to say that Apple is now in a better position than it was at the time of PPC.
The point was that Microsoft’s take on ARM is not well defined, yet.
Although they have been building Windows for ARM for years now, it seems to be in a perpetual unfinished or beta state, and no claim has been made about the future Windows architecture.

Apple moved to ARM because it allowed them to increase their profit margins via vertical integration. They went ARM because they already had huge investment and expertise in it, and they had that investment and expertise because, back when they started building it up, RISC-V did not exist. The second reason Apple went with ARM is that they weren’t in much danger ISA-wise: software in the Apple ecosystem is deprecated fast, i.e. updated fast for new hardware/software, so they could simply do it.

Compare that to Microsoft, whose single biggest moat is the gigantic catalogue of applications and services built on top of Windows. If they break backwards compatibility, their entire raison d’être disappears. MS also doesn’t see any extra profit from moving to ARM. All the profit gets claimed by ARM the company and by OEMs who might pay less for off-the-shelf ARM SoCs, while MS gets to pay for maintaining the OS on another platform and sees no money because their applications don’t exist there. They tried this with Windows Mobile, Windows Phone, Windows 8, then Windows 10, and it has worked exactly never.

So, while Apple has plenty of motivation to go ARM, Microsoft has none. Well, none other than the me-too factor. So they are doing this the way anybody could expect them to: half-hearted and quarter-pocketed. This will be another dud, because off-the-shelf ARM SoCs are worse for personal computing than Intel is, not to mention that neither holds a candle to Apple.

Even beyond that, right now I see little promise for consumers in moving to Windows on ARM. It makes sense for macOS because there are no other options and you get good hardware for the money. With Windows you get worse hardware and worse software for pretty much the same money. Either wait it out, or go x86. I hope the future sucks less for non-Apple customers.

2 Likes

I would think Microsoft would want to keep their toes in the ARM waters so that, if ARM-based servers become more prevalent, they can still be used with Windows Server. My guess is that the consumer ARM things are more like practice runs for them, and their real interest is in the server field.

That makes sense. What ticks me off is that they aren’t putting their own chips in consumer machines. I understand it takes years to build up that kind of logistics, so it’s surprising they didn’t try to bootstrap the market by allowing anybody to install Windows on OEM boards. Instead they made an exclusivity agreement with Qualcomm, released bad machines that even MS probably knew were going to fail, and in general the image of ARM on Windows is not very nice. For all their good hits on the developer side (VSCode, WSL), they totally missed this boat.

There’s also a business model shift going on at Microsoft. They can’t sell new versions of Windows, because if they do, people won’t buy them and will keep running the old ones that work “good enough”. Which means less income, and having to maintain and support old versions of the OS forever.

So now they are distributing the updates for free. I think their main business now will be selling Windows licenses to computer manufacturers, but also their Office and Office 365 offering; in that area there is much less competition.

When exploring a new CPU architecture, it would make sense to do it first with a single machine and to work closely with the machine manufacturer to iron out the problems. I imagine cross-company teams were set up in one way or another, so that Qualcomm employees got access to Windows source code and/or Windows developers got access to Qualcomm hardware engineering teams for support.

They tried it earlier, with Windows NT4, which also didn’t really work out, but also with Windows CE, which did see some success: in PocketPC handheld computers, but also in the Sega Dreamcast and a few other products. However, that hasn’t been supported since 2013, and it seems the goal is/was to replace it with systems from the NT family (Windows 8, 10, 11, etc.).

The CISC/RISC/ARM et al. debate is really a decoder debate, because beyond the decoder front end of the CPU core they are almost all exclusively RISC.

So if Apple has developed a killer core, they can in theory map the decoder to a different instruction set very quickly and keep a lot of that core logic and performance.

As I understand it

As to Microsoft, I think they are looking at Autodesk, and that’s why there’s Office 365: they want a SaaS model, a revenue-forevermore model.

You’ll own nothing and be happy about it

Not really. The decoder, it turns out, is the most complex part of the CPU nowadays. Indeed, most CPUs have more registers internally than what’s exposed in the instruction set. Which means the front end does a lot more than just decoding instructions: it will select one of several ALUs, reorder instructions to use the CPU resources more efficiently, rename registers to use the internal register set to its full capacity, and often also run multiple threads sharing most CPU resources (“hyperthreading” in Intel CPUs, but most architectures have similar things now). If you add another “translation” layer at that level, you will pay a cost in performance, which is what explains most of the performance difference between CPU families.
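
To make the register part concrete, here’s a toy sketch of register renaming (my own heavily simplified illustration, not how any real front end is built): the few architectural registers named in the code get mapped onto a much larger physical register file, so back-to-back writes to the “same” register stop blocking each other.

```cpp
#include <array>
#include <cstdio>

constexpr int kArchRegs = 16;   // registers visible in the ISA (e.g. rax..r15)
constexpr int kPhysRegs = 180;  // physical registers in the core (assumed figure)

struct Renamer {
    std::array<int, kArchRegs> map{};  // architectural reg -> current physical reg
    int next_free = kArchRegs;         // naive allocator; real cores use a free list
                                       // and reclaim registers at retirement

    Renamer() { for (int i = 0; i < kArchRegs; ++i) map[i] = i; }

    // Each instruction that *writes* an architectural register gets a fresh
    // physical register; later readers of that arch reg are pointed at it.
    int rename_dest(int arch_reg) { return map[arch_reg] = next_free++ % kPhysRegs; }
    int rename_src(int arch_reg) const { return map[arch_reg]; }
};

int main() {
    Renamer r;
    // Two consecutive writes to arch register 0 ("rax") land in different
    // physical registers, so the second doesn't have to wait for the first.
    int p1 = r.rename_dest(0);
    int p2 = r.rename_dest(0);
    std::printf("rax -> p%d, then rax -> p%d (reads now see p%d)\n",
                p1, p2, r.rename_src(0));
}
```

A real renamer also has to free registers and roll back on mispredicted branches, of course; the point is just that, by the time instructions reach the execution units, the ISA-visible register names are little more than labels.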

Moreover, the RISC vs CISC debate was about much more than that. It has impacts on compiler design as well. It goes something like this:

  • In CISC, you allocate a lot of your CPU resources to the instruction decoder. You have complex instructions that can do many things at once. As a result, you need fewer instruction fetches and your code tends to be more compact. But for this to work, you also need much more complex compilers that can detect patterns in the source code and convert them to the most efficient instruction for the job (see the small example after this list).
  • In RISC, you have an instruction decoder that almost directly exposes the CPU internals, and it is very simple. As a result you can have a very short pipeline (often only 2 stages). Since the decoder is simple, you can now allocate the resources (die space, power consumption, etc.) to, for example, having a lot more CPU registers. Also, the compiler has fewer choices to make, so writing a compiler for these architectures is easier.
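
To illustrate, here’s roughly what the same one-liner looks like on each side (the assembly in the comments is hand-written from memory; actual compiler output will vary with flags and versions):

```cpp
// CISC vs RISC in miniature: x86-64 can fold the read-modify-write into one
// instruction, while a classic RISC (AArch64 shown) spells out load/add/store.
void bump(long& counter) {
    counter += 1;
    //   x86-64 (CISC):              AArch64 (RISC):
    //     add qword ptr [rdi], 1      ldr x1, [x0]
    //     ret                         add x1, x1, #1
    //                                 str x1, [x0]
    //                                 ret
}
```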

These days, CPU die space and power consumption are less of a constraint, so we often get a “why not both?” approach. We can have a lot of registers, a mostly orthogonal instruction set (meaning the registers exposed in the instruction set are not specialized registers usable in only a few instructions), and we also get all the advanced tricks that a CISC CPU can do, like complex specialized instructions, as well as out-of-order execution, multithreading, etc. So the CISC vs RISC debate isn’t really applicable anymore. In particular, ARM was initially rather a RISC machine, but over the years it added more and more instructions, so the instruction set isn’t really “reduced” in any way.

So in the modern world:

  • From the RISC side, we retain the orthogonal instruction set (all registers can be used in all instructions), the large number of registers, and the reasonably short pipeline (but usually a bit more than two stages)
  • From the CISC side, we retain specialized instructions (vectorized ones, for example), and the idea of performing complex operations at the instruction decoding and scheduling steps

We can afford this compromise because die sizes are ridiculously large now; the CPU manufacturers don’t come close to filling the die with core logic, and the free space is dedicated to several megabytes of L2 and L3 cache memory (more memory than you had in a whole computer 20 years ago).

6 Likes

Last I’d read, 50% of the performance was beyond the decoder, but that was some time ago.