“Yeah, but have you seen the Amiga version?”

This part-question, part-taunt characterised many discussions of the latest game releases in the playground as I was growing up. In the UK, home computers rather than videogame consoles dominated. The kids with 8-bit systems like the ZX Spectrum, Amstrad CPC 464 and Commodore 64 were comprehensively outgunned on memory, processor speeds, sound, colours-on-screen and other metrics by those lucky enough to have one of the brand new 16-bit systems like the Commodore Amiga or Atari ST.

The Amiga version would always look, sound and play better…

The rise of x86

In the ‘70s and ‘80s, a lot of the excitement around computing focussed on the hardware itself. Self-confessed geeks knew the processors at the heart of each system: the MOS 6502/6510 in the Apple II, BBC Micro and C64; the Zilog Z80 in the ZX Spectrum and Amstrad CPC 464; and the mighty 16-bit Motorola 68000 in the Apple Macintosh, Commodore Amiga and Atari ST. We knew what additional hardware made each system unique – the memory, special graphics hardware or sound hardware. Why did the C64 have such distinctive sound? – the SID chip. Why could the Amiga move so many sprites around the screen? – the Agnus co-processor.

But incompatibility was the problem – software for one system wouldn’t run on another. This zoo of different hardware couldn’t last.

The rise to ubiquity of the IBM PC compatibles, powered by ever faster and more complex variants of Intel’s 8086 processor (or compatible processors from other manufacturers), reduced the importance of hardware. By the mid ‘90s, IBM PC compatibles running Microsoft Windows on x86 hardware accounted for the overwhelming majority of personal computers in use. With the release of the first Intel-based Macs in 2006, and the near-ubiquity of x86 server hardware powering the internet, it seemed like the war was over. General purpose computing would use x86 hardware, and innovation would happen in software and via the then-current “web 2.0” phenomenon.

For business, coalescing around a single platform de-risked capital investment in IT hardware. Compatible hardware could be purchased from a wide range of competing vendors. Major enterprise software systems could all run on x86 servers and be accessed from x86 desktops. If a decision were made to switch software systems in future, the new system would still run on the existing hardware estate. It is not over-simplifying too much to say that enterprise hardware became a matter of refreshing every few years to keep up with the Joneses on the megahertz front.

But hardware hadn’t become boring. Far from it: development had simply shifted to other areas.

Developing in parallel

Gamers remained sensitive to what hardware could deliver. Questions about what a game looked like on a particular computer gave way to what it looked like when played on a particular graphics card. Who cares about central processing units (CPUs) when you have graphics processing units (GPUs) that are superb at the very specific tasks associated with 3D graphics? From the mid-‘90s onward, this led to innovation in new massively-parallel processors. These GPUs first found use doing the matrix calculations for 3D vector graphics, but were later repurposed for other parallelisable computing tasks, including training neural networks and crypto mining.
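To make that concrete, here is a minimal sketch (in NumPy, running on an ordinary CPU and used purely for illustration) of the kind of arithmetic involved: rotating a large set of 3D points is a single matrix multiplication, and because every point can be transformed independently, the work maps naturally onto the thousands of parallel cores in a GPU.

```python
import numpy as np

# Rotation of 45 degrees about the z-axis, expressed as a 3x3 matrix.
theta = np.pi / 4
rotation = np.array([
    [np.cos(theta), -np.sin(theta), 0.0],
    [np.sin(theta),  np.cos(theta), 0.0],
    [0.0,            0.0,           1.0],
])

# A million 3D vertices (random here, standing in for a game scene).
vertices = np.random.rand(1_000_000, 3)

# Transforming every vertex is one matrix multiplication. Each vertex is
# independent of the others, which is exactly the kind of massively
# parallel workload a GPU's thousands of cores are built for.
transformed = vertices @ rotation.T
```

A GPU performs this sort of computation for millions of vertices many times per second, which is why the same silicon later proved so useful for neural networks and other matrix-heavy workloads.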

These GPUs made the same power/heat trade-off as traditional CPUs: faster performance came from higher clock speeds, and higher clock speeds generated more heat. Ever more elaborate cooling solutions became de rigueur, with heat sinks, fans and water-cooling systems (all tastelessly glowing with RGB lighting in the most expensive gaming rigs) keeping cutting-edge systems within operational tolerances.
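The reason heat rises so quickly is a general rule of thumb from chip design rather than anything vendor-specific: the dynamic power drawn by CMOS logic scales roughly with clock frequency and with the square of supply voltage (and higher clocks usually demand higher voltages), so pushing the clock up pushes heat output up faster than performance:

$$P_{\text{dynamic}} \approx \alpha \, C \, V^{2} \, f$$

where α is the switching activity, C the switched capacitance, V the supply voltage and f the clock frequency.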

They could have called it!

In the mobile computing space, the meaning of the word ‘phone’ started to change. Even in the late ‘90s, a ‘phone’ meant a fixed connection to most people, and was distinguished from cellphones (or ‘mobiles’ in the UK). Cellphones then became smartphones as they adopted functions from the nascent ‘PDAs’ (personal digital assistants) of the day. By 2010, when absolutely everyone had an iPhone or an Android-powered handset, smartphones were simply ‘phones’. A new generation would speak disdainfully of ‘landlines’ as anachronisms from their parents’ generation.

In order to power these supercomputers in our pockets, a very different family of CPUs had to be developed – ones that eked out the most computation from the least energy. Instead of raw megahertz or gigahertz, these new processors would be measured in ‘performance-per-watt’. On this measure the ‘Complex Instruction Set Computing’ (CISC) architecture of the x86 platform was easily leapfrogged by ‘Reduced Instruction Set Computing’ (RISC) processors.
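The metric itself is simple arithmetic: the work a chip completes each second, divided by the power it draws while doing it. A minimal sketch follows, with invented figures used purely for illustration (they are assumptions, not benchmark results).

```python
# Illustrative figures only: these are invented, not benchmark results.
desktop_cpu = {"ops_per_second": 200e9, "watts": 95.0}  # hypothetical performance-first part
mobile_cpu = {"ops_per_second": 80e9, "watts": 5.0}     # hypothetical efficiency-first part

def performance_per_watt(chip: dict) -> float:
    """Operations completed per second for each watt of power drawn."""
    return chip["ops_per_second"] / chip["watts"]

print(f"Desktop: {performance_per_watt(desktop_cpu):.2e} ops per second per watt")
print(f"Mobile:  {performance_per_watt(mobile_cpu):.2e} ops per second per watt")
```

On these made-up numbers the desktop part is faster in absolute terms, but the mobile part completes several times more work for every joule consumed, which is the trade-off that matters when the power source is a battery.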

Back in the ‘80s, British computer firm Acorn had developed its own RISC processor, releasing it first as a second-processor upgrade for the company’s BBC Micro computers. The ‘Acorn RISC Machine’ or ‘ARM’ CPUs were then used in Acorn’s ‘Archimedes’ range of computers. Such was the success of this new CPU that the company’s name evolved through ‘Advanced RISC Machines’ to simply ‘arm’, and its business model switched from selling hardware to licensing its CPU designs. Those designs would truly come into their own in the smartphone era: CPUs based on them are now in use in more computing devices than any other architecture.

In recent months, even the received wisdom that CPUs built for performance-per-watt must be slower than those built for absolute speed (with ever more cooling bolted on) has been overturned. The most recent mobile CPUs in the most popular phones now outpace the CPUs in many laptops in speed benchmarks.

Jack of all trades…

The continued rise of games and phones pointed towards the new hardware that we now see eroding the old x86 supremacy. The very general-purpose flexibility that drove x86’s rise to dominance is proving to be its Achilles’ heel.

New computing tasks, like training and running AI systems, can be done far better on specialist hardware than on general-purpose CPUs. Even the era of repurposing GPUs for that task is over: companies like the UK-based tech unicorn Graphcore have custom-designed ‘Intelligence Processing Units’ (IPUs) that achieve astonishing results, and Google has deployed its own ‘Tensor Processing Unit’ (TPU) designs for its TensorFlow-based AI offerings. Even in the mobile space, large areas of the die in the latest system-on-a-chip (SoC) designs are given over to ‘neural engines’ and the like in order to accelerate on-device AI.

Environmental concerns mean that the efficiency of computing is ever more important. Even in the datacentre, performance-per-watt is now a critical metric. The default ‘datacentre equals x86’ assumption is being challenged by more efficient alternatives, with arm-in-the-datacentre offerings growing rapidly as ESG becomes a higher priority for more businesses.

And the advantages of specialist hardware don’t end there. For the most efficient results, application-specific integrated circuits (ASICs) are the direction of travel. Custom silicon means that power and space on the die are not given over to unused functions. A custom solution can also be more secure, both by design and because attacks built to exploit weaknesses in the most popular platforms will generally not work on ASIC hardware. In the internet-of-things / ‘Industry 4.0’ era, these secure, specialised and ultra-low-power solutions are finding a range of new niches, and driving demand for services from custom silicon designers and the silicon fabs that can produce the packaged chips.

Choose wisely

This hardware renaissance creates a myriad of new opportunities, but also new risks. Enterprise IT managers who have been used to simply procuring the cheapest or best-value x86 offerings across the estate are suddenly facing new choices. Previously, a future change of direction in the software stack carried little hardware risk, because the new software could be expected to run on the same underlying (x86) hardware. That changes if you buy specialist hardware to run a particular software stack, and a later software choice then requires an entirely different hardware platform.

At the device level, choosing between products that support different communications protocols, security standards and software control schemes adds further complexity.

Get all of this right, and you have a new, massively more performant, massively more efficient and secure estate that will help to accelerate business success. Get it wrong and there will be wasted investment, a nightmare of integration issues and a business that can’t keep up with the competition.

Next steps

We’ll be examining many different aspects of this hardware renaissance in this series of articles, leading up to the DLA Piper European Technology Summit on 5th October. Look out for more on how the hardware renaissance affects things from an intellectual property, disputes, employment and commercial perspective in the coming months on Technology’s Legal Edge.

We’ll have a panel session discussing the implications of this exciting time in hardware at the European Technology Summit. To find out more and register to attend the summit, visit the event website.

If you’d like to discuss any of the issues discussed in this article, get in touch with Gareth Stokes or your usual DLA Piper contact.