Synaptic fantastic

The only example of an information processing system capable of human-level general intelligence that we have (at present) is the human brain itself.

This ‘wetware’ contains about 80 to 100 billion neurons and roughly 100 trillion synapses (around 1,000 synapses per neuron). With each synapse being a ‘trainable’ link within the brain, synapses are the nearest equivalent to the trainable parameters (weights and biases) within our current state-of-the-art neural-network-based AIs. Those 100 trillion trainable parameters dwarf the mere hundreds of billions of parameters within today’s state-of-the-art natural language processing models: our brains have over 500 times as many parameters as a large natural language model like GPT-3, with its 175 billion.
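
As a quick sanity check on those ratios, the arithmetic is easy to reproduce – a minimal sketch in Python, using the figures cited above (all order-of-magnitude estimates):

```python
# Back-of-the-envelope comparison of the brain's 'parameters' vs GPT-3.
# All figures are the order-of-magnitude estimates cited in the text.
brain_neurons = 90e9      # ~80-100 billion neurons
brain_synapses = 100e12   # ~100 trillion synapses ('trainable' links)
gpt3_params = 175e9       # GPT-3's published parameter count

print(f"synapses per neuron: ~{brain_synapses / brain_neurons:,.0f}")
print(f"brain vs GPT-3: ~{brain_synapses / gpt3_params:.0f}x the parameters")
```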

Even more remarkably, our brains achieve all of this within less than 1.5 kg of material, in a volume of around 1,250 cm³ – roughly comparable to the weight and volume of a 16-inch professional-grade laptop. In power consumption terms the brain is remarkably efficient too, using only about 20 watts at its peak – a tiny fraction of the 100 watts that a ‘pro’ laptop might draw.

Digital overhead

The next generation of AI models is being designed to match the number of trainable parameters in a human brain. OpenAI’s next-generation natural language model, GPT-4, has been widely (though unofficially) rumoured to have around 100 trillion trainable parameters. On the hardware side, Graphcore has announced the ‘Good Computer’, named after British computer pioneer I. J. ‘Jack’ Good: an IPU-based system designed to support training and running models in the >100 trillion parameter range, up to around 500 trillion parameters. In its publicity materials announcing the Good Computer, Graphcore explicitly compared the machine’s parametric capacity to that of the human brain.

Working through a few back-of-an-envelope calculations based on Graphcore’s statement that the Good Computer will contain 8,192 next-generation IPUs, and a diagram showing 582 racks’ worth of equipment (which appears to be 8 × 64 POD128 racks, 64 storage racks and 6 networking racks), the total machine volume comes out at around 834 m³, with a power draw likely to be in the megawatt range. So even if an AI running on that machine might be parametrically equivalent to the brain, biology still beats computer science by many orders of magnitude on both size and performance-per-watt.
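
For those who want to check the envelope, here is a sketch of the calculation in Python – the rack dimensions are my assumption (a typical 42U-class data-centre rack), since Graphcore has not published the machine’s exact dimensions:

```python
# Rough volume estimate for the Good Computer, assuming typical
# data-centre rack dimensions; Graphcore's actual racks may differ.
pod128_racks = 8 * 64    # 8 x 64 POD128 racks, per the diagram
storage_racks = 64
network_racks = 6
total_racks = pod128_racks + storage_racks + network_racks   # 582

rack_w, rack_d, rack_h = 0.6, 1.07, 2.23    # metres, typical 42U-class rack
rack_volume = rack_w * rack_d * rack_h      # ~1.43 m^3 per rack

print(f"{total_racks} racks -> ~{total_racks * rack_volume:.0f} m^3 in total")
```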

There are many reasons why our electronic brains are not (yet?) able to match biology for size and power efficiency. At least one contributing factor is that we use large numbers of structures in silicon (in the form of thousands of transistors) to create each trainable parameter in our AI models. Where a single synapse represents a trainable parameter in a brain, the silicon equivalent is massively more complex. A single trainable parameter in silicon requires memory hardware to store the numeric value of the parameter (the weight of a link between neurons, or the bias associated with a neuron), plus all the logic circuits to perform the multiplication and addition operations that combine that weight or bias with the data being processed by the AI model. If our AI uses a (relatively low precision) 16-bit number to store its weights and biases, even a plain 16-bit add operation requires around 720 transistors. Add the order of magnitude more transistors needed to manage the memory caches involved in looping through addition operations to carry out multiplications, and a digital AI ends up requiring thousands of silicon features per parameter.
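
Putting those figures together gives a feel for the overhead per parameter – a back-of-the-envelope sketch, where the 6-transistor SRAM cell and the order-of-magnitude cache overhead are the assumptions described above:

```python
# Very rough transistor budget for one trainable parameter in silicon,
# using the order-of-magnitude figures from the paragraph above.
adder_transistors = 720            # ~720 transistors for a 16-bit adder
cache_overhead = 10                # ~an order of magnitude for caches/control
weight_bits = 16                   # the 16-bit weight or bias itself...
sram_transistors = weight_bits * 6 # ...stored in 6-transistor SRAM cells

per_parameter = adder_transistors * cache_overhead + sram_transistors
print(f"~{per_parameter:,} transistors per parameter, vs 1 synapse")  # ~7,296
```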

This enormous additional complexity does buy us something: at any stage we can ‘look inside’ the machine, read out every single value within the entire neural network, and interrogate exactly how different connections are influencing the system’s outputs. Even the best EEGs cannot provide an equivalent whole-brain, synapse-level view of the activations within a brain, so we have a clearer window into an AI than we do into our own thought processes.
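
This kind of introspection is trivial in any mainstream deep-learning framework. A minimal illustration using PyTorch – the toy network here is purely for demonstration:

```python
import torch

# A toy network: even here, every trainable parameter is exposed.
model = torch.nn.Sequential(
    torch.nn.Linear(4, 8),
    torch.nn.ReLU(),
    torch.nn.Linear(8, 2),
)

# Read out every weight and bias in the entire network, exactly.
for name, param in model.named_parameters():
    print(name, tuple(param.shape), param.detach().flatten()[:3])
```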

Another feature of traditional silicon-substrate digital computing hardware is a system-wide clock cycle. While some specialist AI compute modules propagate computation through the processor over several clock cycles, the system still keeps time to the gigahertz drumbeat of a core clock. Synapses in the human brain do not fire according to a single core clock: activations propagate from one synapse to the next as spikes occur, with no fixed cycle across the system. Where each clock cycle (or multiple thereof) of a computer represents a discrete system-wide state, there is no equivalent single organ-wide time interval for our brains.
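
The textbook way to capture this clock-free behaviour in software is a ‘leaky integrate-and-fire’ neuron, whose state only changes when a spike arrives. A minimal sketch (the weight, decay constant and threshold are illustrative values, not drawn from any particular system):

```python
import math

def lif_neuron(spike_times, weight=0.6, tau=20.0, threshold=1.0):
    """Leaky integrate-and-fire: state updates only when a spike arrives."""
    v, last_t, fired = 0.0, 0.0, []
    for t in spike_times:                   # event-driven: no clock ticks
        v *= math.exp(-(t - last_t) / tau)  # membrane potential leaks away
        v += weight                         # incoming spike adds charge
        if v >= threshold:                  # threshold crossed: fire a spike
            fired.append(t)
            v = 0.0                         # reset after firing
        last_t = t
    return fired

print(lif_neuron([1.0, 2.0, 3.0, 40.0, 41.0]))   # fires at 2.0 and 41.0
```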

A non-digital AI future

Our current paths toward building AIs that are parametrically equivalent to the human brain are going to result in enormous building-scale machines requiring a small power station to run. At one time we might have simply looked to Moore’s law to shrink transistors and improve performance per watt to solve the size and power challenges for us, but most commentators now agree that we’re approaching the lower limits of how small features on silicon can be. If the silicon-based children of our intellect are to match the size and performance-per-watt characteristics of the brains that dreamt them up, then radically different architectural approaches may be needed.

One path under active exploration is analogue (or analog for my North American friends) computing. Instead of using digital circuitry to precisely add numbers together, analogue circuits use far simpler structures to ‘add’ by accumulating charges, or to ‘multiply’ by amplifying them. Using these techniques, the thousands of transistors needed to implement a single trainable parameter in a fully digital AI can be reduced to a circuit with a handful of transistors. This, in turn, has the potential to make circuits orders of magnitude smaller and more power efficient – and might get our brain-equivalent AIs from building-sized down to room-sized or smaller.
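
A toy simulation makes the trade-off visible: the analogue multiply-accumulate below is modelled as the exact result plus per-element noise (the ~2% noise level is an illustrative assumption, not a measured figure for any real device):

```python
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(size=8)   # the trained parameters
inputs = rng.normal(size=8)    # the data being processed

# Digital MAC: exact multiply-accumulate, at a cost of thousands of
# transistors per parameter.
digital = np.dot(weights, inputs)

# Analogue MAC: each multiply is an amplified charge and each add an
# accumulated one -- a handful of transistors, but inherently noisy.
noise = rng.normal(scale=0.02, size=8)       # illustrative ~2% noise
analogue = np.sum(weights * inputs * (1 + noise))

print(f"digital = {digital:.4f}, analogue = {analogue:.4f}")
```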

Other analog(ue) approaches involve moving from computing with electrical signals to computing with light. Structures created using lasers and microscopic interferometers allow the equivalent of mathematical operations to be carried out in the analogue domain, all with lower power and a much lower level of waste heat than silicon. Not having to exhaust so much heat potentially allows for deep 3D photonic devices to be constructed where a silicon equivalent would present serious heat-driven engineering challenges. While still early in the development cycle compared to the decades-old maturity of silicon fabbing, photonic computing may be a path from room-sized to much smaller and computationally denser machines capable of running AI.

More interestingly, we might even speculate that moving from machines with a fixed clock cycle to machines based on the same cascading spikes of activity seen across our own synapses could be part of the solution to developing a form of machine consciousness. If our own self-awareness is an artefact of a continuous wave of information processing washing back and forth through the neurons of our brain, then mimicking this clock-free approach to neuron activation would presumably be a prerequisite for a similar self-awareness in our AIs.

At odds with regulation

Emerging regulatory standards for AI strongly emphasise explainability and transparency as key requirements for AI systems. The EU’s draft AI Act is among the most advanced pieces of AI-focussed legislation, with the fourth compromise text (Oct ’22) in circulation at the time of writing. While the number of edits in the text indicates that many areas of the legislation are still in flux, among the more settled areas are the requirements around technical documentation (Art. 11), record keeping (Art. 12) and transparency (Art. 13).

Where an AI is considered ‘high risk’ – whether because of its use case or because it forms part of a safety system – the Act sets out a clear set of requirements to properly document the technical aspects of the AI system, with obligations to maintain, amongst other things:

  • a description of the hardware on which the AI system is intended to run (Art. 11 and para 1 (e) of Annex IV);
  • a detailed description of the elements of the AI system and of the process for its development (Art. 11 and para 2 of Annex IV);
  • the design specifications of the system, namely the general logic of the AI system and of the algorithms (Art. 11 and para 2 (b) of Annex IV);
  • details of key design choices including the rationale and assumptions made (also Art. 11 and para 2 (b) of Annex IV); and
  • a description of the system architecture (Art. 11 and para 2 (c) of Annex IV).

There are separate obligations requiring that high-risk AI systems be designed and developed in such a way as to ensure that their operation is sufficiently transparent (Art. 13).

How might these various requirements impact a potential shift in AI research and development away from digital systems and into the analogue domain? One likely consequence of such a move is that it becomes harder to read out, with high precision, the exact values within the system at any given point. The combination of input data with analogue representations of the weights and biases is difficult to quantify without adding a lot of extra hardware to enable the quantification (and inevitable quantisation) of the analogue values – hardware which potentially erodes the very size and power benefits being sought. Further, if the system is designed to operate without a single clock pulse controlling it, exactly what is meant by a single point-in-time ‘state’ of the machine becomes hard to define. An analogue AI operating on that basis would have at least these qualities in common with a brain: we would have to resort to additional external scanning and measurement equipment to peer into it and divine its inner workings.
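
To make the read-out problem concrete: any attempt to observe an analogue weight must pass it through an analogue-to-digital converter, and some precision is lost in the process. A minimal sketch (the 8-bit depth and voltage range are illustrative assumptions):

```python
import numpy as np

def adc_readout(values, bits=8, v_range=(-1.0, 1.0)):
    """Observe analogue values through an ADC: quantisation is unavoidable."""
    lo, hi = v_range
    step = (hi - lo) / (2 ** bits - 1)
    return np.round((np.clip(values, lo, hi) - lo) / step) * step + lo

true_weights = np.array([0.12345, -0.98765, 0.5])  # the 'actual' values
print(adc_readout(true_weights))   # what the measurement hardware reports
```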

Thinking about the regulatory requirements for transparency and explainability, the challenge of a move from a fully knowable (albeit inefficient) digital architecture to a more efficient-but-mysterious analogue architecture becomes clear. While the requirements for the technical aspects of an AI to be properly documented, and for its operation to be transparent, do not necessarily preclude a move toward smaller and more power-efficient AIs using analogue methods, meeting those requirements with an inherently less ‘knowable’ system architecture creates additional challenges. Novel methods of interrogating and interpreting the operation of such systems will be required if they are to meet the high bar set by the regulators.

Nor is this simply an EU issue. This section has focussed on the EU’s draft AI Act only because that draft legislation is in a more advanced form than emerging legislation in other jurisdictions. White papers and policy statements from many other governments outside the EU, as well as the recent US ‘Blueprint for an AI Bill of Rights’, all cite similar concerns about the transparency and explainability of AI systems. Anyone looking to explore the benefits of analogue AI elsewhere is therefore likely to face a similar challenge as legislation starts to bite wherever they happen to be based.

Digital to Analogue Conversion

AI systems in the 100 trillion parameter range – with trainable elements on a scale equivalent to the human brain – will be with us in the very near future. However, we don’t need to wait for brain-scale digital computers to emerge before analogue AI can yield real-world benefits. Research into analogue computing solutions that bring size and performance-per-watt benefits to AI is already well under way.

Silicon-based analogue AI hardware is already available from Mythic AI, whose Analog Matrix Processor (AMP) is claimed to significantly outperform equivalent all-digital solutions. Mythic AI markets the AMP as bringing a new class of capability to lower-powered ‘edge’ devices, enabling on-device AI in areas where it would not otherwise be practicable.

In the photonic computing space, Lightmatter offers its Envise chip, a photonic computing solution for AI applications. Because a single photonic device can compute simultaneously with different frequencies of light, these processors have the potential to significantly outperform electronic silicon machines.

These different analogue technologies are already unlocking real-world benefits for AI researchers and product developers alike.

Next Steps

You can find more views from the DLA Piper team on the topics of hardware, AI systems and the related legal issues on our blog, Technology’s Legal Edge.

If your organisation is deploying AI solutions, you can undertake a free maturity risk assessment using our AI Scorebox tool.

If you’d like to discuss any of the issues discussed in this article, get in touch with Gareth Stokes or your usual DLA Piper contact.