Days of future past…
There are movies I enjoyed in childhood that strongly date themselves, presenting visions of a possible future on dates that are now in the past. The ultimate example is Stanley Kubrick’s ‘2001: A Space Odyssey’, depicting the arrival of commercial passenger space flight and sentient machines 21 years ago. Other examples include the wonderfully realised dystopian vision of Ridley Scott’s ‘Blade Runner’ (set in Los Angeles in November 2019), the flying DeLorean and ‘Mr. Fusion’ generator of ‘Back to the Future: Part II’ (taking us “forward” to the Hill Valley of 2015), and the war with the machines in ‘The Terminator’ and ‘Terminator 2: Judgment Day’ (in which Skynet was brought online and triggered a nuclear war in 1997).
The technologies shown in these sci-fi touchstones might not have been realised by the advertised dates, but we’re getting very close – and a number of developments suggest that 2024 might be the year that reality catches up with these sci-fi futures.
Coming soon to a reality near you…
If we look at the future technologies around which the plots of these movies are constructed, a number are within touching distance. There is a view among futurists that we tend to overestimate the progress that will be made within two years, and underestimate the progress that will be made in ten. Falling straight into that trap, the following developments are on the horizon and might be with us within a mere 24 months…
Fusion power: Whether it avoids the need for a lightning strike to deliver the 1.21 GW required for time travel, or provides the propulsion to visit a monolith in orbit around Jupiter, fusion is the preferred sci-fi power source. In today’s carbon-conscious world, fusion offers the promise of a safe, carbon-neutral and practically limitless supply of electricity. So far, fusion reactors have yet to produce more power than is put in, meaning they are not truly generating power; reaching and passing break-even is the goal. By far the biggest fusion power project, ITER (the multi-national tokamak being built in France), is targeting December 2025 for first plasma, and predicts it will eventually achieve a 10x yield on power input (or Q=10 in industry terms). A number of fusion start-ups are racing to beat ITER past break-even. In March 2022, British fusion start-up Tokamak Energy achieved a milestone of a 100 million degrees Celsius plasma in its ST40 experimental reactor, having also sustained plasma for more than 24 hours in an earlier reactor. This paves the way for upgrades to ST40 that could potentially see the reactor running for an extended period and beating break-even before the end of 2024…
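The Q figure mentioned above is simply the ratio of fusion power produced to heating power supplied, so break-even is Q=1 and ITER’s target is Q=10. A minimal sketch of that arithmetic, using ITER’s publicly stated design figures of 500 MW of fusion power from 50 MW of input heating power:

```python
def fusion_gain(p_fusion_mw: float, p_input_mw: float) -> float:
    """Fusion energy gain factor Q: fusion power out per unit of heating power in.

    Q < 1 means the reactor consumes more power than it produces;
    Q = 1 is break-even; ITER's design target is Q = 10.
    """
    return p_fusion_mw / p_input_mw

# ITER's design target: 500 MW of fusion power from 50 MW of heating input.
print(fusion_gain(500, 50))  # 10.0, i.e. Q=10, well past break-even
```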
Commercial space: Pan Am might not be trading any more, but the idea of taking a commercial flight to a space station (all to the tune of ‘The Blue Danube’ waltz) is an enduring image. Some might argue that the era of commercial space started with paid multi-millionaire tourist flights aboard government-operated craft, or more recently SpaceX’s Falcon 9 / Crew Dragon. Others might cite the sub-orbital exploits of certain high-profile billionaires which gained widespread publicity in 2021. However, the promise of dramatically lower costs per kilogram to orbit that will come with fully reusable spacecraft is yet to be realised. With SpaceX expected to carry out the first orbital flights of its ‘Starship / Super Heavy’ two-stage fully and rapidly reusable rocket later this year, that system may be crew-qualified and even in use as part of humanity’s return to the Moon by the end of 2024…
Sentient machines: Given how often sentient machines are presented as turning on their creators, humanity’s fascination with them might reveal a deeply masochistic streak in our species. For every gun-wielding T-800, bone-breaking Nexus-6 replicant or creepily murderous HAL-9000, there is a friendly and supportive Artoo unit or Pinocchio-like Commander Data. Great strides have been made in artificial intelligence in recent years, with AI models scaling from millions to billions of parameters tuned via the learning process. We are reaching an inflexion point… our AI models might shortly exceed the parametric capacity of the human brain. In parallel, a wide array of specialised AI hardware has been developed, with different companies taking different approaches.
With current AI techniques based on neural networks relying heavily on massively parallel matrix multiplication, accelerating these “matmul” operations is an area of focus. Companies like Lightmatter are exploring a move from electronics on silicon to photonics, computing with light. This has the potential to allow very high-speed processing without the heat dissipation challenges attendant on silicon chips, and Lightmatter claims its photonic processors achieve 7x the compute density of the ‘benchmark’ DGX-A100 AI accelerators.
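To see why matmul is the target of all this hardware effort, it helps to remember that a dense neural-network layer is, at its core, one matrix product: activations times weights. A minimal pure-Python sketch (the triple-nested multiply-and-accumulate loop that GPUs, IPUs and photonic processors parallelise in hardware):

```python
def matmul(a, b):
    """Naive matrix multiply: the O(n^3) multiply-accumulate loop that
    AI accelerators are built to parallelise.

    a is rows x inner, b is inner x cols; returns rows x cols.
    """
    rows, inner, cols = len(a), len(b), len(b[0])
    return [
        [sum(a[i][k] * b[k][j] for k in range(inner)) for j in range(cols)]
        for i in range(rows)
    ]

# A single dense layer: a batch of activations (1 x 2 here) multiplied
# by a weight matrix (2 x 2) gives the layer's raw outputs.
activations = [[1.0, 2.0]]
weights = [[0.5, -1.0],
           [0.25, 0.0]]
print(matmul(activations, weights))  # [[1.0, -1.0]]
```

Scaled up to the billions of parameters discussed above, virtually all of a model’s training and inference time is spent in exactly this operation, which is why accelerating it dominates AI hardware design.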
Sticking with silicon, various companies are exploring ways of delivering lower-power solutions that will allow AI in a much wider range of devices. Two notable examples are BrainChip’s Akida processors and Mythic’s analog (or ‘analogue’ in British English) AI processors. BrainChip’s Akida takes a ‘neuromorphic’ approach, mimicking the spiking of neurons in the human brain. By contrast, Mythic’s chips exploit the electronically simpler designs for multiplication and addition in the analogue domain, offering a different route to low-power AI inference.
None of these solutions promises anything approaching sentience, however. For that, we must look elsewhere. Having recently announced the ‘Bow’ wafer-on-wafer upgrade to its already astonishing Colossus Mk II intelligence processing unit (or IPU), Graphcore has stated that it is working on a £100m AI supercomputer that it calls the ‘Good Computer’. Named after computer pioneer Jack Good (who, amongst other accolades, was consulted by Stanley Kubrick on the design and representation of HAL-9000), this machine is squarely aimed at running AIs which match and exceed the parametric capacity of the human brain. The Good Computer will take AI models from the roughly 100 billion parameter range of today to 100 trillion parameters or more. If our sentience and consciousness are emergent properties of the sheer neurological complexity of our brains, this machine might be amongst the first to display anything near to recognisable consciousness. When is it expected to be completed and online? You guessed it, 2024…
Outpacing the law…
With sci-fi breakthroughs on the horizon in many fields, law and regulation will have to be revisited in light of these new developments.
Some will be straightforward, but necessary, extensions of pre-existing rules. For power generation, aviation and aerospace, developments in fusion power and commercial space ventures are an extension of existing regulatory regimes. With frameworks that already provide for fission power and space launches, it isn’t too difficult for the law to keep up with technology in these fields.
In the field of AI, however, technology is racing ever further ahead of regulation. Whilst legislators grapple with rules designed to address the potential ethical, trust and oversight challenges of today’s AI, tomorrow’s AI will present a different and much darker prospect. In the EU, the draft AI Regulation is in circulation, looking at the protections required for humans when interacting with AI systems, particularly through the various prohibited or high-risk AI use cases that the draft regulation defines. Other jurisdictions are taking a more piecemeal approach, opting instead to regulate specific use cases. One example of this more domain-centric approach is New York’s set of controls around the use of AI and algorithmic processes in hiring and recruitment.
History tells us that legislation takes time to debate and enact, and often even longer to be enforced. Whilst no fixed timelines have been set for the EU AI Regulation, expectations are that the legislative process itself might take another year, and in general the EU tends to give an additional 18 months to 2 years following the enactment of legislation before it comes into force. On that timeline, it might be that the EU AI Regulation is not itself in force before the end of 2024…
Even if it were though, does it, or any of the currently planned AI laws, go far enough to recognise and protect all relevant parties? Laws under discussion are universally framed around protecting humans as an unspoken axiomatic assumption. That assumption may need to be revisited. If there is a plausible prospect of machine consciousness arising in the medium term, should we not also consider what legal recognition and protection those intelligent machines might require?
Have you tried turning it off and turning it on again…?
One need only look to the treatment of what we currently refer to as “AIs” in a range of contexts to see the abuses that might be wrought on intelligences denied ‘human’ rights. Whether being rude to voice assistants or murdering non-player characters in videogames, we are not accustomed to having sympathy for the machine. This is hardly surprising: in many places a failure of public empathy allows monstrous circumstances to be visited, with little or no protection, upon human beings who we know have intellectual and emotional responses similar to our own. What hope then is there for kindness or concern to be felt for anything so different to us?
For the earliest machine intelligences with even a glimmer of consciousness, hopefully a spirit of parental care from the researchers working with them will protect them. It is nevertheless important, before commercialisation strays too far in the direction of exploitation, that as a society we consider what legal protections ought to be given to the children of our intellect.
After all, at a certain point, resetting a conscious machine might be akin to murder.
The next 24 months…
As sci-fi author William Gibson famously said, “the future is already here, it’s just not evenly distributed”. The businesses that have the best prospect of thriving as sci-fi becomes reality are those that embrace the future early.
If you’re looking to stay ahead and implement AI, contact Gareth Stokes or your usual DLA Piper contact. To assess your organisation’s maturity on its AI journey (and check where you stand against sector peers), you can use DLA Piper’s AI Scorebox tool, available here.