AI is no longer the province of academics, novelists and filmmakers. It has gone mainstream, and the confluence of computing power, improved algorithms and mass data is rapidly advancing AI applications in industry.
Robotic process automation tools can now effectively handle finance and administration processes. Telecommunications organizations, for example, use AI and machine learning to improve customer service, deploying virtual-assistant chatbots to automate and scale support requests (such as set-up, installation, troubleshooting and maintenance). In short, AI enterprise applications are enhancing business processes, improving client and customer interactions, and removing human error.
AI in industry
In 2018, over 160,000 people were injured or died on the roads of Great Britain. The UK is leading self-driving car developments, and passed one of the first pieces of specific legislation to allow for autonomous vehicles – covering a greater proportion of its national roads than anywhere else. The Department for Transport aims to have fully autonomous cars on UK roads by 2021. This will potentially save lives, reduce traffic accidents, and offer economic benefits, such as the creation of new jobs.
The first industrial revolution began in the UK, and so it is fitting that advanced AI processes are now providing British manufacturers with a leading edge in industry 4.0. AI, robotics and automation, and the industrial internet of things are allowing manufacturers to fully integrate ecosystems, use data to make better decisions, and automate production lines. The UK’s government-supported Made Smarter industry 4.0 initiative is helping manufacturers deploy advanced AI and similar enterprise technologies across the country. The GBP20 million scheme aims to boost revenue and productivity and strengthen the UK export market. AI industry applications like these have the potential to better integrate supply chains, reduce environmental impact, and increase job opportunities.
Advances in pharmacological and medical AI could lead to accurate diagnoses of diseases including cancer, heart disease and dementia. When established, the UK’s GBP250 million National Artificial Intelligence Lab will tackle National Health Service challenges and provide cutting-edge healthcare. And much like other enterprise solutions, AI is also automating administrative tasks to free up medical professionals for patient care.
Ethical responsibility for computational bias
Humans are prone to bias. Can and should machines be held to different standards? Is bias innate, or can it be designed out of a system? It is possible to design AI systems that accommodate bias within predetermined tolerance ranges in order to compensate for innate bias. However, this poses questions of moderation: the ethical responsibility shifts from the AI system to the individual or entity that designs it and determines the tolerance levels. Much current AI ethics research centers on the black-box problem: data goes in and decisions come out, but the intermediate steps are opaque. To account for bias and assign responsibility for computations, the steps, parameters, and rationale need to be understood and – if necessary – clearly explained to the end-user. This is especially important with if-this-then-that programming, which arguably presents a greater risk of inadvertent bias than other forms. For example, UK car insurer Admiral fed social media data into its firstcarquote engine; the product was, however, withdrawn before launch due to inadvertent racial biases uncovered in its computations.
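To make the point concrete, here is a minimal, hypothetical sketch of how an if-this-then-that pricing rule can encode inadvertent bias through a proxy variable, and how a simple check against a predetermined tolerance might surface it. The feature names, thresholds and 10% tolerance are all invented for illustration; they are not drawn from any real quoting engine.

```python
# Hypothetical sketch: an if-this-then-that pricing rule with a proxy bias,
# plus a simple disparate-impact check against a predetermined tolerance.
from statistics import mean

def quote(applicant: dict) -> float:
    """Toy rule-based quote. 'postcode_risk' can act as a proxy for
    protected characteristics, encoding bias the designer never intended."""
    premium = 500.0
    if applicant["age"] < 25:
        premium += 300.0
    if applicant["postcode_risk"] == "high":   # proxy variable
        premium += 250.0
    return premium

def within_tolerance(applicants: list, group_key: str, tolerance: float = 0.1) -> bool:
    """Flag the rule set if average premiums across groups diverge by more
    than the designer's chosen tolerance (here, 10%)."""
    groups = {}
    for a in applicants:
        groups.setdefault(a[group_key], []).append(quote(a))
    averages = [mean(quotes) for quotes in groups.values()]
    spread = (max(averages) - min(averages)) / min(averages)
    return spread <= tolerance

applicants = [
    {"age": 30, "postcode_risk": "high", "group": "A"},
    {"age": 30, "postcode_risk": "low",  "group": "B"},
]
print(within_tolerance(applicants, "group"))  # False: the 10% tolerance is breached
```

Note that the check itself illustrates the moderation question raised above: whoever chooses the tolerance value and the groups to compare is the one bearing the ethical responsibility.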
Where AI algorithms directly interact with end-users, or produce outputs with direct consequences for individuals (such as insurance quotes, credit scores, or mortgage decisions), there is a clear need for accountability. This places a duty of care on organizations toward the clients engaging their services and the public consuming their content. Arguably, handing over processes and decisions to AI demands high standards and clear accountability policies because – once a system is programmed – human oversight and intervention are, for the most part, removed from the process.
Concerns over the AI accountability gap resurfaced in the news in April 2019, when the UK government announced its intention to introduce a duty of care amid criticism of, and growing pressure on, social media companies following cases like that of teenager Molly Russell. Such cases highlighted the risk of harmful content and led to calls for effective regulation to protect against it. UK government ministers aired a proposal for legislation to hold social media bosses personally accountable for content published on their platforms. In August 2019, the Financial Conduct Authority published “Artificial intelligence in the boardroom,” which highlighted the need for board members and senior managers to take responsibility for the major challenges and issues raised by AI.
Balancing innovation with regulation
Looking back to the 1990s, internet service providers eventually convinced authorities that they would self-regulate their online space, thus avoiding the creation of an online-offline dual legal system that could have stifled innovation. In the global economy, organizations typically migrate from heavily regulated nations to those where legislation and taxes are tipped in their favor. This may produce a situation where certain countries become favorable venues for – and later, world leaders in – particular branches of AI, while proscribing other AI applications and uses. This is already the case for blockchain technology and cryptoassets – part of another emergent and disruptive technology class – which now face a full spectrum of regulations across the globe. Some countries – including China and Russia – ban cryptoasset trading outright, and India may follow. Other nations – such as Canada, Sweden and Malta – welcome the innovation these technologies represent and have designed legislation to attract enterprise. By 2022, we may see country-by-country AI regulations and national AI strategies that incentivize certain AI applications while banning others.
In the EU, the General Data Protection Regulation firmly places control of personal data in the hands of the individual. Article 22 of the GDPR gives individuals the right not to be subject to decisions based solely on automated processing, and related transparency provisions require that they be given meaningful information about the logic involved in such decisions. In order to meet data protection requirements, organizations need to carefully plan the implementation of new AI applications. This is especially important for advanced algorithms used for decision-making, such as those used by insurance underwriters and for credit scoring.
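As a minimal sketch of what that planning might involve – not a compliance template, and with entirely hypothetical field names and values – an organization could record, alongside each automated decision, the inputs, logic version and reasons that produced it, so that meaningful information about the logic can be reproduced if the individual asks for it:

```python
# Hypothetical sketch: recording the inputs, logic version, and outcome of an
# automated decision so the organization can later explain it to the data subject.
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    subject_id: str      # pseudonymized identifier, not raw personal data
    logic_version: str   # which rule set or model produced the decision
    inputs: dict         # the features actually used
    outcome: str         # e.g. "declined", "approved"
    reasons: list        # human-readable factors behind the outcome
    timestamp: str

def record_decision(subject_id, logic_version, inputs, outcome, reasons):
    rec = DecisionRecord(subject_id, logic_version, inputs, outcome, reasons,
                         datetime.now(timezone.utc).isoformat())
    # In practice this would be written to an audit store; printing stands in for that.
    print(json.dumps(asdict(rec), indent=2))
    return rec

record_decision(
    subject_id="subj-4821",
    logic_version="underwriting-rules-v12",
    inputs={"income": 42000, "loan_amount": 150000},
    outcome="declined",
    reasons=["loan-to-income ratio above threshold"],
)
```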
Shifting attitudes and changing values
AI’s relationship to society is researched by stakeholders such as the Turing Institute and the Centre for Data Ethics and Innovation. This research helps distinguish present-day AI – including advanced search algorithms, smart robotics, and chatbots – from the AI of popular culture, easing public concerns. AI is rapidly developing: businesses are finding new ways to profit from its use while regulators, the media, and research groups explore its impact on privacy, individual rights, and society as a whole. During this period of rapid growth, businesses must be able to prove to regulators that they comply with laws and regulations, including GDPR. Further, companies need to be held accountable for AI computations, system bias, and the consequences stemming from direct end-user AI interaction. The 2018 House of Lords report AI in the UK: Ready, Willing and Able? says that the UK has a “unique opportunity to forge a distinctive role for itself as a pioneer in ethical AI.” Regulators must appreciate that adherence to ethical rules and principles surrounding AI cannot be left to the discretion of organizations. Numerous media stories highlight this need, including recently announced plans for the UK communications regulator Ofcom to regulate harmful social media content and fine social media companies.
DLA Piper consultant Lord Tim Clement-Jones is the former Chair of the House of Lords Select Committee on AI and current Co-Chairman of the All-Party Parliamentary Group on Artificial Intelligence, and will be speaking at our European Technology Summit 2019 on October 15.
Explore the frontier of technology and its place in the future of enterprise at the fourth DLA Piper European Technology Summit on October 15, 2019, which includes a panel discussion on the regulatory framework likely to govern future innovation in AI-related technologies, robotics, automation and machine learning. In this session, eminent AI industry thought leaders from Accenture Applied Intelligence, Oracle, GlobalData, iGenius and the University of Turin will consider how legislation may influence the ethical basis on which people, organizations, investors and governments design and deploy AI-driven machines at scale.
Visit the European Technology Summit 2019 website for more information and to register your interest.
More on AI from DLA Piper:
TechLaw Podcast: Future regulation for artificial intelligence
Artificial Intelligence: What we have seen so far in 2019 and what is to come…
Three predictions for the future of AI: it’s not evil, it’s the new electricity