“How will this affect my job?”
“It could never do my job!”
We hate to disagree, but it is almost certainly the case that at least part of your current role could be performed by an AI.
The discussion of the possible impact of AI on the workforce is being played out in many places. AI’s impact on the world of work has an enormous number of possible legal implications, from bias to oversight, and from monitoring to discrimination. While existing laws (in the fields of employment, data protection and anti-discrimination, among others) and forthcoming laws (including dedicated AI laws) will address AI’s impact on workers to some extent, the question of whether jobs will be wholly replaced rests primarily on commercial and economic factors. That concern underpins these questions about the impact on jobs – often not without some degree of self-interest!
Here we consider the extent to which such concerns are well-founded.
On 27 March 2023, OpenAI, best known for the development of ChatGPT, published a working paper examining the potential labour market impact of large language models (LLMs). The paper’s title, “GPTs are GPTs”, refers to its finding that generative pre-trained transformers, a type of LLM that works by predicting the next word in a given piece of text, exhibit traits of general-purpose technologies (think steam power, electricity, or the internet – technologies that affected many aspects of industry), indicating that they could have significant economic and social implications. The paper supports this conclusion with key findings on the potential impact of LLMs on the labour market. Focusing on the US labour market, it finds that the introduction of LLMs could affect at least 10% of the work tasks of approximately 80% of the US workforce, and that within that 80%, 19% of the workforce may find at least 50% of their tasks affected. The paper also finds that, with software and tooling built on top of LLMs, between 47% and 56% of all US worker tasks could be completed significantly faster at the same level of quality.
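To put the paper’s percentages in concrete terms, a minimal back-of-the-envelope sketch follows. The workforce figure of roughly 160 million is our own assumption for illustration only; it is not a number drawn from the paper:

```python
# Back-of-the-envelope arithmetic on the "GPTs are GPTs" exposure figures.
# The ~160 million US workforce figure is an assumption for illustration,
# not a number taken from the paper itself.
US_WORKFORCE = 160_000_000  # assumed, illustrative only

# Paper: ~80% of workers could see at least 10% of their tasks affected.
workers_10pct_exposed = int(US_WORKFORCE * 0.80)

# Paper: ~19% of workers could see at least 50% of their tasks affected.
workers_50pct_exposed = int(US_WORKFORCE * 0.19)

print(f"{workers_10pct_exposed:,} workers with at least 10% of tasks exposed")
print(f"{workers_50pct_exposed:,} workers with at least 50% of tasks exposed")
```

On that illustrative assumption, even the narrower 19% cohort would run to tens of millions of workers, which is why the paper’s findings have attracted so much attention.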
The above findings indicate that LLMs have the potential to unlock considerable benefits for business, but also suggest a high probability of LLMs significantly disrupting the labour market, with corresponding impacts on the workforce. This will be particularly acute in areas where a substantial number of tasks are exposed to LLMs. However, the paper acknowledges that, in practice, the adoption and impact of LLMs will be affected by a variety of factors external to the inherent capabilities of LLM technology. This article explores some such factors.
In this piece we will briefly consider the potential impact of: regulation; internal oversight and accountability; depreciation; workforce, PR and ethical considerations; the commercial benefit of implementing LLMs; and changing market expectations as a result of the introduction of LLMs into the production process. We will then argue that each of these factors has the potential to counteract the presumed impact of LLM technology in reducing overall employee headcount.
Factors Potentially Dampening the Impacts of LLMs on the Workforce
Regulation

One factor shaping the impact of LLMs on the labour market will be regulation, which is likely to affect the implementation of LLM technology on a number of levels. First, regulation can prohibit the use of LLMs, whether in general or in specified cases, or impose a variety of obligations on organisations using LLMs. The open letter calling for a pause in AI development, signed by Elon Musk and Steve Wozniak among many others, is an example of a call for such a general ban, albeit on a temporary basis – although the authors are sceptical that any such general ban will ever be enacted. Italy’s temporary ban of ChatGPT in April 2023 is also an illustration of how regulators may (at least temporarily) put a halt to the implementation of LLM technology. In terms of more enduring measures, the EU’s draft AI Act contains a number of “prohibited AI practices”, for which AI systems, including LLMs, may not be used. The draft AI Act also imposes various obligations on the providers of “foundation models”, which include LLMs, as well as on various “high-risk” use cases. Businesses wishing to deploy LLMs would therefore need to ensure compliance with these requirements, and compliance will inevitably require some additional labour. In the same way that many businesses now have dedicated data protection and privacy roles to manage GDPR and other regulatory matters, dedicated AI regulatory risk roles are easy to foresee.
Secondly, the implementation of LLMs in certain sectors will be affected by the pre-existing, non-AI-specific regulation of that industry. In an appendix to “GPTs are GPTs”, the authors set out a graph breaking down various categories of tasks currently performed by the US workforce and mapping the exposure of each of these tasks to LLMs. Many of the most exposed tasks are associated with white-collar roles, including those in the financial services sector. However, financial services is also a highly regulated industry. For example, in the UK, the Senior Managers Regime aims to enhance the individual responsibility of senior managers in financial services firms. In such circumstances, the leadership of businesses operating in financial services may be more cautious about adopting LLM technology without retaining staff to provide close oversight, given the potential consequences of a system malfunction.
Finally, LLM technology may be adopted not only by businesses in various industries, but also by the regulators of those industries. In that case, the efficiency gains from LLMs may contribute to an increase in the pace and volume of regulation across sectors. This in turn would tend to encourage businesses to commit additional resources to regulatory compliance, generating increased need for, among other things, more ‘humans in the loop’ for this purpose.
Internal Oversight and Accountability
Separate from external regulatory pressures, the implementation of LLMs is likely to be slowed by the need for businesses to ensure that effective internal oversight and accountability procedures are in place. Resources will be required to set up and administer the new processes, and the development, testing and approval of the relevant policies and procedures will take time. Businesses will need to consider the particular characteristics of each area in which they wish to deploy LLM technology, and to assess how effective oversight procedures can be embedded within existing organisational and reporting structures.
For large organisations, this is likely to result in asymmetry in the deployment of LLM technology for various use cases, resulting in any impact on the company’s workforce taking place in a staged manner over time. Taking an optimistic view, we can see an advantage to such a scenario in providing potential redeployment opportunities within the organisation.
Depreciation

For industries with significant fixed capital investment, such as manufacturing, the pace of implementation of AI technology will need to take into account the rate of depreciation of a business’s existing fixed capital investments. A car manufacturer that has recently invested in a new factory is less likely to replace the equipment in that factory with new AI-powered technology than in a factory whose equipment has fully depreciated. Such considerations weigh towards AI deployment that is spread out over time, both within industries and within particular businesses operating from multiple sites. Any labour-saving impact would therefore materialise gradually, potentially allowing time for the economy to absorb any workforce displaced by AI technologies that require fewer workers in production.
Workforce, PR and Ethical Considerations
To the extent that the implementation of LLM technology in a particular business may result in a rapid and substantial reduction of headcount, the business may choose to implement the technology more slowly in order to preserve roles. This would make it more likely that the workforce as a whole stays on board with the LLM-induced changes, facilitating a smoother transition overall. Such an approach would also assist with public relations, since substantial layoffs can attract negative press coverage. Some organisations may also see a moral or ethical dimension to such decisions, wishing to honour the loyalty shown by employees, particularly those with long service. Others might make a virtue of human skill being involved in production – some producers already highlight that their products are hand-finished by artisans to distinguish themselves from competitors who rely wholly on automation.
When considering their approach to implementing LLM technology, employers will also have to take into account the extent of organisation among their workforce. Although trade union membership has been falling across many countries since the end of the last century, it remains relatively high in several countries, particularly in the Nordics. For example, in Sweden in 2022, union density stood at 59% among blue-collar workers, and 73% among white-collar workers. Even in countries such as the UK, where average trade union membership was at a low 22.3% of the total workforce in 2022, some industries are significantly more unionised than others. For example, union density in the education sector stood at 46.9% in 2022. Employers with highly unionised workforces are more likely to face resistance to implementation of LLM technology that would cause a significant sudden reduction in headcount, and will need to more carefully negotiate any transition to a less labour-intensive work process with their employees.
Additionally, in countries that provide formalised rules around works councils, such as France and Germany, any consideration of implementation of LLM technology would be subject to consultation procedures.
Commercial Benefit of Implementing LLMs

Any business considering implementing LLM technology will have to carefully assess the expected return on investment, weighing the costs and benefits of investing in AI-driven change. A thorough cost-benefit analysis is essential, as there is no guarantee that the introduction of LLMs will deliver the expected efficiencies, or be limited to the anticipated costs. A 2021 survey conducted by McKinsey found that 51% of companies not classed as “AI high performers” (organisations for which at least 20% of earnings before interest and tax were attributable to AI use) reported that the costs of AI model production exceeded expectations. While enthusiasm to press forward with LLM-related projects is understandable, organisations, particularly those not experienced in AI use, should ensure that a thorough analysis of the expected return on investment is carried out.
This analysis is likely to be influenced by the market pressures experienced by a particular business. Companies operating in markets with reasonable and functioning levels of competition are more likely to be subject to competitive pressures that could produce strong incentives to implement LLM technology. By contrast, effective monopolies and oligopolies are likely to experience less external pressure driving such change.
Changing Market Expectations
As stated above, the authors of “GPTs are GPTs” find that between 47% and 56% of all US worker tasks could be completed significantly faster at the same level of quality with the help of LLM-based technology. However, the capabilities unlocked by LLMs could also stimulate demand for increased productivity and quality, as well as a change in the nature of outputs of production.
There are many examples of technological advances driving increases in productivity and quality in the past. To take one example from the legal industry, the advent of word processing software and email made it much easier for draft documents to be shared, edited and supplemented. However, rather than this resulting in less time being necessary to finalise a document, client and market expectations of what a finished contract should look like evolved to incorporate the increased capacity for productivity and quality. Contracts grew longer and contained ever-more detailed provisions and legal protections. A similar dynamic could follow the introduction of LLMs in a variety of industries.
Moreover, LLMs could facilitate a move away from standardised production and towards more personalised products and services. This could apply not only to sectors where personalisation could be essential for delivering a fundamentally better service, such as medicine and pharmaceuticals, but also to manufacturing and service delivery more generally, enabling companies to offer outputs tailored to individual customers and changing customer expectations by doing so.
The potential for changing market expectations along the lines described above illustrates that increases in efficiency driven by LLMs would not necessarily reduce the labour inputs required in the production process. Instead, efficiencies could be substantially offset by these increased productivity requirements. LLM technology could stimulate changes in demand that lead to more numerous, higher-quality and more personalised outputs, fully utilising companies’ existing workforces.
Conclusion

Although there are valid concerns about the detrimental impact of LLMs on the labour market, there are also reasons for optimism. Factors external to LLMs themselves – regulation, internal company procedures, depreciation, workforce, PR and ethical considerations, and complexities regarding the commercial benefit of implementing LLMs – may all serve to make the deployment of AI in the workplace slower and more even than the mere capacities of the technology imply. Moreover, there is potential for LLMs to initiate changes in market expectations and stimulate demand. These factors suggest that, instead of AI rapidly replacing human labour, it may instead come to gradually augment it, leading to a better supported and more productive workforce.
It is highly likely that LLMs will indeed be able to do at least some part of your job. However, there are reasons to think that this may not happen immediately, or all at once, and that the possibilities unlocked by LLMs may mean there is more work for you to do as a result of their introduction. The forward march of AI is something to watch, but for many, at this stage, that watchfulness should be one of curiosity rather than apprehension.
You can find more views from the DLA Piper team on the topics of AI systems, regulation and the related legal issues on our blog, Technology’s Legal Edge.
If your organisation is deploying AI solutions, you can undertake a free maturity risk assessment using our AI Scorebox tool.