AI: Ignorance of the law will be no defence for your business and no excuse for your investors


I am a lawyer. Give me a set of rules. Give me a problem. Understand the problem. Apply the former to the latter. Understand the client, the business, the industry and the technology. Horizon scan, be commercial, flex communication style. Deliver. I am good at it, and I enjoy it.

But AI? I find advising clients on AI unusually perplexing.

It isn’t the technical complexity. I have a technical background. An engineer turned lawyer. A history of patent litigation, but also experience in software, copyright, open source and cybersecurity. And I need to understand something before I feel qualified to advise on it. Clients pay for expertise. Call me old-fashioned.

But how, technically, to understand AI? Or generative AI (which is what many mean these days)? Well, I thank Stephen Wolfram for his exposition of large language models and the relevance of, for example, VAEs (variational autoencoders). For me, the detail is necessary. A fundamental question, for instance, is whether a trained model is, in and of itself, a copy of the works on which it was trained. To unpick this question legally requires an understanding of what is actually happening. So, technology… check? Yes, but only kind of, because it moves quickly. Six months later? Inference. Adapters. Grounding. Multi-hopping. Advisors need to do more than simply post LinkedIn soundbites – these issues require proper research, technical understanding and then thoughtful analysis.

The law? In the UK, is the scope of what is and is not copyright infringement clear for the purposes of AI systems? There is a text and data mining exception to copyright infringement which might, in theory, have covered how AI systems are trained, but it is narrow: it does not apply to commercial activities. There are other exceptions to copyright infringement which could apply, but they too are narrow, and their application would mostly depend on the use in question (e.g. in education).

In the EU, change is coming through the EU AI Act. There is also the Copyright in the Digital Single Market Directive, which introduced text and data mining exceptions under which copyright holders must proactively ‘opt out’ so as not to lose the value of their rights, but where the technical implementation of such a mechanism is far from clear. And there are transparency requirements in the EU, the scope of which is also unclear. The UK government has considered a similar commercial exception, backed away from it, and is now considering it again (along with an EU-style opt-out) in its consultation, which closed in February; many of the concerns that are live in the EU would apply equally in the UK. Separately, there is the Data (Use and Access) Bill, which is still being debated and which has proposed rather more stringent copyright protections from an extraterritorial and a transparency perspective. Rather than an EU-style exception, some believe collective licensing models will prevail, but those have their own technical hurdles.

There is a thick political overlay too – the UK government presently believes it needs to attract AI developers to the UK, but what of its own creative industries? On the same day its consultation closed, the UK government said it would delay plans to regulate AI in order to align with the Trump administration, while the creative community published its objections on the front pages of the UK broadsheet newspapers. Meanwhile, tech companies have their own ideas.

The US has been the origin of most of the world’s LLMs and foundation models. It is also, therefore, home to most of the litigation, with Meta, Microsoft, OpenAI, Google and Nvidia all facing claims of copyright infringement from rights holders – authors, media companies and others – across creative industries from music and publishing to software. There is a school of thought that AI developers should be able to rely on the US defence to copyright infringement known as fair use. But whether that is correct, and how far such a defence, if applicable, would extend, is far from settled.

Like the US, the UK has a globally respected legal system. Here, Getty is suing Stability AI for infringing its copyright in the US and the UK because of how the Stable Diffusion product has used its images. It is an important case: it addresses whether specific acts are problematic, such as training the AI, generating the outputs, and the status of the AI system itself. It raises commercialisation issues too, such as who has downloaded what, where, and what has technically happened and where. Getty feels aggrieved, and Stability AI justifies its actions with various arguments, including in respect of jurisdiction and some novel points involving exceptions such as “pastiche”. Whatever the outcome of the trial in June, this case is likely to be appealed. Perhaps twice. Over many years. Ultimately, though, it could decide which aspects of AI systems are legal in the UK.

Conclusion? There is significant global uncertainty surrounding the legality of AI systems, at least from a copyright perspective, and the implications touch politics, the law and technology. And that is before one addresses issues relating to AI and data privacy compliance – itself a complex topic to navigate, and one firmly in the crosshairs of regulators. All of this uncertainty is likely to continue for some time.

At the various tech conferences I have attended, it has not been uncommon to see start-ups and entrepreneurs striding ahead without an appreciation of what could be existential risks to their businesses. Canny investors will be cognisant of these risks and wary of target business models that do not account for the uncertainties or try to mitigate the risks. By contrast, canny businesses seeking investment would do well to be informed, address these risks head on, and seize the opportunity to distinguish themselves from competitors that are unaware of and unprepared for what lies ahead.
