When an artist – whether they are a painter, writer, photographer, poet, etc. – creates a piece of work, they automatically own the copyright to it. This means they get to choose how that work of art can be used and, of course, get paid for it. But what happens when a piece of art is created by a computer?
This is a problem that we’ve only had to deal with in the last year – since Generative AI took the world by storm. Tools like ChatGPT can write stories, songs or plays, while Stable Diffusion or DALL-E 2 can produce images of anything we can describe to them.
But should the credit (and royalties) go to the person who used the tool to create the art or to the company that built the AI tool? Or to the humans whose original works were used to train the AI in the first place?
Because generative AI is so new, we simply don’t have an answer to this question yet. But a growing number of artists whose work has been "scraped" from the internet to train AI engines are launching legal action against AI companies, which they claim are profiting from their work without giving them credit or payment.
At stake are both compensation and control – with artists understandably believing they should have the right to decide how their work is used and to be financially compensated for its use. So let's take a look at some of the issues involved and some of the arguments that could have an impact on the future roles of both AI and art in society.
The question of who owns the rights to works of art created by AI will go before the courts, as individual artists, as well as corporations, challenge the right of AI service providers to use their works without permission.
Artist Eva Toorent is one of the founder members of the European Guild for Artificial Intelligence Regulation (EGAIR). She believes that AI companies should have to obtain artists' opt-in consent before using their work to train algorithms that can create other works.
She recently told the BBC, “If I’m the owner, I should decide what happens to my art.”
Accusations of breach of copyright have also been made by Getty Images, which alleges that 12 million of its proprietary photographs were used by Stability AI, without its permission, to train its Stable Diffusion image generation tool.
It’s hard to deny the crux of these arguments: the current IP and creative-rights legal frameworks, designed to protect artists from having their work used without permission or compensation, are simply not adequate for the AI era.
As things stand, an artist’s work can simply be used again and again as a generative AI tool pumps out any number of cloned or derivative artworks. ChatGPT, for example, is capable of writing “in the style of” many human authors if instructed to do so – including myself.
Toorent, for example, says she realized the scale of the problem when she visited a gallery displaying AI artwork and was able to identify works that were based on her own style.
It seems clear, from an ethical perspective, that when AI companies stand to make enormous profits from this technology, the artists whose talent contributed to that success deserve their cut. OpenAI, for example, attracted investment of $10 billion from Microsoft based on its potential to generate revenue in the future.
Inspiration vs. Plagiarism
There are a number of arguments that AI companies could put forward in order to get around their responsibility to artists.
One might be that training an AI system on human-created art is no different to a human artist taking inspiration from other human artists and going on to make their own art that reflects that influence.
But I would say that this doesn't quite hold up – a human artist taking inspiration from another will – unless they are an outright plagiarist – add their own human qualities to come up with something that exhibits their own human creativity. A computer algorithm can't do that – it might mash one artist's style together with influences from other human artists, but it will never create anything that is truly new or original.
Or it could be argued that generative AI – and the huge potential for driving efficiency and growth that it promises – simply would not be possible without a huge amount of training data, and that obtaining permission for everything AI companies use simply isn’t practical. There are some examples, however, of AI companies attempting to do just that. Adobe has exclusively trained its Firefly generative AI on images that it holds the rights to. It even offers to indemnify customers who use its tools against future claims against them.
Another argument might be that using an artist’s work to train an AI algorithm doesn’t necessarily cause any harm or loss to that artist or devalue the original work. This argument will become harder to stand up if it transpires that artists are indeed losing out on work to AIs, though!
AI companies might try to go down the road of claiming that compensating everyone whose work is used might be too difficult due to the number of artists involved and the scale of the training datasets. It's possible that technology could provide a solution here, though, with AI and distributed ledger systems like blockchain potentially being used to create automated payment systems in the future.
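The distribution mechanics themselves are not the hard part. As a toy illustration only – all the artist names, usage counts, and figures below are invented assumptions, and a real system would need verified usage tracking of the kind a distributed ledger might provide – a pro-rata royalty split could be sketched like this:

```python
def split_royalties(usage_counts, royalty_pool_cents):
    """Divide a royalty pool proportionally to usage counts.

    usage_counts: dict mapping artist -> number of their works used in training.
    royalty_pool_cents: total amount to distribute, in cents.
    Returns a dict of artist -> payout in cents. Integer division can leave
    a few cents over, so any remainder goes to the largest contributor to
    keep the distributed total exact.
    """
    total = sum(usage_counts.values())
    payouts = {artist: royalty_pool_cents * n // total
               for artist, n in usage_counts.items()}
    remainder = royalty_pool_cents - sum(payouts.values())
    if remainder:
        top = max(usage_counts, key=usage_counts.get)
        payouts[top] += remainder
    return payouts

# Invented example: a $1,000.00 pool split across three artists.
counts = {"artist_a": 600, "artist_b": 300, "artist_c": 100}
print(split_royalties(counts, 100_000))
# → {'artist_a': 60000, 'artist_b': 30000, 'artist_c': 10000}
```

The arithmetic is trivial; the genuinely difficult parts are the ones the sketch assumes away – reliably attributing which works influenced which outputs, and agreeing on what the pool should be in the first place.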
The Future of Art and AI
I believe that generative AI has the potential to be truly transformative in many positive ways. It can make our day-to-day lives simpler, it can reduce the amount of time we spend on soul-destroying and repetitive tasks, and it can enable us to express creativity without being held back by technological barriers.
The positive impact this could have on society, however, would be offset if it also means creative people have to become accustomed to others taking their work and profiting from it - particularly if it can be used in ways that they may have moral or ethical objections to. Musicians, for example, frequently deny permission for their music to be used at political rallies. As things stand, there would be nothing to stop a politician from using AI to create music "in the style of" any particular band and using it as they see fit.
Upcoming legislation and court battles may go some way toward establishing a framework that will allow us to govern and regulate these matters in a more satisfactory way. Certainly, any legislated change is more likely to favor the artists, who are inherently disadvantaged by the status quo.
But of course, money talks – and we can expect the AI providers – such as global tech giants Google, Microsoft and Meta – to strongly oppose any changes that will inhibit their ability to innovate (and make money).
All that’s certain right now is that as we explore this new intersection of art and technology, we are in uncharted territory. Finding a solution that keeps everyone happy without compromising artistic creativity or technological innovation will involve finding answers to some of the core ethical questions around AI and its place in society. In the long term, this is likely to be a good thing for all of us.