Lately, there’s been a lot of discourse about the impact of AI art and writing, and I’ll admit that I feel a bit of trepidation about the abilities of recently-unveiled artificial intelligence projects.
There are so many possibilities to fear—its ability to replace creative workers, the hollowness of its art, the rampant intellectual property theft. I’m not that interested in wading into the debate about how human-like it is (this doesn’t ultimately matter to me, because even when it isn’t passable as human-created today, it will only get better with time—and probably not much time). I’m not interested in arguing about whether or not AI-generated art is art (we will not reach a society-level consensus on that question). I’m not interested in lambasting it as dangerous or freaking out about how it will destroy society (most people who make arguments like that, in time, look foolish, and I will not become one of them; the future is always weirder than we think it will be).
What seems clear to me: AI art will change the landscape of creativity; there are legal questions about it that should be discussed; it will only continue to become more influential with time.
But this conversation is missing the fact that AI has already deeply infiltrated one kind of art, and I think it is an illustrative example. Notably, neural networks have dominated translation tasks since 2016, when Google Translate switched to a neural model. This has been part of our lives for years now, and largely a useful part. Who doesn’t turn to Google Translate sometimes? It’s very helpful! But translators do not just regurgitate words in one language based on another, and even with the remarkable quality of machine translations, human translators have not all lost their jobs.
I think this is a useful parallel to contemplate as we look at machine-generated art and writing.
(Note that as a machine learning researcher, I’m pretty touchy about the words “artificial intelligence”. I know we call GPT-3 “an AI” and I know that Stable Diffusion images are called “AI art”, but I just can’t do it. I’m going to say “machine-generated” or “computer-generated” most of the time, because “AI” just feels weird to me. I’m not saying AI is the wrong word, just that it feels weird!)
For machine translation, there were (human!) decisions that went into compiling the training dataset, and tuning the model, and figuring out how to penalize mistakes. The “AI” itself is a human product, designed by human actors.
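The “figuring out how to penalize mistakes” part can be made concrete with a toy sketch. Real systems use a cross-entropy loss over enormous vocabularies inside frameworks like PyTorch; the tiny function and example words below are illustrative assumptions of mine, not any real system’s internals, but they show how a human-chosen penalty shapes what the model learns:

```python
import math

def cross_entropy(predicted_probs, reference_word):
    """Penalty for one predicted word: small when the model put high
    probability on the word a human translator actually chose, large
    when it did not."""
    return -math.log(predicted_probs[reference_word])

# Hypothetical model output for the next word of a translation,
# compared against a human-made reference translation.
probs = {"perro": 0.7, "gato": 0.2, "casa": 0.1}

print(round(cross_entropy(probs, "perro"), 3))  # 0.357 — confident and right: small penalty
print(round(cross_entropy(probs, "gato"), 3))   # 1.609 — probability misplaced: larger penalty
```

The choice of this penalty, like the choice of which human reference translations it compares against, is itself one of those human decisions.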
And all of those choices were based on many, many millions of tiny decisions that human translators made when translating between the target languages, that human authors made when writing in each of the languages, and that human engineers made when writing the code that the machine learning researchers used. Even machine-based translation is a creative act by a lot of humans. However, because it’s many humans instead of one (and because they are all disconnected from each other in space and time), a lot of the beauty and nuance gets washed away. The end product is serviceable largely because it aggregates so many people’s work over so much time, not in spite of human influence.
But it cannot make the human judgements that speak to a specific place and time, a specific perspective. Good translation involves trying to recapture opinions and connotations and even rhyme schemes and rhythm. Machine translation doesn’t do any of that. And it’s the same with machine writing and machine art—it can be useful, it can fill a role, it can even be moving and beautiful. But unless we are training specific machine learning models (“making specific AIs”) that are not aiming to be broad but rather particular, to have a certain viewpoint and opinion and intention, these models will not replace people (though, see the link above—sometimes models are trained to pretend to be a specific person). I’ve seen people refer to this as art having a soul. I don’t know that I’d use that language, but I think what they mean is that art has an artist. Machine translation or writing or art does not have an artist—it has thousands. And it’s thus lacking the viewpoint or vision or intention of any of them.
What makes people interesting—and what makes our art interesting—is not where it is general, where it has digested the culture that raised its creator and merged together a thousand different ideas, but where it has diverged.
A machine could do that. Maybe someday it will. But these ones don’t.
There is so much conversation about the dangers of the AI-ification of art, including visual art and written language, but I don’t see as much of that for translation. This is sad, but also illuminating. We don’t think of translation as art, even though it is: translation is artistic and creative and opinionated just like other writing—and, by extension, other art—and it’s that opinion that AI doesn’t even attempt to copy. It’s that opinion that makes it human, and that opinion that makes it unique.
So AI could be a threat to all kinds of creative endeavors, but only if we let it be. Machine translation fills a valuable role, like when I need to quickly look up a word while listening to a Spanish-language audiobook, but I always end up with a much better understanding when I bring that word up to my Spanish teacher. I think the same is true for writing and visual art. AI can be quick or simple or even beautiful, but it misses the innate humanity at the center of so many things we do. Machine text can summarize or write generic descriptions; it can fill out a blog post that regurgitates ideas. This is powerful. This is probably a threat to many jobs. It raises issues of plagiarism and copyright infringement and opens all kinds of questions that are worth discussing.
But it isn’t human, even when it pretends to be. It doesn’t try to speak from a specific perspective. And it shouldn’t, because it can’t.