Artificial intelligence has been used to help explore new disease treatments, to help product teams analyze and pull obscure insights out of the data they've compiled, and more. It can be an incredibly useful tool. I've written in the past about how, when I was sick with COVID, I explored creating a game with ChatGPT's help. While it was fun to ask the robot pal, it ultimately didn't sit right with me. As an artist, trained illustrator, and designer, I know how hard it can be to get creative work. I also know how enjoyable and emotional creating that type of work can be, and how important it is that it's informed by the culture around us. These are things AI should stay out of.
I hope that even posing these questions causes your brain to spin like mine. I'll pepper some more mind-spinning questions throughout, below. The way AI companies blatantly infringe on copyrighted work and creators' unique voices to train their models says a lot about how much we understand and value the arts and their contribution to society. Throughout history, entire art styles and techniques have been created and used just to push on the cultural zeitgeist. The loss of human touch and emotion in creative works is a real risk with these types of advancements.
Who should receive credit and compensation for AI-generated creative works? No Spotify-esque model currently exists that compensates original creators the more an AI tool references their work. AI tools may facilitate the creation process, but human input is required to train and guide AI systems. Without proper attribution, compensation, and moral parameters in place, all we've arrived at is exploitation. Exploitation that only further diminishes how creative contributions from artists, designers, illustrators, writers, and others are perceived.
Somewhat ironically, it's not just the creators that AI is being trained on who are being exploited, but in some cases also the very developers of AI technology. Noēma broke a story on this very topic, reporting that "So-called AI systems are fueled by millions of underpaid workers around the world, performing repetitive tasks under precarious labor conditions." And SFU's student-led paper reported pay of "as little as $1.46/hour after tax." It seems like the entire point of AI is profit and exploitation. Sadly, that's not the only type of exploitation being reported by SFU, Noēma, and others. With AI datasets coming from an imperfect public, the AIs using them have been creating sexualized images and perpetuating harmful stereotypes, racism, infantilization, violence, and more. They spread our hate.
We have the devaluation of human creativity and craftsmanship occurring, and the exploitation of both creators and developers. When we start to displace creators from their own fields, what happens? I think there's going to be a tremendous effect on the mental health and well-being of creators, for starters. AI will also take creators' jobs, or perpetuate the existing power-dynamic problems within creative industries. And since we've allowed AI companies to approach things in such a manner, we've also opened the doors for the same thing to happen in other industries.
Displacing creators and causing them further problems isn't all we have to contend with, though. Where does all the money go? The money that would usually go to adequately compensating creators is instead being concentrated in AI corporations, most notably their leadership, of course. Because why pay anyone involved in exploitation fairly?
I encourage folks to check out even just this one article from Earth.org. They outline so many facets of this topic, like the chips AI servers require, the water usage of the servers, the carbon dioxide output, and more.
Many will argue that we need innovation and advancement and that these points and questions are merely roadblocks they have to break down in order to plough forward. We've started to see some creative communities like DeviantArt and Tumblr selling their users' data and creative work, allowing us to opt out if we don't want to take part. Money and poor ethics on their part might be why organizations like theirs can't instead use opt-in and compensate the users who choose to take part. If you want to opt into helping train AI, all the power to you, but where is the compensation, and what about those who can't opt out? Many people have forgotten their accounts or have even passed away. What happens to their work when they don't opt out? These companies and their beneficiaries don't care.
I think we can do better while still building for the future.
A few things AI companies could do better:
Which, of course, makes me wonder things like...
I, personally, pledge that I will not use any exploitive AI tools to create games and assets for Monkey's Lunch. Items mentioned in my previous post will be scrapped entirely and revisited.
I'm no lawyer or anything, but I have been slowly writing a book on the ethical creation of products and services, which has helped me create and pledge the below. I encourage you to take the following pledge with me, in whole or in part:
While I recognize the near impossibility of avoiding AI technology in every setting, as a conscientious individual, I recognize the profound ethical, societal, and environmental implications of artificial intelligence (AI) technologies. In alignment with my values and commitment to ethical conduct, I pledge to refrain from using or supporting exploitive AI, including but not limited to:
By taking this pledge, I affirm my commitment to ethical conduct, social responsibility, and the well-being of present and future generations. I recognize that my individual actions contribute to shaping the ethical trajectory of AI technologies and their impact on society, and I pledge to stand against exploitive and unethical technologies in society at large.
One thing I've found particularly interesting as a means of rebelling against AI companies, or those who create work using them, is to openly disclose when you've made something without AI. Folks like Hinokodo have provided asset packs of emblems, badges, or stamps to emblazon your work with and let people know it's "Made By Human." Of course, I would also prefer if those who use AI disclosed things in a similar manner.
If you're working on something and you want help, please, please, please hire.
I've been trying to hire via UpWork and having trouble finding a good fit, but I've found the best way to find folks to work with is via social media. I recently hired Hodag to help with the art for Bug & Claw and it was so refreshing to see their take on the little post-apocalyptic world I've been creating. I plan on hiring them again soon, once I have more budget allotted! I followed them for a long time on Twitter (I will not call it X, sorry) before reaching out, and it was that easy.
I tried to hire someone via Fiverr after someone recommended it (I honestly thought it was only for things that cost $5, which I thought was super-exploitive, but that's not the case anymore). You'll never guess where I ran into someone using AI. So, fair warning: when trying to hire on sites like that, it seems we now have to disclose that we won't accept work made in whole or in part by AI. Honestly, it kind of blew my mind that was a thing I had to define, but that's where we are.
An argument I keep hearing from people is "but I can't afford it right now." To which I reply: there are other options.
AI is exploitive, currently bad for the planet and society, and devalues creative people and their work, and there are other things we can do instead of using it. We also need to hold AI companies and their patrons to account. Innovation and advancement are inevitable, but that doesn't mean they can't come with proper ethical consideration, and without exploitation.