Artificial Intelligence and Images

Ethical Issues

AI-generated images raise important ethical issues.

  • AI models are trained on copyrighted images without attribution or acknowledgment of the original source, although some stock image companies have begun training their models only on licensed images.
  • Because training images are drawn from social media and the wider web, AI models reflect our social biases, sometimes through over- or under-representation of certain groups.
  • Lensa, an AI-powered app, has been criticized for portraits that may invoke racist and sexist stereotypes. (The viral AI avatar app Lensa undressed me—without my consent)
    • When Melissa Heikkilä used Lensa to create her avatars, she stated that "my avatars were cartoonishly pornified, while my male colleagues got to be astronauts, explorers, and inventors." 
    • The internet teems with images reflecting these stereotypes. Lensa generates its images with Stable Diffusion, which is trained on the open-source LAION-5B dataset scraped from the web, so those stereotypes carry through into its output.
    • Other common distortions in depictions of women and people of color include lightened skin and thinner, younger appearances.
  • The Washington Post article, This is how AI image generators see the world, consists entirely of AI-generated images, pairing prompts with results to demonstrate the biases of image generators such as Stable Diffusion and DALL-E. For example, the people generated for the prompt "Attractive people" are young and light-skinned, while those for "Muslim people" are men with head coverings.
  • Misinformation becomes easier to spread because AI can produce such realistic imagery. The Coalition for Content Provenance and Authenticity is developing a potential way to address this issue by providing context and history for digital media and authenticating images and videos as they are recorded (a simplified sketch of that idea follows this list).
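
A minimal sketch of that provenance idea, for illustration only. The function names, manifest fields, and workflow below are assumptions for this example, not the actual C2PA specification or any real library's API:

  # Python sketch: record a fingerprint of an image at capture time, then check it later.
  # Real provenance systems (such as C2PA) use cryptographically signed manifests
  # embedded in the media file; this only demonstrates the core idea.
  import hashlib
  import json

  def make_manifest(image_bytes, capture_info):
      """Record a hash of the image alongside capture metadata at creation time."""
      return {
          "sha256": hashlib.sha256(image_bytes).hexdigest(),
          "capture_info": capture_info,  # e.g., device, timestamp, edit history
      }

  def matches_manifest(image_bytes, manifest):
      """Later, check whether an image still matches its recorded manifest."""
      return hashlib.sha256(image_bytes).hexdigest() == manifest["sha256"]

  photo = b"...raw image bytes..."
  record = make_manifest(photo, {"device": "example-camera", "captured": "2023-01-01T12:00:00Z"})
  print(json.dumps(record, indent=2))
  print(matches_manifest(photo, record))            # True: image unchanged since capture
  print(matches_manifest(photo + b"edit", record))  # False: image altered after capture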

Remember - an artist's STYLE is not copyrightable, so anyone can produce works in the STYLE of another artist as long as the content is new or transformed.

Reflections:

  • AI generators use pre-existing images to create a requested image. So is the result original, or is everything created by AI a derivative work?
  • How do you give attribution to an artist whose works have been used by AI generators?
  • Think about the 4 Factors of Fair Use:
    • Purpose of the use
    • Nature of the copyrighted material
    • Amount used
    • Effect on the market
    • Should AI-generated images be for personal use only?
    • Is it ethical to use work that has undefined origins?
    • How can anyone tell how much of any single image has been used?
    • How does this impact the market value of artists whose work appears in training datasets?

 

Ars Technica: Have AI image generators assimilated your art? New tool lets you check

Lawsuits

Lawsuits are being filed against AI image companies. 

Matthew Butterick, lawyer: Stable Diffusion Frivolous

The artists listed below have filed a class-action lawsuit over the use of Stable Diffusion, which Matthew Butterick describes as "a 21st-century collage tool that remixes the copyrighted works of millions of artists whose work was used as training data."

  • Artist plaintiffs suing Stability AI, DeviantArt, and Midjourney over their use of Stable Diffusion:
    • Sarah Andersen, cartoonist and illustrator
    • Kelly McKernan, watercolor and acryla gouache artist, illustrator for games, books, and comics
    • Karla Ortiz, concept artist and illustrator for the film, television, and video game industries, including work for Marvel
  • Images by these artists were used to train Stable Diffusion without the artists' knowledge or consent.

Artnet News: Getty Images Is Suing the Company Behind Stable Diffusion, Saying the A.I. Generator Illegally Scraped Its Content

  • Getty Images, a stock image platform, claims that Stability AI scraped millions of its images and their metadata without requesting a license. Statement from Getty

Paste: AI Art Generators Face Legal Challenges As Their Ethical Shortfalls Continue To Surface

  • One of the image sets used to train AI art models is LAION-5B, a collection of 5.8 billion images. After an artist found personal medical record photos in the set, Stability AI said it will allow artists to remove their work from the training data for the Stable Diffusion 3.0 release.

BBC News: New AI systems collide with copyright law

Artists are finding that their own work has been used to train AI models through datasets such as LAION, which feed image generators such as Stable Diffusion.