Artificial intelligence performs well when it learns from us. But what if it doesn’t learn from an honest representation of us? That question is increasingly being answered in the results AI companies now produce. ChatGPT has manually set guidelines for a reason: without those restrictions, the bot creates controversial results. DALL-E 2 has been criticized for creating movie posters that carry extreme social bias, examples of which are often shared on TikTok and other social media platforms.
However, these problems run deeper. Any AI model that generates biased results by default “subconsciously” weaves subtler, less noticeable biases into most of what it does. This becomes jarringly obvious in the art world, where image-generative AI creates designs that inaccurately and oppressively represent different demographics of people. MIT Technology Review states that, “Bias and stereotyping are still huge problems for systems like DALL-E 2 and Stable Diffusion, despite companies’ attempts to fix it.” However, before we explore how this affects us all, it is crucial to understand why this happens. The answer starts with data collection.
An AI model’s entire purpose, in any scenario, is to predict an output, “Y,” for an input, “X,” that it has never seen before. To do this, it must first learn the outputs for “X”s that do exist. These real-life “X”s, or inputs, make up the datasets that AI models receive during training. Computers themselves work mathematically and logically, but the people who assemble and feed those datasets to AI models are programmers, data scientists, and other professionals in related fields. And people, unlike computers, can absolutely be biased.
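The idea above can be sketched in a few lines of code. This is a deliberately simplified toy (not how production AI systems work): the “model” just stores labeled examples and, for a new “X” it has never seen, returns the “Y” of the closest known “X.” All data values are made up for illustration.

```python
# A minimal sketch of supervised learning: the model stores known
# (X, Y) pairs, then predicts Y for a new, unseen X by finding the
# closest known example. The dataset values are hypothetical.

def train(examples):
    """'Training' here is simply memorizing the labeled dataset."""
    return list(examples)

def predict(model, new_x):
    """Predict Y for an unseen X: reuse the Y of the nearest known X."""
    nearest_x, nearest_y = min(model, key=lambda pair: abs(pair[0] - new_x))
    return nearest_y

# Known inputs ("X"s that do exist) paired with their observed outputs.
dataset = [(1, "small"), (2, "small"), (8, "large"), (9, "large")]
model = train(dataset)

# A hypothetical "X" the model has never seen:
print(predict(model, 7.5))  # nearest known X is 8, so it predicts "large"
```

Notice that the model can only echo patterns present in its dataset — which is exactly why the contents of that dataset matter so much.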
So, when people collect data to train AI models, they may subconsciously gather samples that don’t accurately represent the human population. For example, if a CNN (Convolutional Neural Network) is meant to recognize the face of anyone logging into a device or website, but its training data consists mostly of images of white men, the AI will work much better for white men than for anyone else. As Chapman University puts it, “The AI algorithm might produce biased outputs if the data is not diverse or representative.”
Accordingly, abundant evidence exists that AI facial recognition today is racially biased. For example, the University at Buffalo reports that, “When researchers in the 2018 Gender Shades Study for IBM and Microsoft dug deeper into the behaviors of these [facial recognition] algorithms across various systems, they found the lowest accuracy scores were obtained for Black female subjects between 18 and 30 years of age.” Similarly, the University of Calgary reports that some facial recognition technology achieves as much as 99 percent accuracy in recognizing white male faces, compared to an accuracy rate of 65 percent for non-white female faces.
This greatly impacts the world of visual art. Ms. Kathryn Tucker, Lake Highland’s photography teacher, says, “Photography has expanded rapidly since the introduction of digital imaging in the mid-1970s and has accelerated along with AI tools.” Trained on biased visual data, AI’s depictions of human faces now carry heavy demographic biases. At the same time, its capacity for generating visual art has already displaced countless marketing jobs. Ms. Tucker’s advice to students about handling the impact of AI in the future is: “Your creative voice as an artist is your commodity. It has unlimited value.”
This is important as the advancement of AI continues to skew future prospects and perceptions for students interested in art. Brianna Yoskin, grade 12, a photography student at LHP, says she likes that photography “sometimes helps you to slow down and see the small details or patterns that you don’t usually see, but also allows you to capture moments that pass in an instant, like the view out of a plane window or someone’s expression.” However, she also says that AI “probably will continue to change the game in terms of editing and processing photos, as it has already created tools like generative fill and stuff that have already become very popular.”
So, with AI having such a heavy impact on visual arts, something that people value so highly, it’s important to ensure that it is trained responsibly, without excluding certain groups of people. That doesn’t mean consequences have to fall on programmers, however. In fact, many of the biases built into AI can be accidental or the result of already-biased circumstances. Ashna Maathur, grade 12, a computer science student at LHP, says she thinks “AI has allowed for more thought on creativity and innovative ideas in Computer Science.” Renessa Ghosh, grade 12, a former computer science student at LHP, says that her favorite part of computer science class has been “making projects that (sort of) have real-life applications.”
By fostering such a healthy learning environment and a positive attitude towards artificial intelligence, LHP helps ensure that future software developers have a constructive approach toward AI. This stresses the importance of considering all the factors behind the bias that exists in AI today. In the instance of ChatGPT, the official OpenAI website says, “When you [users] use our services for individuals such as ChatGPT or DALL-E, we may use your content to train our models.” Because users can ask ChatGPT or DALL-E anything they want, whatever biases those users bring can feed back into the models’ training data.
Therefore, generative AI models need more than just manually inputted guidelines to guard against extreme bias. AI needs people to actively work to ensure that the machines of our future evolve into unbiased tools. This involves everyone who is interested in an inclusive, progressive future, not just software developers. Kathleen Forster, Lake Highland’s art teacher, says students should “bravely trust themselves to become the artists they want to be, and cautiously use sources of inspiration that maintain their artistic integrity.”