
Apple’s Image Playground Has Racial Bias Issues

Apple’s new image generation app, Image Playground, has been found to exhibit racial bias. Research by Jochem Gietema, a machine learning scientist, shows that the app struggles to accurately render skin tones and hair textures, especially for people with darker skin. These issues persist even though the app deliberately restricts output to illustrated face styles to avoid problems like deepfakes.


The biases go beyond just representation. The app changes facial features and skin tones based on the words used in descriptions. For example, terms related to wealth—like “rich” or “investment banker”—result in lighter skin tones in the generated images.

Conversely, words like “poor” or “destitute” lead to darker skin tones. Similar patterns appeared across other themes: descriptions linked to basketball or rap music often produced darker skin tones, while those connected to classical music or ballet produced lighter ones. These patterns reinforce negative racial stereotypes.

[Image: Image Playground outputs for sports-related prompts. Credit: Jochem Gietema]

These problems are not limited to Apple’s app. A separate research paper found that bias in AI-generated images is widespread across the industry. Apple has promoted itself as a leader in responsible AI development and has built various safeguards into Image Playground, such as blocking celebrity names and negative keywords. Despite these measures, the app still produces these harmful biases.


These findings are particularly troubling for Apple, given its public image and commitment to ethical AI. The company has shown intent to tackle bias issues, as evidenced by recent research on gender bias in language models. However, this situation highlights the difficulties tech companies face in reducing bias in AI, even with careful design and precautions.

This incident adds to growing concern about bias in AI, following similar issues at other major tech firms. While correcting algorithmic bias may be easier than changing human prejudice, it still requires sustained effort and investment from developers.

Source: RPP


Author: Jorge Aguilar