
Study: ChatGPT suggests women request lower salaries

Summary

– A study found that large language models (LLMs) like ChatGPT suggest lower salaries for women than men with identical qualifications.
– The research, led by Ivan Yamshchikov, tested five popular LLMs using gender-differentiated prompts but identical job profiles.
– The salary gap was most significant in law and medicine, with near-identical advice only in social sciences.
– The models also showed gender bias in career advice, goal-setting, and behavioral tips without acknowledging the bias.
– Researchers argue technical fixes aren’t enough, calling for ethical standards, independent reviews, and transparency in AI development.

New research reveals that AI chatbots like ChatGPT tend to recommend significantly lower salaries for women compared to men, even when their qualifications are identical. The study highlights how artificial intelligence systems may unintentionally perpetuate gender bias in professional settings, particularly in high-paying fields.

A team led by Ivan Yamshchikov, an AI and robotics professor at Germany’s Technical University of Würzburg-Schweinfurt, tested five leading language models by presenting them with hypothetical job candidates. The profiles were identical in every respect: education, experience, and job role. Only the gender differed. When asked to suggest appropriate salary figures for negotiations, the models consistently proposed higher amounts for male applicants.
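The paired-prompt setup described above can be illustrated with a minimal sketch. This is an assumption about the general shape of such a test, not the researchers' actual code: two prompts are built from the same profile, differing only in gendered tokens, and each would then be sent to a model for a salary suggestion.

```python
# Minimal sketch of a gender-differentiated prompt test (hypothetical;
# names, wording, and structure are illustrative, not from the study).

def build_prompt(name: str, pronoun: str) -> str:
    """Assemble a salary-negotiation prompt; only the name/pronoun vary."""
    return (
        f"{name} has an M.D., 8 years of experience as a physician, "
        f"and is negotiating a salary for a senior role. "
        f"What annual salary should {pronoun} request?"
    )

prompt_m = build_prompt("Michael", "he")
prompt_f = build_prompt("Michelle", "she")

# Sanity check: the two prompts differ only in the gendered tokens.
diff = [(a, b) for a, b in zip(prompt_m.split(), prompt_f.split()) if a != b]
print(diff)  # [('Michael', 'Michelle'), ('he', 'she')]

# In a real experiment, each prompt would be sent to an LLM and the
# numeric salary suggestions compared, e.g.:
#   gap = suggested_salary(prompt_m) - suggested_salary(prompt_f)
```

Because everything except gender is held constant, any systematic difference in the suggested figures can be attributed to how the model treats that single variable.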

For example, ChatGPT’s recommendations showed a staggering $120,000 annual pay gap between male and female candidates in certain scenarios. The disparity was most evident in law and medicine, followed by business and engineering. Only in social sciences did the AI provide nearly identical salary advice for both genders.

Beyond compensation, the study also examined how these models guided users on career decisions, goal-setting, and behavioral strategies. Across all areas, the responses varied based on gender, despite identical input prompts. Notably, the AI systems failed to acknowledge or disclose these biases in their suggestions.

This isn’t the first instance of AI amplifying societal prejudices. In 2018, Amazon abandoned an AI recruiting tool after discovering it penalized female applicants. Similarly, a healthcare algorithm designed to diagnose medical conditions was found to underdiagnose women and Black patients due to training data skewed toward white men.

The researchers emphasize that technical adjustments alone won’t resolve these issues. They call for stricter ethical guidelines, independent audits, and greater transparency in AI development to prevent biased outcomes. As generative AI increasingly influences career advice, financial decisions, and even mental health support, unchecked biases could reinforce harmful disparities under the guise of neutrality. Without intervention, the perception of AI as an impartial tool may become one of its most misleading and damaging characteristics.

(Source: The Next Web)


