New York: A group of researchers has found that recommendation algorithms on social media platforms may discriminate on the basis of gender, age and race.
According to researchers from Columbia University School of Engineering and Applied Science, social media and the sharing economy have created new opportunities by leveraging online networks to build trust and remove marketplace barriers.
Their research suggested that old gender and racial biases persist — from men’s greater popularity on Twitter to African Americans’ lower acceptance rates on Airbnb.
They used the photo-sharing site Instagram as a test case and demonstrated how two common recommendation algorithms amplify a network effect known as homophily, in which similar or like-minded people cluster together.
“We are simply showing how certain algorithms pick up patterns in the data. This becomes a problem when information spreading through the network is a job ad or other opportunity,” the study’s lead author Ana-Andreea Stoica said in a university statement.
The team showed how algorithms turned loose on a network with homophily effectively make women less visible. They found that the women in their dataset — whose photos were slightly less likely to be ‘liked’ or commented on — became even less popular once recommendation algorithms were introduced.
“Algorithms may put women at an even greater disadvantage,” Stoica added.
The researchers scraped their data from Instagram in 2014, after Facebook bought the company but before automated prompts made it easier to connect with friends-of-friends.
Though women outnumbered men in their sample of 550,000 Instagram users (54 per cent to 46 per cent), the researchers found that men’s photos tended to be better received — 52 per cent of men received at least 10 ‘likes’ or comments compared to 48 per cent of women.
The researchers found that men were 1.2 times more likely to ‘like’ or comment on other men’s photos than on women’s, while women were just 1.1 times more likely to engage with other women’s photos.
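To see how a small engagement gap can widen under popularity-driven recommendation, consider a deliberately simplified toy model. This is not the study’s actual method — only the group sizes, the initial ‘likes’ split and the 1.2x/1.1x same-group engagement ratios come from the article; the update rule, in which each round’s attention flows toward already-popular accounts tilted by homophily, is an illustrative assumption.

```python
# Toy model (illustrative assumption, not the study's model): attention each
# round flows to accounts in proportion to current popularity, weighted by
# the same-group engagement ratios reported in the article. Repeating this
# feedback loop widens the initial popularity gap between the two groups.

N_MEN, N_WOMEN = 0.46, 0.54   # share of users in the sample (from the article)
HOM_MM, HOM_WW = 1.2, 1.1     # same-group engagement multipliers (from the study)

def step(p_men, p_women):
    """One recommendation round: returns the new popularity shares."""
    # Fraction of a man's attention that goes to men: popularity-weighted,
    # with men's photos boosted 1.2x for male viewers.
    mm = HOM_MM * p_men / (HOM_MM * p_men + p_women)
    # Fraction of a woman's attention that goes to women: women's photos
    # boosted 1.1x for female viewers.
    ww = HOM_WW * p_women / (p_men + HOM_WW * p_women)
    to_men = N_MEN * mm + N_WOMEN * (1 - ww)
    return to_men, 1 - to_men

# Start from roughly the observed 'likes' split and iterate the feedback loop.
p_men, p_women = 0.52, 0.48
for _ in range(50):
    p_men, p_women = step(p_men, p_women)

print(round(p_men, 3), round(p_women, 3))  # women's share shrinks below 0.48
```

Because men’s stronger same-group preference (1.2x vs. 1.1x) compounds with their slight initial popularity edge, each round shifts a little more attention toward men, mirroring the paper’s point that the algorithm amplifies a bias already present in the data.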
“Algorithms pick up subtle patterns and amplify them. We’re not asking that algorithms be blind to the data, just that they correct their own tendency to magnify the bias already there,” said the study’s senior author Augustin Chaintreau.