AI for promoting diversity: Emotional Learning



An April 2019 research paper titled “Discriminating Systems: Gender, Race, and Power in AI” states that “women comprise only 15% of AI research staff at Facebook and 10% at Google,” and that “there is no public data on trans workers or other gender minorities.” Extending the diversity statistics to race, the paper notes that “only 2.5% of Google's workforce is black, while Facebook and Microsoft are each at 4%.” On authoring papers and leading research, it shows that “just 18% of Machine Learning researchers and paper authors were women.” These numbers are considerably worse than the gender gap in STEM as a whole, where, on average, around 30% of the world’s researchers are women.



This lack of diversity might seem like a problem only for these companies and research communities, and for their present and future employees and members. One might even think that, since the world is moving towards artificial intelligence, and it is AI that increasingly has the power to make decisions, faster than humans can, this bias will fade out on its own. As we use AI to select the best resumes from a hiring pool, surely machines can’t be biased like humans; machines must look beyond gender, ethnicity, and background and consider ONLY the most deserving candidate. As we use face recognition AI for security, authorization, and even important government procedures, we should finally be able to strike the right diversity balance in the public sector, in identifying people, in immigration services, in banks, and so on, because machines can’t be discriminatory and can’t reject applications based solely on someone’s race or ethnicity. It’s natural for any logically thinking person to believe that, since humans are handing their power of making “intelligent” decisions over to machines, those decisions will AT LEAST now be free from human bias: free from the inherent preference given to men, especially white men, free from prejudice against women and the LGBTQIA+ community, and devoid of all discrimination based on gender, sexuality, skin colour, background, or financial situation.



Here’s the loophole that we very casually overlook. There is a well-known saying that technology represents the society that created it. The AI we build for tomorrow is a reflection of the society of today. Humans have managed to transfer their social hierarchies and biases to machines. We have achieved the impossible. We deserve some praise for that :)


Let me break it down for you; you don’t have to be a data scientist to see what’s wrong here. I can walk through every scenario I mentioned above and show how AI is essentially enshrining, and in fact promoting, human biases.


1. A resume-shortlisting algorithm built to automate the hiring process and pick the top candidates out of a pool of several thousand applicants. What’s wrong here?


A resume-shortlisting model is fed resumes to learn from: the company’s selected candidates over the past ten years. That data is already biased, because recruiters have historically selected fewer women, people of colour, and people of diverse sexualities. White men have simply been given preference over other candidates, possibly because of recruiters’ bias, or because of a lack of competent candidates from other backgrounds owing to the lack of resources available to them. Over the last decade times have changed, and competency among candidates is evening out regardless of background, yet a machine is now being used to shortlist them. And what is this machine trained on? Treating terms associated with “he/him”, “men”, or “American” as preferable to terms such as “she/her”, “women”, or “African American”. The model is effectively rewarded for choosing a white male and penalized for choosing someone of colour. Naturally, it will reject a competent candidate who happens to be a woman, a person of colour, or a member of the LGBTQIA+ community, whose family could not provide them with the best resources or who could not shine in institutions that discriminated against them.
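To make this concrete, here’s a minimal sketch of how a text classifier absorbs that history. The resumes and hiring labels below are toy data invented for this post (not any company’s real system or dataset), but the mechanism is the same: the model learns whatever the past decisions rewarded.

```python
# A minimal, purely illustrative sketch of a resume screener trained on
# biased historical hiring decisions. Toy data only.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical past resumes and the recruiter's decisions (1 = hired, 0 = rejected).
# The outcomes correlate with gendered words, not with merit.
resumes = [
    "he led the chess club and captained the football team",
    "he built software at a startup and mentored juniors",
    "she led the chess club and captained the hockey team",
    "she built software at a startup and mentored juniors",
    "he organised hackathons and tutored mathematics",
    "she organised hackathons and tutored mathematics",
]
hired = [1, 1, 0, 0, 1, 0]  # biased historical labels

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# Inspect which words the model learned to reward or penalise.
weights = dict(zip(vectorizer.get_feature_names_out(), model.coef_[0]))
for word in ("he", "she", "software", "mentored"):
    print(f"{word:10s} weight = {weights[word]:+.2f}")
# With labels like these, "he" gets a positive weight and "she" a negative one,
# even though the substantive content of the resumes is identical.
```

Nothing in this sketch ever lists gender as an explicit feature; the bias arrives entirely through the historical labels.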


2. A face recognition algorithm used for identifying citizens, approving loan applications, immigration services, entry into offices or public places, etc.


Recent studies have shown that the results of these algorithms are heavily influenced by age, gender, and ethnicity. Error rates can be as much as 100 times higher when identifying an African American woman or an Asian woman than a middle-aged white man. A study by the MIT Media Lab found that Rekognition (Amazon’s well-known facial recognition software) performed worse when identifying an individual’s gender if they were female or darker-skinned. In tests led by MIT’s Joy Buolamwini, Rekognition made no mistakes when identifying the gender of lighter-skinned men, but it mistook women for men 19 percent of the time and mistook darker-skinned women for men 31 percent of the time. Similar results have been observed across roughly 200 algorithms designed by tech companies and used globally: one algorithm misidentified a white man about 0.8% of the time, an impressive accuracy, but misidentified a dark-skinned woman about 35% of the time, an alarmingly high error rate for an application this widespread. Another paper highlights that a commonly used public dataset of faces called Labeled Faces in the Wild, maintained by the University of Massachusetts, Amherst, contains only 7 percent black faces and 22.5 percent female faces, making a classifier trained on these images less able to identify women and people of colour.
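This is why researchers report error rates per demographic group instead of a single overall accuracy. Here’s a minimal sketch of such a disaggregated evaluation; the records below are invented for illustration and are not drawn from Gender Shades or any real benchmark.

```python
# A minimal sketch of a disaggregated evaluation: instead of one overall
# accuracy number, compute the error rate separately per demographic group.
from collections import defaultdict

# (group, true_gender, predicted_gender) for a hypothetical face dataset.
records = [
    ("lighter-skinned male",   "male",   "male"),
    ("lighter-skinned male",   "male",   "male"),
    ("darker-skinned female",  "female", "male"),    # misclassified
    ("darker-skinned female",  "female", "female"),
    ("lighter-skinned female", "female", "male"),    # misclassified
    ("darker-skinned male",    "male",   "male"),
]

errors = defaultdict(lambda: [0, 0])  # group -> [mistakes, total]
for group, truth, prediction in records:
    errors[group][0] += int(truth != prediction)
    errors[group][1] += 1

for group, (wrong, total) in errors.items():
    print(f"{group:22s} error rate = {wrong / total:.0%}  ({wrong}/{total})")
# A single aggregate accuracy can hide the fact that almost all of the
# mistakes land on one or two groups.
```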



3. Let’s take a look at a criminal-justice algorithm widely used across the United States that estimates the recidivism potential of a person who is in court for a petty offense.


The algorithm returns a mathematical probability of the person committing another crime, on the basis of which a judge can hand down a sentence, send them to detention or rehab, or do whatever the judge deems fit. On the surface this sounds like an intelligent application, but how accurate are its predictions? It was found that only about 20% of the people it predicted would commit a crime actually did, which means roughly 80% of the people sentenced on the strength of this algorithm did not deserve their fate. Looking into how the algorithm works, it turned out to be trained on a dataset of past recidivism cases and to take into account gender, ethnicity, background, financial status, and so on. It falsely flagged dark-skinned defendants as having a higher probability of recidivism than comparable white-skinned defendants.
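Auditing a claim like “only about 20% of those flagged actually reoffended” comes down to two simple checks: the precision of the high-risk flag, and how often each group gets falsely flagged. Here’s a hedged sketch with made-up records; the audit helper below is hypothetical, not part of any real tool.

```python
# A minimal sketch of a recidivism-score audit. All records are invented
# for illustration; they are not real case data.

def audit(records):
    """records: list of (group, flagged_high_risk, actually_reoffended)."""
    flagged = [r for r in records if r[1]]
    precision = sum(r[2] for r in flagged) / len(flagged)
    print(f"precision of 'high risk' flag: {precision:.0%}")

    # False positive rate per group: of those who did NOT reoffend,
    # how many were still flagged as high risk?
    for group in sorted({r[0] for r in records}):
        negatives = [r for r in records if r[0] == group and not r[2]]
        false_positives = sum(r[1] for r in negatives)
        print(f"{group:8s} false positive rate = {false_positives / len(negatives):.0%}")

records = [
    ("black", True, False), ("black", True, False), ("black", True, True),
    ("black", False, False), ("white", True, True), ("white", False, False),
    ("white", False, False), ("white", False, True),
]
audit(records)
# Even with equal base rates, the flag can fall far more often, and more
# wrongly, on one group than the other.
```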


4. An example that’s more relatable to everyone: try looking up the phrases “working professional woman” and “working professional man” on Google Images and pay close attention to the search results.

Fig 1: Google Images results for “working professional woman”

Fig 2: Google Images results for “working professional man”

Let’s analyze these results carefully: the number of women of colour in Fig 1 is 3 out of 21.

Similarly, the number of men of colour in Fig 2 is an appalling 1 out of 21.



If this still doesn’t put my point across, let’s look at the “tag” options for refining the search results.





For women, some of the tags include “black”, “mature female”, “beautiful female”, “attractive”, “business woman” and for men, these tags include “middle aged”, “handsome”, “office”, “job”, “laptop”.

How many of these tags are actually related to my search? I’ll leave that up to you to decide. In a situation like this, a lot of things are at fault: Google’s algorithm treating “black” as a distinguishing tag for a “working professional woman” is the first, but not the last. It’s also the metadata entered by the users uploading the images to the internet, the tags provided by the users viewing them, and the people who have considered these results valid and kept using and thereby validating them, including everyone who has used these tags to view images of a “working professional woman”. It is, at bottom, human bias that has very subtly been transferred to the machine without its designers realizing it.

Google Images was conceived in response to what people most wanted to see. Maybe it hasn’t decided yet what we most need to see.

5. There have been experiments and attempts to create models that identify a person’s age, gender, ethnicity, and even their sexuality from their face and bodily features. What could possibly excuse invading someone’s privacy like this without their consent? One model was built to identify trans people before, during, and after hormone therapy. Its dataset was scraped from YouTube creators who had shared their transition videos to tell their stories and feel more confident in their own skin; their images were used without their consent. Most frighteningly, such systems effectively revive physiognomy, a long-debunked and infamously racist pseudoscience that used subtle differences in human faces and bone structure to justify discrimination. We should be asking whether certain systems should be designed at all. Who green-lighted the development of a homosexuality classifier? What should the consequences be for doing so?


By now, you must be wondering about the cause of all this. Are the brightest brains in the world, the ones employed by tech giants, not taking the need for diversity into account? Well, probably not. The thing that needs to be carefully analyzed for any machine learning model is its training dataset. If the dataset of a facial recognition system consists mostly of white men, how can we expect it to identify women or people of colour? If the dataset of a criminal-justice system consists mostly of people of colour, it is no surprise that any new petty offender who is a person of colour will be predicted to be a serious recidivism risk.
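So the very first check worth running is embarrassingly simple: count what the training data actually contains. A minimal sketch, assuming a dataset with hypothetical gender and skin-tone metadata fields, might look like this.

```python
# A minimal sketch of a dataset-composition audit. The samples and
# metadata fields are hypothetical stand-ins for a real dataset's labels.
from collections import Counter

faces = [
    {"gender": "male",   "skin": "lighter"},
    {"gender": "male",   "skin": "lighter"},
    {"gender": "male",   "skin": "lighter"},
    {"gender": "female", "skin": "lighter"},
    {"gender": "male",   "skin": "darker"},
]

for attribute in ("gender", "skin"):
    counts = Counter(sample[attribute] for sample in faces)
    total = sum(counts.values())
    shares = ", ".join(f"{value}: {count / total:.0%}" for value, count in counts.items())
    print(f"{attribute}: {shares}")
# If one group makes up only a sliver of the training data, the model has
# had almost nothing to learn their faces from.
```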


We also need to rethink the way AI systems are built so that discrimination and bias are confronted at the design stage. Experts suggest rigorous testing, trials, and auditing in sensitive domains, and expanding the field of bias research so that it also encompasses the social issues caused by the deployment of AI.
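One way to make that testing concrete is a release gate that fails whenever the error-rate gap between demographic groups exceeds a chosen threshold. The 5-point threshold and the group names below are assumptions for illustration, not an established standard.

```python
# A sketch of a fairness "release gate": refuse to ship a model whose
# per-group error rates diverge too much. Threshold is a hypothetical policy.

MAX_ERROR_GAP = 0.05  # assumed policy: no group may trail by more than 5 points

def check_error_gap(error_rates, max_gap=MAX_ERROR_GAP):
    """error_rates: dict mapping group name -> error rate on a held-out audit set."""
    gap = max(error_rates.values()) - min(error_rates.values())
    assert gap <= max_gap, (
        f"Per-group error gap {gap:.1%} exceeds {max_gap:.1%}: {error_rates}"
    )

# Illustrative audit results, echoing the kind of disparity reported above:
check_error_gap({"lighter-skinned men": 0.008, "darker-skinned women": 0.35})
# Raises AssertionError: this model should not ship until the gap is closed.
```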


Without our intervention, a society that has historically embraced white supremacy, patriarchy, and harmful assumptions about gender and sexuality will produce technology that enshrines those values. What data scientists can’t do is change the mindset of AI’s users by explaining to them where they are horribly wrong; what they can do is make our machines more diverse and fair (something the human brain, at least right now, is increasingly far from). Data scientists have the opportunity to change how people think by changing how the machine shapes the future. As search results change, people’s perceptions and biases are challenged; users are confronted with fair and diverse results and are compelled to modify their outlook.



Data scientists can’t change the present, but they can create AI that massively revolutionizes the future.

Depending on how AI is built, tested, and deployed, it can create a future that is diverse and inclusive and that benefits even the most underprivileged in society, or it can widen the divide in the existing social landscape and, in fact, multiply it exponentially.


Fighting the lack of diversity just on paper and trying to change mindsets isn’t enough. The AI of the present builds the future, and unless THAT is feminist, egalitarian, inclusive, and diverse in every way, we can’t and shouldn’t expect the future our AI creates to be so. In conclusion, we need to build AI that ‘thinks’ and ‘feels’, AI with ‘emotions’ and the ability to reason beyond the biases we transfer to it.




This article was us stepping into a slightly different zone than usual, but it was important that we did: we hope to build a community of diverse data scientists who can revolutionize AI.

Feel free to hit us up on our mailbox and if you liked this blog, please subscribe to our page so we can update you every time we write something new!


Also, Happy Learning and Changing the World!

And as always, Thanks for reading!





#diversity #promotingdiversity #AI #datascientist #blog #blogging #deeplearning #emotionallearning #inclusivetech

#datadweebs
