By Vivian Liu
The software revolution has taken the world by storm. Much of what we see or use in our everyday lives is automated: from data processing to virtual assistants, computer programs help us complete tasks that would otherwise be extremely resource-intensive.
Questions that would once have required significant human capital can now be piped to a computer program or fed to a function for a quick answer. Software is particularly helpful in letting us answer large numbers of objective questions en masse. For example, TurboTax is a well-established computer program that combines a user interface with automated computation to help people calculate their taxes more efficiently.
Although such advances have clearly benefited society, it is also important to step back and scrutinize the demographics of the programmers behind the code. The technology industry suffers from an acute lack of diversity: only 10% of Artificial Intelligence (AI) researchers at Facebook are women, and only 4.5% of workers at Google are African-American. Women currently hold only 25% of all tech jobs in Silicon Valley, and that share has been declining over the last few years. The issue extends to the degrees awarded each year: according to a study conducted by the U.S. Equal Employment Opportunity Commission, men receive at least 70% of all degrees in computer science, mathematics, and engineering each year. In the 1980s, women earned 37% of all computer science degrees; today, that number has dropped to 18%.
There are many reasons for such a stark disparity in the technology sector’s gender and racial diversity. Part of the inequality stems from unequal educational opportunities: from a young age, women and minority groups generally get less exposure to these fields, leaving them at an unfair disadvantage against those who regularly participate in and learn about STEM-related activities. Even those who do earn degrees in tech-related fields are often discouraged from pursuing a career in a field so heavily dominated by white men. In addition, women in the technology industry are just as affected by the “80 cents to the dollar” pay inequity that permeates the American workplace.
This lack of ethnic and gender diversity causes problems that compound the marginalization of minority groups. Because workplace sexism is as pervasive as the technology itself, a significant portion of the population is left out of the field’s development and progress. These demographic disparities have especially large implications for Artificial Intelligence, a major area of computer science whose presence has been growing steadily.
In particular, one task AI can handle is answering a subjective question. Whereas a tax calculation system can feed numbers into a fixed, objective function and get an instantaneous answer, a machine answering a subjective question must learn to simulate how a human being would respond. This is the domain of AI, which involves writing programs that are trained on data to perform tasks that cannot be reduced to a simple calculation.
For example, no simple mathematical function lets a machine perform the subjective job of an application reviewer. Rather than encoding a straightforward operation, the programmer’s goal is to build a machine that simulates the reviewer as closely as possible. By feeding in data consisting of a human reviewer’s decisions under different conditions, a machine can be taught to model how a human reviews an application.
Amazon famously set about building such a program in 2014. Programmers gathered data from past hires: given the components of a resume (university degrees, GPA, experience, and so on), a human reviewer’s hiring decision was fed into a machine learning program, which built a complex mathematical model that output a score for each applicant’s resume. In principle, the program would score each applicant on the strength of their application, so that human reviewers could automatically screen out applications below a certain threshold score and give special consideration to those above it. In practice, however, implicit biases in the human reviewer data used to train the model caused it to discriminate against women. Most alarming was that an occurrence of “women’s college” in an applicant’s resume would automatically reduce the score; in fact, anything involving “women,” like the phrase “women’s chess club,” resulted in a score reduction. This is an especially concerning occurrence of data that reflects harmful biases being embedded and magnified in an AI application.
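The dynamic described above can be illustrated with a toy sketch. Everything in it is hypothetical and invented for illustration: the features, the data, and the simple perceptron learner have nothing to do with Amazon’s actual system. The point is only that when the “hire” labels in the training data are biased against a feature, the learned model ends up penalizing that feature.

```python
# Hypothetical resume features: [strong GPA, has internship, mentions women's org].
# Labels are the (biased) reviewer's decisions: 1 = hire, 0 = reject.
# Note that otherwise strong resumes are rejected when the third feature is set.
data = [
    ([1, 1, 0], 1),
    ([1, 0, 0], 1),
    ([0, 1, 0], 1),
    ([1, 1, 1], 0),  # strong resume, rejected by the biased reviewer
    ([1, 0, 1], 0),
    ([0, 0, 0], 0),
]

# Train a simple perceptron on the reviewer's decisions.
weights = [0.0, 0.0, 0.0]
bias = 0.0
lr = 0.1

for epoch in range(20):
    for x, label in data:
        score = sum(w * xi for w, xi in zip(weights, x)) + bias
        pred = 1 if score > 0 else 0
        err = label - pred
        weights = [w + lr * err * xi for w, xi in zip(weights, x)]
        bias += lr * err

# The learned weight on the women's-org feature is negative: the model has
# absorbed the reviewer's bias and now penalizes that feature on its own.
print(weights)
```

No one wrote “penalize women’s organizations” into the code; the bias arrives entirely through the training labels, which is exactly why it is hard to spot.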
In addition to this application-reviewing program, the problem also appears in facial recognition software: the same program tends to work better for white males than for dark-skinned individuals. This gap in accuracy between fair-skinned and dark-skinned faces does not necessarily come from bad intentions; it is an unfortunate byproduct of a skewed set of programmers and, therefore, a skewed set of training data and code.
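One reason such gaps go unnoticed is that a single overall accuracy number can hide them. The sketch below uses entirely invented results, not measurements from any real system, to show why evaluations should be broken down by group.

```python
from collections import defaultdict

# Hypothetical face-recognition test results: (group, was the prediction correct?).
results = [
    ("light-skinned", True), ("light-skinned", True),
    ("light-skinned", True), ("light-skinned", False),
    ("dark-skinned", True), ("dark-skinned", False),
    ("dark-skinned", False), ("dark-skinned", False),
]

# Tally totals and correct predictions per group.
totals, correct = defaultdict(int), defaultdict(int)
for group, ok in results:
    totals[group] += 1
    correct[group] += ok  # True counts as 1, False as 0

# Overall accuracy looks mediocre but uniform; per-group accuracy reveals the gap.
overall = sum(correct.values()) / sum(totals.values())
per_group = {g: correct[g] / totals[g] for g in totals}
print(overall)    # 0.5
print(per_group)  # {'light-skinned': 0.75, 'dark-skinned': 0.25}
```

Reporting only the 50% aggregate would conceal that the hypothetical system is three times more accurate on one group than the other; disaggregating the evaluation makes the disparity visible.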
People are recognizing the magnitude of the issue, and many are fighting to remedy it. One example is Stanford AI professor Fei-Fei Li’s effort to encourage young women to pursue AI-related careers: in 2015, she founded a summer program called “AI4ALL” that draws girls from all over the world to study computer science at Stanford University. Likewise, outreach programs such as STEM Starters at Columbia University are connecting with inner-city schools to spark students’ interest in STEM, helping equip the next generation to overcome the tech industry’s imbalance.
This is AI’s diversity crisis. The computer programs that help us with our everyday lives have real programmers behind them, and that community suffers from an acute lack of diversity. Extremely sensitive software, such as surveillance and facial analysis systems, is currently created by programmers who are predominantly male and predominantly not people of color. Without a diversification of the perspectives behind computer code, the same biases and inequities that affect society today will be magnified and perpetuated by computer programs. As AI continues to advance and occupy a larger place in our lives, this trend will have dire consequences for society.