AI experts address key questions about AI fairness

May 01, 2019

Editor's Note: This article is part of an AspenCore Special Project on AI Fairness.

BOULDER CREEK, Calif. -- We talked to renowned researchers devoted to the persistent AI issues of fairness, robustness, explainability, and data provenance. We wanted to find out the state of current research in AI fairness, and how that research can be translated into guidelines, best practices, frameworks, and actual solutions and tools for developers.

The experts in academia and industry we interviewed for this special report provided the following answers to five basic questions about AI fairness and ethics.

1) EE Times' research indicates that the main issues in AI fairness as it relates to our engineer/developer audience are fairness, robustness, explainability, and data provenance. Is this the same list you would give?

RUS: Yes. All autonomous decision-making systems will need a set of tools to assure consumer trust in their operations. This includes explanation and interpretability of decisions, so we know what is going on; privacy, so people can share their data; fairness; and accountability and provenance, to make sure the data and systems are not misused.

ZOU: All of these – fairness, robustness, explainability, and data provenance – are essential issues, especially as algorithms are deployed more broadly in many applications. These issues are also deeply intertwined. Fairness in the end comes down to robustness and governance questions. When we design AI algorithms we want them to be fair, and that means they must be robust when deployed across different geographic settings and populations. Data governance is also closely related to fairness.

The AI Experts

  • Daniela Rus is director of MIT's Computer Science and Artificial Intelligence Lab (CSAIL), and professor of electrical engineering and computer science

  • James Zou is assistant professor of biomedical data science and, by courtesy, of computer science and of electrical engineering at Stanford University, and a member of the Stanford Institute for Human-Centered Artificial Intelligence

  • Francesca Rossi is AI Ethics Global Leader at IBM, and member of the European Commission’s High-Level Expert Group on Artificial Intelligence

ROSSI: We believe that trust is essential for fully adopting AI and reaping its beneficial impact. At IBM, we defined a holistic approach to achieving 'trusted AI,' with four key areas: fairness, which is how to detect and mitigate bias in AI solutions; robustness, which is about security and reliability; explainability, which is knowing how the AI system makes decisions so it is not a black box; and lineage, which refers to ensuring that all components and events are trackable throughout the complete lifecycle of an AI system.

[Figure: Gender bias in word embeddings over time (Stanford)]

Word embeddings, a popular framework for representing text data as vectors, are widely used in many machine learning and natural language processing tasks, but they exhibit gender and racial stereotypes. Metrics based on them can characterize how gender stereotypes and attitudes toward ethnic minorities in the US evolved during the 20th and 21st centuries. Shown here, the average word-embedding gender bias across occupations over time, overlaid with the average relative percentage of women in those occupations, suggests that bias has been decreasing. The metrics and framework developed can be applied to reveal bias in other types of datasets. (Source: PNAS)
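
As a rough sketch of how such a metric can be computed (this follows the general approach, not the paper's exact formula): score an occupation word by the difference between its cosine similarity to a set of female words and to a set of male words, using pretrained embeddings. The word lists below are illustrative assumptions; the vectors come from gensim's downloadable GloVe model.

```python
# Hedged sketch of an embedding gender-bias metric (not the paper's exact formula).
import numpy as np
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-100")  # pretrained GloVe word vectors

FEMALE = ["she", "her", "woman", "daughter", "mother"]  # illustrative lists
MALE = ["he", "his", "man", "son", "father"]

def unit(v):
    """Normalize a vector to unit length."""
    return v / np.linalg.norm(v)

def group_mean(words):
    """Average (then normalize) the embeddings for a list of words."""
    return unit(np.mean([vectors[w] for w in words], axis=0))

def occupation_bias(word):
    """Cosine similarity to the female group minus the male group.
    Positive values lean female; negative values lean male."""
    v = unit(vectors[word])
    return float(v @ group_mean(FEMALE) - v @ group_mean(MALE))

for job in ["nurse", "engineer", "librarian", "carpenter"]:
    print(f"{job:10s} {occupation_bias(job):+.3f}")
```

Tracking a score like this for occupation words in embeddings trained on text from different decades is what yields the kind of trend plotted above.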

2) Do AI fairness/bias issues that concern developers revolve around mostly datasets, or algorithms/models, or both, and how are the problems different for each?

ZOU: While biased datasets are a major reason algorithms become biased and unfair, bias also arises from the training algorithms themselves. Learning algorithms can propagate and even amplify biases in the training data. That's because learning algorithms are 'greedy': they're designed to optimize for very narrow objectives. Say an algorithm is optimized for maximum overall accuracy across the entire population in the test dataset, but that population includes a minority group that makes up, for example, less than 1% of the samples. An algorithm trained to optimize for the overall population will effectively ignore that subgroup, as the sketch below illustrates.
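
A minimal, self-contained illustration of that failure mode, using synthetic data and scikit-learn (the group sizes, features, and decision rules here are hypothetical): a model fit to maximize average performance scores well overall while doing little better than chance on a sub-1% minority group whose labels follow a different rule.

```python
# Hypothetical illustration: optimizing average accuracy ignores a tiny subgroup.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Majority group (99.5% of samples): label depends on feature 0.
X_major = rng.normal(size=(9950, 2))
y_major = (X_major[:, 0] > 0).astype(int)

# Minority group (0.5% of samples): label depends on feature 1 instead.
X_minor = rng.normal(size=(50, 2))
y_minor = (X_minor[:, 1] > 0).astype(int)

X = np.vstack([X_major, X_minor])
y = np.concatenate([y_major, y_minor])

# The model minimizes average loss over everyone: a 'greedy', narrow objective.
model = LogisticRegression().fit(X, y)

print("overall accuracy :", model.score(X, y))              # high
print("majority accuracy:", model.score(X_major, y_major))  # high
print("minority accuracy:", model.score(X_minor, y_minor))  # near chance
```

The overall number looks healthy because the minority group barely moves the average loss; only per-group evaluation exposes the gap.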

RUS: The problems exist across the board, from data collection, to preparation and handling, to the learning algorithms that ultimately consume the data. On the data-collection side, biases can emerge when the data you collect is unrepresentative of reality or when existing human biases are amplified during the collection process. For example, a facial detection system trained on more light-skinned faces than dark-skinned faces will be more accurate for light-skinned individuals. Identifying these internal biases within large datasets can be extremely time consuming, so it's crucial for our algorithms and models to be able to learn these underlying biases automatically in order to mitigate them. (A simple representation check is sketched below.)
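
A minimal sketch of the data-collection side of that point: before training, compare each group's share of a dataset against a reference population. The group names and reference shares below are hypothetical.

```python
# Hypothetical representation audit: data share vs. reference population share.
from collections import Counter

REFERENCE = {"light_skin": 0.55, "dark_skin": 0.45}  # assumed target mix

def representation_gap(group_labels):
    """Each group's share in the data minus its reference share."""
    counts = Counter(group_labels)
    total = sum(counts.values())
    return {group: counts.get(group, 0) / total - share
            for group, share in REFERENCE.items()}

train_labels = ["light_skin"] * 800 + ["dark_skin"] * 200
print(representation_gap(train_labels))
# {'light_skin': 0.25, 'dark_skin': -0.25} -> dark-skinned faces underrepresented
```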

ROSSI: Data is not the only possible source of embedded bias. For example, bias can also be embedded in the choice of possible decisions the machine makes. Instead of a 'yes or no' decision, it might be a 'one-out-of-many' decision, so the algorithm could be biased if we leave out an option that should be considered.

[Figure: IBM's proposed two-step procedure for rating an AI service against bias]

In a proposed two-step procedure for rating an AI service against bias when its training data is not available, the input data in stage 1 (T1) is unbiased. If, after analysis, the output is found to be biased, the system itself introduced bias. If the output is unbiased, in stage 2 (T2) the system is fed biased input data and its output analyzed again. If that output is biased, the system doesn't introduce bias of its own but follows whatever bias is present in the input data. But if the output is unbiased, the system not only introduces no bias but can also compensate for bias in the input data. (Source: IBM)
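
The caption's decision logic is simple enough to state directly in code. Here is a sketch, assuming a separate bias detector exists (the `is_biased` callable is a hypothetical helper, not an IBM API):

```python
# Sketch of the proposed two-step rating procedure (T1/T2 decision logic only).
def rate_ai_service(outputs_on_unbiased_data, outputs_on_biased_data, is_biased):
    # Stage T1: feed unbiased input data and analyze the output.
    if is_biased(outputs_on_unbiased_data):
        return "system introduces bias"
    # Stage T2: feed biased input data and analyze the output again.
    if is_biased(outputs_on_biased_data):
        return "system follows input bias but introduces none of its own"
    return "system introduces no bias and compensates for input bias"
```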

>> Continue reading page two of this article originally published on our sister site, EE Times: "AI Researchers Answer the 5 Big Questions About Fairness."
