
Tools supporting AI fairness slow to emerge

May 05, 2019


Editor's Note: This article is part of an AspenCore Special Project on AI Fairness.

BOULDER CREEK, Calif. – Guidelines and best practices for defining AI fairness and dealing with bias abound, as we describe in a companion article in this special report, Key requirements for AI Fairness. Tools that design engineers can use to detect and correct bias in algorithms or datasets, however, are in short supply. Such tools are lacking both for engineers developing their own products and for customers who want to optimize third-party AI systems.

So far, only a few technology firms have announced open-source debiasing tools for developers, though university researchers have created various types of debiasing tools, as we report in another companion article, AI experts address key questions about AI fairness. In addition, major professional-services firms like KPMG and Capgemini are designing their own comprehensive, enterprise-scale AI management solutions for their clients. These frameworks and tool suites sometimes include tools for engineers.

[Figure: Stanford debiasing. To reduce bias in word embeddings, a debiasing algorithm removes gender associations from the embeddings of gender-neutral words. Shown: the number of stereotypical (left) and appropriate (right) analogies generated by word embeddings before and after debiasing. The algorithms were also evaluated to ensure they preserved the desirable properties of the original embedding while reducing both direct and indirect gender biases.]
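The projection step at the heart of this kind of debiasing is straightforward to sketch. Below is a minimal illustration in the spirit of the hard-debiasing approach described above, assuming a hypothetical embedding table keyed by word; the he/she pair and the occupation words are illustrative only, and published algorithms are more elaborate (they average many definitional pairs and equalize explicitly gendered word pairs):

```python
import numpy as np

# Hypothetical embedding table: word -> vector (words and size illustrative)
rng = np.random.default_rng(0)
emb = {w: rng.standard_normal(300) for w in ("he", "she", "nurse", "engineer")}

def gender_direction(embeddings):
    """Estimate a gender direction from one definitional pair.
    Real implementations average several pairs (man/woman, father/mother, ...)
    or take the top principal component of their differences."""
    g = embeddings["he"] - embeddings["she"]
    return g / np.linalg.norm(g)

def hard_debias(vec, g):
    """Remove the component of a gender-neutral word's vector that lies
    along the gender direction, then re-normalize."""
    v = vec - np.dot(vec, g) * g   # project out the bias direction
    return v / np.linalg.norm(v)

g = gender_direction(emb)
for word in ("nurse", "engineer"):           # gender-neutral occupation words
    emb[word] = hard_debias(emb[word], g)
    assert abs(np.dot(emb[word], g)) < 1e-9  # no residual gender component
```

After this step, analogy queries of the form "he : X :: she : Y" can no longer exploit a gender component in the debiased words, which is what the before-and-after comparison in the figure measures.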

Some tools are appearing in specific industries that have either experienced high-profile machine-learning-gone-bad horror stories or are at risk of them. These include financial services, such as credit scoring, credit risk, and lending, as we detail in another article in this special report, Reducing Bias in AI Models for Credit and Loan Decisions, as well as hiring decisions. Debiasing and auditing services are also available (see the "Methodologies, Frameworks..." box at the bottom of this page).

The first rudimentary AIs date back to the 1950s, so one might have expected the issue to have come up before now. But questions of fairness tend not to get raised until someone is treated unfairly, and even with all that history, the deployment of AI systems at scale is still new. "So now we're learning things we didn't even envision before," said Aleksandra Mojsilovic, IBM fellow, head of AI Foundations, and co-director of IBM Science for Social Good. "We're in a fact-based learning period."

Even defining what counts as AI is problematic: the line between AI on one hand and data science or data mining on the other is blurred, because much of the latter has been repackaged and advertised as AI.

Implementing AI fairness, and figuring out how it applies to the product you're developing, is complex: it requires getting into the minds of both the practitioner and the user, said Mojsilovic. Developers need to understand what fairness means, keep it in mind, and imagine how their product might affect users positively or negatively.

That underlines why diversity is so important. "Suppose you're building a product and its model. You must check the data for balances and imbalances," she said. "Or, say you find out the data was collected inappropriately for the problem you're trying to solve. You may need to check the model to make sure it's fair and well balanced. Then, you put the algorithm into production and maybe the model sees users it did not see during its training phase. You have to check fairness then, too. Checking fairness and mitigation has to happen throughout the lifecycle of the model."
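One way to make such a lifecycle check concrete is to compute a group fairness metric on the model's outputs before release and again on production data. The sketch below computes a disparate impact ratio on illustrative arrays; the variable names, group encoding, and the 0.8 threshold (the "four-fifths rule" heuristic used in some U.S. employment auditing) are assumptions for the example, not a description of IBM's tooling:

```python
import numpy as np

def disparate_impact(y_pred, protected):
    """Ratio of favorable-outcome rates for the unprivileged group (0)
    versus the privileged group (1); values well below 1.0 warrant review."""
    rate_unpriv = y_pred[protected == 0].mean()  # favorable rate, group 0
    rate_priv = y_pred[protected == 1].mean()    # favorable rate, group 1
    return rate_unpriv / rate_priv

# Illustrative check on held-out (or production) predictions
y_pred = np.array([1, 0, 0, 0, 1, 1, 1, 0])     # 1 = favorable decision
protected = np.array([0, 0, 0, 0, 1, 1, 1, 1])  # hypothetical group labels
ratio = disparate_impact(y_pred, protected)
if ratio < 0.8:  # four-fifths rule heuristic
    print(f"Possible disparate impact: ratio = {ratio:.2f}")
```

The same kind of check can run at each stage Mojsilovic describes: on the raw training data, on model predictions during validation, and on live traffic once the model sees users it never saw in training.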

Fairness-checking and mitigation tools will be both commercial and open source, said Mojsilovic. "Both are equally important, and they will play equally going forward." Eventually, such tools will be embedded in more sophisticated systems, so some will be vertical, tailored to a specific industry or user group. Hybrid open-source/commercial tool systems will also emerge.

New technology is often adopted first in industries where laws and regulations carry the greatest implications, so it's not surprising to see many fairness and mitigation tools being developed in finance, where many decisions made by humans and AIs are heavily scrutinized, said Mojsilovic. These tools will also benefit the legal profession and credit scoring, other areas where the cost of errors is high.

Methodologies, Frameworks and Services for Debiasing AI

Some professional-services firms have designed their own AI management frameworks and tool suites for their clients, which sometimes include tools for engineers. We also list some suggested methodologies that are more specific than guidelines, as well as debiasing and audit services.

Methodologies

Enterprise-scale AI management frameworks

  • Capgemini - Its Perform AI solutions portfolio for manufacturing and other sectors addresses ethics concerns and includes an AI Engineering module for delivering trusted AI solutions in production and at scale.

  • KPMG - Its AI in Control framework includes a Technology management component. The "Guardians of Trust" white paper discusses AI fairness and ethics in the context of the wider issues of corporate-wide trust and governance in analytics.

  • Accenture - Its AI Fairness Tool lets clients assess algorithmic models and the underlying data to correct for bias in both in-house-developed AI and third-party solutions. Its "Teach and Test" tool helps clients validate and test their AI systems, including model debugging, so they act responsibly.

Debiasing and Algorithmic Auditing Services

  • O’Neil Risk Consulting & Algorithmic Auditing - Headed by Cathy O’Neil.

  • Alegion - Prepares training data by converting raw data into large-scale AI training datasets for enterprise-level AI and machine learning, and validates models via a human-assisted training data platform designed to eliminate bias.

>> Continue reading page two of this article originally published on our sister site, EE Times: "Not a Lot of Debiasing, Auditing Tools Yet."
