Key requirements for AI fairness

May 02, 2019

Editor's Note: This article is part of an AspenCore Special Project on AI Fairness.

BOULDER CREEK, Calif. -- A question that often arises when discussing AI fairness is: who decides what fairness means? Can anyone agree on a definition, and on how developers should apply it to algorithms and models? One paper presented at the 2018 ACM/IEEE International Workshop on Software Fairness found 20 different definitions. Which perspectives, disciplines, and stakeholders need to be involved in determining the definition, or definitions if more than one is needed?
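To see why those definitions resist consolidation, consider a minimal sketch of two of the most commonly cited ones: demographic parity, which compares selection rates across groups, and equal opportunity, which compares true-positive rates. The data below is a toy illustration, not drawn from the workshop paper; it shows that the same predictions can satisfy one criterion while violating the other, which is exactly why "fair" depends on which definition a developer picks.

```python
import numpy as np

# Toy data: group membership, true labels, and model predictions.
# These arrays are illustrative only.
group = np.array([0, 0, 0, 0, 1, 1, 1, 1])   # 0 = unprivileged, 1 = privileged
y_true = np.array([1, 1, 0, 0, 1, 1, 0, 0])
y_pred = np.array([1, 0, 1, 0, 1, 1, 0, 0])

def selection_rate(pred, mask):
    """P(Y_hat = 1) within a group -- what demographic parity compares."""
    return pred[mask].mean()

def true_positive_rate(true, pred, mask):
    """P(Y_hat = 1 | Y = 1) within a group -- what equal opportunity compares."""
    positives = mask & (true == 1)
    return pred[positives].mean()

for g in (0, 1):
    mask = group == g
    print(f"group {g}: selection rate = {selection_rate(y_pred, mask):.2f}, "
          f"TPR = {true_positive_rate(y_true, y_pred, mask):.2f}")
# Output: both groups have a 0.50 selection rate (demographic parity holds),
# but TPRs of 0.50 vs. 1.00 (equal opportunity fails). Two reasonable
# definitions of fairness disagree about the same classifier.
```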

Many organizations and research groups have published guidelines and best practices delineating what AI fairness and ethics should be and how they should be implemented (see Guidelines links below). Beyond these suggestions, there are also some recent attempts at codifying guidelines and best practices into law (see Law links, bottom of page 2). Several guidelines have been published by multi-disciplinary, multi-stakeholder groups, such as the Partnership on AI, which now has about 90 partners, including both non-profits and companies such as IBM. The most recent, and probably the highest-profile, is the European Commission's Ethics Guidelines for Trustworthy AI, released in April 2019.

The EC guidelines reflect a consensus that has emerged over the last couple of years on the major concepts and concerns, in particular that AI should be trustworthy, and that AI fairness is part of that trustworthiness. As articulated by the EC, trustworthy AI has seven key requirements:

  1. Human agency and oversight

  2. Technical robustness and safety

  3. Privacy and data governance

  4. Transparency

  5. Diversity, non-discrimination and fairness

  6. Environmental and societal well-being

  7. Accountability

For IBM, four main areas, which together cover most of these seven requirements, contribute to trust in AI. They are 1) fairness, which includes detecting and mitigating bias in AI solutions; 2) robustness, or reliability; 3) explainability, which means knowing how the AI system makes decisions so that it isn't a black box; and 4) lineage, or traceability.
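Lineage is the least discussed of the four areas but the most straightforward to prototype. The sketch below is an illustration assuming a generic training pipeline, not IBM's own tooling; the file names and parameters are placeholders. It records the provenance details a reviewer would need to trace a deployed model back to its training data and configuration.

```python
import hashlib
import json
import time

def dataset_fingerprint(path: str) -> str:
    """Hash the training data so any later change to it is detectable."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def record_lineage(model_name: str, data_path: str, params: dict) -> dict:
    """Capture what was trained, on which data, and when, for one run."""
    entry = {
        "model": model_name,
        "trained_at": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "dataset_sha256": dataset_fingerprint(data_path),
        "hyperparameters": params,
    }
    with open(f"{model_name}_lineage.json", "w") as f:
        json.dump(entry, f, indent=2)
    return entry

# Hypothetical usage -- the file and parameters below are placeholders:
# record_lineage("credit_model", "applicants.csv", {"max_depth": 4, "seed": 42})
```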

Francesca Rossi, AI ethics global leader at IBM, is one of the 52 members of the EC’s High-Level Expert Group on Artificial Intelligence that created its guidelines. She told EE Times a little about how the EC group arrived at its list. "We started from the European Charter of fundamental rights, then listed four main principles, and finally defined seven concrete requirements for trustworthy AI," she said. "Common in all these efforts, and to IBM's trust and transparency principles, is the idea that we need to build trust in AI and also in those who produce AI by adopting a transparency approach to development."

Rossi, who also co-authored the November 2018 AI4People Ethical Framework for a Good AI Society, emphasized that each successive effort in writing guidelines builds on earlier ones. For example, the authors of the AI4People initiative, whose paper contains several concrete recommendations for ethical AI, first looked at many guidelines and standards proposals. These included the 23 principles for AI from the 2017 Asilomar conference, the Montreal Declaration for Responsible AI, the IEEE standards proposals, the European Commission's earlier draft AI ethics guidelines, the Partnership on AI's tenets, and the AI code from the UK Parliament's House of Lords report. The authors grouped these together and came up with a synthesis of five principles for AI development and adoption.

Work to combine and merge these different efforts is ongoing, even if it doesn't converge on a single set of principles for every kind of AI or for every geographical and cultural region. "It's important that this work of combining and merging is done in a multi-disciplinary way, with the collaboration of social scientists and civil society organizations -- for example, the ACLU -- that understand the impact of this technology on people and society, and the possible issues in deploying AI in a pervasive but not responsible way," she said.

IBM has created its own practical guide for designers and developers, Everyday Ethics for Artificial Intelligence. The guide helps developers avoid unintended bias and think about trusted AI issues at the very beginning of the design and development phases of an AI system. "This is fundamental because the desired properties of trusted AI cannot be added to a system once it's deployed," said Rossi.

Guidelines and Best Practices for Achieving AI Fairness

There are several proposals for guidelines, best practices, and standards for algorithms and the deployment of AI. Here's a sampling.

>> Continue reading page two of this article, originally published on our sister site, EE Times: "Can AI Fairness Be Regulated?"