Three Things That Keep Me Up at Night: Climate Change, AI Decisions, and Trust in Technology

I want to start by discussing three things that keep me up at night. The first, which may be a common worry for you as well, is climate change.

The second concern is that people may have no idea that artificial intelligence is making decisions that directly impact their lives: determining the interest rate on a loan, deciding whether they get the job they applied for, or whether their child gets into the college of their choice. Today, AI is making decisions that directly affect you.

The third issue is that even when people are aware that AI is making decisions about them, they may assume that because it’s not a fallible human with biases, the AI will make morally or ethically sound decisions. This assumption could not be further from the truth. In fact, over 80% of proof-of-concept projects involving AI get stalled in testing because people do not trust the results from the AI model.

Building Trust in Artificial Intelligence: The Five Pillars

We’re going to talk a lot about trust. When you’re thinking about what it takes to earn trust in an AI system your organization develops or procures, there are five pillars to consider:

  1. Fairness: How can you ensure that the AI model is fair to everyone, particularly historically underrepresented groups?
  2. Explainability: Is your AI model explainable? Can you tell an end user what datasets were used to curate the model, what methods and expertise were involved, and the data lineage and provenance associated with how that model was trained?
  3. Robustness: Can you assure end users that the AI model cannot be hacked to disadvantage others or to benefit one person over another?
  4. Transparency: Are you informing people upfront that an AI model is being used to make decisions? Are you providing access to a fact sheet or metadata so that they can learn more about the model?
  5. Data Privacy: Are you ensuring the privacy of people’s data?
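To make the first pillar concrete, here is a minimal sketch of one common fairness check, the disparate impact ratio: the rate of favorable outcomes for an underrepresented group divided by the rate for the majority group. The function names and the sample data are illustrative, not part of any particular toolkit; a frequent rule of thumb (the "four-fifths rule") flags ratios below 0.8 for review.

```python
def selection_rate(outcomes):
    """Fraction of favorable (True) outcomes in a group."""
    return sum(outcomes) / len(outcomes)

def disparate_impact(minority_outcomes, majority_outcomes):
    """Ratio of the minority group's selection rate to the majority's."""
    return selection_rate(minority_outcomes) / selection_rate(majority_outcomes)

# Hypothetical loan-approval outcomes (True = approved)
minority = [True, False, False, True, False]   # 2/5 approved -> rate 0.4
majority = [True, True, False, True, True]     # 4/5 approved -> rate 0.8

ratio = disparate_impact(minority, majority)
print(round(ratio, 2))  # 0.5: below the 0.8 rule of thumb, so this model warrants review
```

In practice a production team would compute checks like this across many protected attributes and outcome definitions, but even this simple ratio shows how a fairness claim can be made measurable rather than aspirational.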

IBM’s Three Principles for AI in Organizations

IBM has established three principles when considering AI in an organization:

  1. The purpose of artificial intelligence is to augment human intelligence, not to replace it.
  2. Data and the insights derived from it belong to the creator alone.
  3. AI systems, and indeed the entire AI lifecycle, should be transparent and explainable.

Addressing the Socio-Technological Challenge of AI

As you think about earning trust in artificial intelligence, remember that this is not just a technological challenge. It can’t be solved by simply deploying tools and technology. This is a socio-technological challenge, meaning it must be addressed holistically. Here are three major aspects to consider:

  1. People and Culture: Consider the culture of your organization and the diversity of your teams. Who is curating the data to train the model? How many women and minorities are on that team? Diverse teams reduce the chance of error, which is crucial in AI.
  2. Process and Governance: What promises will your organization make to both employees and the market regarding standards for fairness, explainability, and accountability in your AI models?
  3. Tooling: What tools, AI engineering methods, and frameworks can you use to ensure these five pillars are met?
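One lightweight piece of tooling that supports the transparency pillar is a machine-readable "fact sheet" for each model. The sketch below is a hypothetical schema, not a standard format; the field names mirror the questions raised under explainability and transparency above.

```python
from dataclasses import dataclass, field, asdict

@dataclass
class ModelFactSheet:
    """Illustrative metadata record accompanying a deployed AI model."""
    name: str
    purpose: str
    training_datasets: list          # datasets used to curate the model
    methods: str                     # methods and expertise involved
    data_lineage: str                # provenance of the training data
    fairness_checks: list = field(default_factory=list)

# Hypothetical example entry
sheet = ModelFactSheet(
    name="loan-approval-v2",
    purpose="Recommend approval decisions for consumer loan applications",
    training_datasets=["2015-2020 anonymized application records"],
    methods="Gradient-boosted trees, reviewed by a cross-functional team",
    data_lineage="Sourced from internal systems; PII removed before training",
    fairness_checks=["disparate impact ratio", "equal opportunity difference"],
)

print(asdict(sheet)["name"])  # prints "loan-approval-v2"
```

Publishing a record like this alongside the model gives end users the upfront notice and metadata access that the transparency pillar asks for, without requiring them to understand the model internals.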