Three Things That Keep Me Up at Night: Climate Change, AI Decisions, and Trust in Technology
I want to start by discussing three things that keep me up at night. The first, which you may well share, is climate change; it absolutely keeps me up at night.
The second is that people may have no idea that artificial intelligence is making decisions that directly affect their lives: determining the interest rate on your loan, deciding whether you get the job you applied for, or whether your child gets into their desired college. Today, AI is making decisions that directly affect you. The third, which the rest of this talk addresses, is trust in technology.

Building Trust in Artificial Intelligence: The Five Pillars
We’re going to talk a lot about trust. When you’re thinking about what it takes to earn trust in an AI system developed or procured by your organization, there are five pillars to consider:
- Fairness: How can you ensure that the AI model is fair to everyone, particularly historically underrepresented groups?
- Explainability: Is your AI model explainable? Can you tell an end user what datasets were used to train the model, what methods and expertise were involved, and the lineage and provenance of its training data?
- Robustness: Can you assure end users that the AI model cannot be hacked to disadvantage others or to benefit one person over another?
- Transparency: Are you informing people upfront that an AI model is being used to make decisions? Are you providing access to a fact sheet or metadata so that they can learn more about the model?
- Data Privacy: Are you ensuring the privacy of people’s data?
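To make the transparency and explainability pillars concrete, here is a minimal, illustrative sketch of a model fact sheet of the kind mentioned above. Every field name and value is hypothetical; real fact-sheet schemas vary by organization.

```python
# Illustrative (hypothetical) fact sheet for a deployed model, capturing the
# metadata the transparency and explainability pillars call for: purpose,
# training data lineage, methods, and fairness checks.
model_fact_sheet = {
    "model_name": "loan_interest_rate_model",            # hypothetical name
    "purpose": "Suggest an interest rate for consumer loan applications",
    "training_datasets": ["internal_loans_2015_2020"],   # data lineage/provenance
    "methods": ["gradient-boosted trees"],
    "fairness_checks": ["disparate impact across protected groups"],
    "known_limitations": ["not validated for small-business loans"],
}

def is_complete(fact_sheet, required):
    """Check that the fact sheet answers each required question."""
    return all(field in fact_sheet for field in required)

required_fields = ["purpose", "training_datasets", "methods", "fairness_checks"]
print(is_complete(model_fact_sheet, required_fields))  # True
```

Publishing something like this alongside the model is one way to tell people upfront what was used to build it and what its limits are.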

IBM’s Three Principles for AI in Organizations
IBM has established three principles for the use of AI in organizations:
- The purpose of artificial intelligence is to augment human intelligence, not to replace it.
- Data and the insights derived from it belong to the creator alone.
- AI systems, and indeed the entire AI lifecycle, should be transparent and explainable.

Addressing the Socio-Technological Challenge of AI
Earning trust in AI is as much an organizational challenge as a technical one. Three areas to consider:
- People and Culture: Consider the culture of your organization and the diversity of your teams. Who is curating the data to train the model? How many women and minorities are on that team? Diverse teams reduce the chance of error, which is crucial in AI.
- Process and Governance: What promises will your organization make to both employees and the market regarding standards for fairness, explainability, and accountability in your AI models?
- Tooling: What tools, AI engineering methods, and frameworks can you use to ensure these five pillars are met?
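As one example of the kind of tooling that can check the fairness pillar, here is a minimal sketch of a disparate impact test. This is an illustrative from-scratch calculation, not any particular framework's API; the outcome data is made up, and "privileged"/"unprivileged" group labels are assumptions for the example.

```python
# Minimal sketch of one fairness check: disparate impact on binary outcomes,
# where 1 = a favorable decision (e.g., loan approved).

def selection_rate(outcomes):
    """Fraction of favorable (1) outcomes in a group."""
    return sum(outcomes) / len(outcomes)

def disparate_impact(privileged, unprivileged):
    """Ratio of the unprivileged group's selection rate to the privileged
    group's. A common rule of thumb (the "four-fifths rule") flags ratios
    below 0.8 for review."""
    return selection_rate(unprivileged) / selection_rate(privileged)

# Hypothetical loan-approval outcomes for two groups:
privileged_outcomes = [1, 1, 1, 0, 1, 1, 0, 1]    # 6/8 approved -> 0.75
unprivileged_outcomes = [1, 0, 0, 1, 0, 1, 0, 0]  # 3/8 approved -> 0.375

di = disparate_impact(privileged_outcomes, unprivileged_outcomes)
print(f"Disparate impact: {di:.2f}")  # 0.375 / 0.75 = 0.50 -> flag for review
```

A check like this is only one slice of fairness; open-source toolkits such as IBM's AI Fairness 360 bundle many such metrics along with mitigation methods.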
