On January 28, 1986, our nation watched in shock as, 73 seconds into its flight, the Space Shuttle Challenger exploded, killing all seven crew members aboard. One of those members was a schoolteacher, Christa McAuliffe, whose involvement drew extensive media coverage of the event.
The launch and the disaster were seen live in many schools across the United States. I was watching at 8:38 am when life changed forever for families, for NASA, and for the nation.
Here is how one national broadcast framed the event: “High Technology turned on us.”
High technology didn’t turn on us. Human judgment turned on us. A failure to look deeper turned on us. The cause of the disaster, O-ring seal failure, had been a concern previously raised and tragically ignored.
The beauty of being human is our ability to create. The technology we create has no end in sight. Nor should it.
The human judgment that accompanies it, that is another story.
Which is where ethics comes into play.
Ethics is a discipline that explores what we consider to be right or wrong, good or bad, helpful or harmful, and the basis upon which such decisions are made.
But ethics is more than a discipline. Ethics is a moral uprising. As humans, we allow diversity and choice in a number of ethical questions. Until we do not. Until someone does something so horrible the dignity of being human is offended, even assaulted, and we say enough.
Few CIOs majored in philosophy or ethics. You probably did not foresee the day when you would need to lead the way on the ethical non-negotiables of technology.
That day is here. The conversations cannot be ignored. Nor can they be delayed. If you do not speak, someone else will. And that person’s agenda may be the very thing you are uniquely positioned to warn against.
The Three Discussions
As CIOs, you must lead your peers, teams and boards in three critical conversations: philosophical, practical and positional.
Every person you meet, every team you lead, every customer you serve, every company you work for experiences a reality daily. They have a Current State, they have a Desired State, and they have the gap (or gulf) between them.
We are always journeying to a desired state, and closing the distance between there and our current state, at least ideally.
Some companies don’t. They lose sight of the desired state. They go through the motions of business. They lose the right measures of success. It is easy to pick on Southwest Airlines again: the company lost sight of being the nation’s number one airline in customer experience and affordable air travel. High technology didn’t fail them; they failed high technology.
Once you lose sight of the desired state, other desires creep in. Therein lies the way of corporate greed, wealth extraction, economic wreckage, dispossession, displacement and disempowerment.
You have a philosophical question to ask: As a company, how do we measure the human value that we provide?
If your desired state is to be a region’s top employer, or to contribute positively to the local economy, or to provide a future environment that is healthier than when you started, you will measure the use of AI differently than others who lose their way and merely look to the bottom line.
Other philosophical questions include:
What is our identity (why did we start, who were we before we listened to other voices or began comparing ourselves to other companies)?
How do we increase our capacity without compromising our identity and values?
What is the noble purpose that no amount of technology will ever distract us from and that every amount of wisely-used technology can help us accomplish?
Philosophical questions are not an empty exercise. They are the first filter of any action taken.
Seven major ethical issues have emerged:
- Lack of transparency
- Violation of privacy
- Bias and discrimination
- Loss of human decision-making
- Misuse of data
- Lack of accountability
- Exposure to liability
These issues lead to five general sets of practical questions:
How do we best educate?
How do we promote transparency so that users are aware of when they are interacting with AI and understand its limitations and capabilities?
What existing governance must be reinforced and what new governance must be in place to ensure privacy, compliance and access?
How will we ensure that our AI systems can withstand adversarial attacks?
What safeguards are in place to protect against bias and discrimination?
Positional conversations bridge the philosophical and the practical. As the name suggests, these conversations help position AI within the company.
What in our company is currently right that we can maximize to ensure the best possible AI adoption? (Do you already have good governance? Do you know how to educate and roll out products? Is your cross-functional communication and collaboration healthy and effective?)
What in our company needs improvement to best move forward with AI? (Is shadow IT a problem? Is there a silo mentality? Is security owned by the company, or is that a technology thing?)
What in our company is missing that prevents good AI implementation? (Governance? Data control? Checks and balances?)
What are we still confused about when it comes to AI?
Dr. Ellen Ochoa joined NASA in 1988 as a research engineer, two years after the Space Shuttle Challenger exploded. She would eventually become the 11th Director of the Johnson Space Center, where she would receive NASA’s highest award.
Her accomplishments did not come easily. One month after she was named director of flight operations, the Space Shuttle Columbia disintegrated during re-entry, also killing all seven aboard. Seventeen years after the Challenger explosion, disaster had struck again, this time on Dr. Ochoa’s watch. She knew the problem ran deeper than engineering, and she vowed to do whatever it took to prevent such a tragedy from happening again. She did, and before her retirement, NASA launched 19 consecutive successful space shuttle flights.
How did Dr. Ochoa transform NASA and make launches safe again?
NASA had long been a performance culture in which excellence of execution was the highest value. In such a culture, people prove their competence and protect their careers. Asking tough questions and raising strong concerns was not commonplace.
Dr. Ochoa transformed NASA into a learning culture. She began to ask questions. She facilitated discussions. She helped NASA return to its true North Star.
AI is happening on your watch. High technology will not turn on us. Neither will you.