Can AI Help Foster a Compassionate Society: Insights from Michael Frank

by John Perkins

I recently had a discussion with my friend – and one of the most brilliant people I know – Michael Frank. Michael earned a Ph.D. in theoretical nuclear physics and has authored nineteen publications in refereed journals on the structure of the atomic nucleus. He completed executive education courses in finance at Wharton and strategy courses at the Brookings Institution and rose to Vice President of a major aerospace company. In his second year of early “retirement” he was voted faculty of the year by MBA students at the Milgard School of Business at the University of Washington.

Author with Michael Frank

JP: Let’s get right down to it, Michael. Artificial Intelligence (AI) has been a prominent feature in the news recently. Some articles reflect fear, some fascination, some revenue generation, but almost all convey uncertainty about what AI might mean for our future. A recurring theme is the pursuit of ethical AI, emphasizing the need to infuse compassion into AI systems. How do you respond when you’re asked about this, Michael?

MF: It’s vital that we, as a species, address this – and keep addressing it – because that very question acknowledges the potential impact of AI on society. As time moves forward and AI continues to grow in importance, compassion and empathy must become essential considerations in its design and behavior.

Interestingly, the quest to include compassion in AI intersects with understanding human thinking patterns. Neural networks, inspired by the human brain’s structure, give AI its capacity to recognize patterns, learn, and adapt. This is similar to the human learning process. Researchers will increasingly need to explore how humans learn and adapt in order to replicate empathy and ethical reasoning within AI, aiming to simulate human-like compassion.

JP: What are some of the parallels between human learning and AI?

MF: Both humans and AI learn from exposure to information. In the case of AI, we call that “training sets” – that is to say, input data paired with the expected output. The AI model learns patterns from the training set to make predictions when presented with new, unseen data. One of the primary challenges is that bias in AI training sets mirrors human bias arising from limited experiences. AI systems learn from data provided to them, and if the training data contains biases or reflects limited perspectives, the AI can replicate those biases in its decisions and actions. 

Humans form biases due to their exposure to specific environments, cultural influences, and individual experiences. Limited exposure to diverse perspectives can lead to biased judgments and actions. Analogously, AI algorithms, when trained on biased data, tend to replicate, and sometimes even exacerbate, societal biases present in the training data.

Recognizing this parallel, efforts to mitigate bias in AI involve strategies akin to broadening human perspectives. Diverse and inclusive datasets are crucial to reducing bias, just as exposure to diverse experiences helps humans mitigate their own biases.
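The train-and-predict cycle Michael describes, and the way a skewed training set skews the result, can be sketched in a few lines of code. The toy “model” below (a simple word/label co-occurrence counter, with entirely made-up review data) is hypothetical and far simpler than any real AI system, but the workflow mirrors the one he outlines: learn patterns from labeled input/output pairs, predict on unseen data, and watch the prediction change when the data becomes more diverse.

```python
# Toy illustration of a "training set": input data paired with expected
# output. The model just counts which label each word co-occurs with,
# then labels new text by the strongest learned association.
from collections import defaultdict

def train(pairs):
    """Count how often each word appears with each label."""
    model = defaultdict(lambda: defaultdict(int))
    for text, label in pairs:
        for word in text.lower().split():
            model[word][label] += 1
    return model

def predict(model, text, default="unknown"):
    """Return the label most associated with the words in `text`."""
    scores = defaultdict(int)
    for word in text.lower().split():
        for label, n in model.get(word, {}).items():
            scores[label] += n
    return max(scores, key=scores.get) if scores else default

# A skewed training set: "staff" only ever appears in negative reviews.
skewed = [
    ("rude staff", "negative"),
    ("unfriendly staff", "negative"),
    ("great food", "positive"),
]
print(predict(train(skewed), "helpful staff"))   # -> "negative" (learned bias)

# Broadening the data -- the analogue of diverse human experience --
# changes the learned association.
diverse = skewed + [
    ("helpful staff", "positive"),
    ("friendly staff great service", "positive"),
]
print(predict(train(diverse), "helpful staff"))  # -> "positive"
```

The point of the sketch is only the parallel: the model is not malicious, it simply generalizes from what it was shown, which is exactly why diverse and inclusive datasets matter.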

JP: Does requiring compassion of AI teach us anything about our own societal norms?

MF: Understanding the correlation between biased AI training sets and human biases underscores the importance of fostering diversity, inclusivity, and comprehensive representation in both AI datasets and human experiences. This raises the question posed at the outset: What can we learn from AI to help foster a more compassionate society? If we can recognize how our learning and our developed biases are similar to those of AI, then maybe we can develop more compassion and empathy ourselves. This is, by and large, the theme of my current book*.

JP: Much of what we’re hearing about AI is US-centric — that is, we’re not hearing a lot about what other countries are doing with AI. What concerns do you have about how other countries might use this technology?

MF: I believe most countries, including the US, are motivated by enhancing economic prosperity and security, and maintaining or gaining advantage in some way. It seems unlikely, with that motivation, that all the world’s players will equally require ethics, compassion, and empathy in their AI systems. Certainly, some progress could be made through regulation, but it seems to me that we’ll have to develop some kind of deterrent to potential aggression.

On the plus side, and maybe this is wishful thinking on my part, if we can collectively learn from our experience in the development of AI — requiring ethics and compassion — then maybe we can leverage that to improve our own societal norms.

JP: I’m intrigued by the fact that while governments – including those of the US and China – seem driven by a “them versus us” attitude, many businesses take a different approach. They see diverse countries as creating opportunities. International corporations build bridges to expand production and marketing opportunities. And increasingly they hire local people to manage and work in those countries. This is especially true in the growing tech fields. Would you care to comment on this? 

MF: I believe the motivation isn’t so different; that is, businesses are also motivated by economic prosperity and competitive advantage. They are, however, less sensitive to where we draw borders. Businesses are accountable to shareholders and are expected to generate a return with less regard to geo-political boundaries. 

JP: To expand on that “less sensitive to where we draw borders” statement, several executives I know claim that international corporations are in the best positions to solve some of the worst problems that face us, including climate change and the threat of another devastating world war, because corporate executives are not as inclined as government officials to view the world in “them versus us” terms. Do you agree?

MF: I believe that is partly true, because businesses do have the flexibility to operate internationally. However, I tend to believe behavior is driven by incentives. Politicians are motivated by the opinions of their constituents (staying in office) and companies are motivated by their shareholders or investors (generating capital). As much as you and I might hope that either has altruistic motives, I’m afraid the reality is that power and profits are the real drivers. For example, if economic incentives are aligned with climate change, then companies would likely promote their role in reducing greenhouse gasses even if it’s not necessarily their primary objective. If it meant bankruptcy for their company, would they still act in the interest of the planet? The bottom line is, if we can align incentives with doing good deeds, then we all win.

JP: That raises the question: How do we align incentives with doing good deeds? We all know that Wall Street is oriented around the goal of maximizing short-term profits. On the other hand, insurance companies increasingly see that climate change creates some of the greatest long-term risks. Some people advocate taxes on things like CO2. Do you see a likelihood that the goals of investors may change to emphasize long-term sustainability?

MF: I won’t pretend to have all the answers, but maybe I can offer a first step. I believe people are basically well intended but may need a “nudge”, and a tax is probably a “non-starter”. I would begin instead with a voluntary, tax-deductible, charitable contribution for companies and individuals, made directly to a program with a diverse board (government, industry, etc.) and a mission statement focused solely on addressing societal issues. The psychological effect here is that taxes are seen as a “takeaway” whereas a charitable contribution is seen as generous giving. These contributions could in fact be made visible – a kind of “bragging right” for the wealthiest among us to support the betterment of society. The incentive is really the improved image and legacy of the contributors.

JP: Michael, you’ve had an amazing career that ranged from being a chef in highly rated restaurants and hotels to a physics researcher at a major university to an executive at a Fortune 100 company. What are the three most important take-aways from these experiences you can pass on to young people who are faced with deciding what to do with their lives? 

MF: First, when considering career choices, many of us need to choose between something we’re passionate about and something that provides a comfortable income. If you’re lucky, those coincide. If not, look for a compromise you can live with. Second, from my own experience, I would advise you to be prepared to reinvent yourself — potentially many times. The ground beneath your feet is moving. If you get too comfortable you may soon find yourself obsolete. By the same token, be willing to recognize a dead end and change paths. Finally, be true to your values. Look at yourself as a brand and think about your legacy. How do you want to be remembered?

JP: Now that you’ve been retired for seven years, how about three take-aways for retirees or people approaching that status?

MF: Think first about what you want to do with your time in retirement. I’ve often seen folks retire only to return to work a short time later — not because they ran out of money, but because they didn’t know what to do with their time or didn’t feel useful. Then put together a financial plan that allows you to achieve the things you want. And finally, prioritize your activities to those early retirement years when, hopefully, your health is still good. And one additional take-away (if I’m allowed to give four): if you’ve been fortunate in your life, look for ways to give back — again, your legacy.

JP: Do you have hope for the future? And why do you feel that way?

MF: I do. I’ve met a lot of people in my life. And nearly all of them have been well intended. Our differences are largely due to our limited exposure and experiences — what I refer to in my book as our collection of models and metaphors. If we can recognize that we all have biases based on those experiences – like a training set for an AI system – maybe we can begin to have more empathy and compassion for each other.

JP: Anything you’d like to add as a final comment?

MF: Thank you for the opportunity to speak with you and your audience. We all have opportunities in our lives to make a difference. Big or small, it all matters.