Buddhism teaches us to focus our energy on eliminating suffering in the world.
The explosive growth of artificial intelligence has fostered hope that it will help us solve many of the world’s most intractable problems. However, there’s also much concern about the power of AI, and growing agreement that its use should be guided to avoid infringing upon our rights.
Many groups have discussed and proposed ethical guidelines for how AI should be developed or deployed: IEEE, a global professional organization for engineers, has issued a 280-page document on the subject (to which I contributed), and the European Union has published its own framework. The AI Ethics Guidelines Global Inventory has compiled more than 160 such guidelines from around the world.
Unfortunately, most of these guidelines are developed by groups or organizations concentrated in North America and Europe: a survey published by social scientist Anna Jobin and her colleagues found 21 in the US, 19 in the EU, 13 in the UK, four in Japan, and one each from the United Arab Emirates, India, Singapore, and South Korea.
Guidelines reflect the values of the people who issue them. That most AI ethics guidelines are being written in Western countries means that the field is dominated by Western values such as respect for autonomy and the rights of individuals, especially since the few guidelines issued in other countries mostly reflect those in the West.
Guidelines written in different countries may be similar because some values are indeed universal. However, for these guidelines to truly reflect the perspectives of people in non-Western countries, they would need to represent the traditional value systems found in each culture as well.
People in both the East and the West need to share their ideas, and to consider ideas from others, in order to enrich their own perspectives. Because the development and use of AI spans the entire globe, the way we think about it should be informed by all the major intellectual traditions.
With that in mind, I believe that insights derived from Buddhist teaching could benefit anyone working on AI ethics anywhere in the world, and not only in traditionally Buddhist cultures (which are found mostly in East and Southeast Asia).
Buddhism proposes a way of thinking about ethics based on the assumption that all sentient beings want to avoid pain. Thus, the Buddha teaches that an action is good if it leads to freedom from suffering.
The implication of this teaching for artificial intelligence is that any ethical use of AI must strive to decrease pain and suffering. Facial recognition technology, for example, should be used only if it can be shown to reduce suffering or promote well-being. Moreover, the goal should be to reduce suffering for everyone, not just those who directly interact with AI.
We can of course interpret this goal broadly to include fixing a system or process that’s unsatisfactory, or changing any situation for the better. Using technology to discriminate against people, or to surveil and repress them, would clearly be unethical. When there are gray areas or the nature of the impact is unclear, the burden of proof would be with those seeking to show that a particular application of AI does not cause harm.
Do no harm
A Buddhist-inspired AI ethics would also understand that living by these principles requires self-cultivation. This means that those who are involved with AI should continuously train themselves to get closer to the goal of totally eliminating suffering. Attaining the goal is not so important; what is important is that they undertake the practice to attain it. It’s the practice that counts.
Designers and programmers should practice by recognizing this goal and laying out the specific steps their products would need to take to embody that ideal. That is, the AI they produce must be aimed at helping the public eliminate suffering and promote well-being.
For any of this to be possible, companies and government agencies that develop or use AI must be accountable to the public. Accountability is also a Buddhist teaching, and in the context of AI ethics it requires effective legal and political mechanisms as well as judicial independence. These components are essential in order for any AI ethics guideline to work as intended.
Another key concept in Buddhism is compassion, or the desire and commitment to eliminate suffering in others. Compassion, too, requires self-cultivation, and it means that harmful acts such as wielding one’s power to repress others have no place in Buddhist ethics. One does not have to be a monk to practice Buddhist ethics, but one must practice self-cultivation and compassion in daily life.
We can see that values promoted by Buddhism—including accountability, justice, and compassion—are mostly the same as those found in other ethical traditions. This is to be expected; we are all human beings, after all. The difference is that Buddhism argues for these values in a different way and perhaps places greater emphasis on self-cultivation.
Buddhism has much to offer anyone thinking about the ethical use of technology, including those interested in AI. I believe the same is also true of many other non-Western value systems. AI ethics guidelines should draw on the rich diversity of thought from the world’s many cultures to reflect a wider variety of traditions and ideas about how to approach ethical problems. The technology’s future will only be brighter for it.
Soraj Hongladarom is a professor of philosophy at the Center for Science, Technology, and Society at Chulalongkorn University in Bangkok, Thailand.