Integrating ethics in the development of artificial intelligence (AI): an interview with the ICOR Prize winner and Professor Mercier
Systems based on AI are now commonly used in many areas of our work and lives. The rapid growth in the use, and impact, of such systems nevertheless raises several ethical challenges and questions, for example relating to discrimination, transparency, and data privacy. How can organizations integrate such concerns when they are developing and deploying AI? This is one of the questions that Julia Guillemot (who has recently graduated from IÉSEG) looked to answer in the framework of her Master’s thesis*, which recently won the ICOR Prize. We spoke to Julia and her thesis director, Professor Guillaume Mercier, about the findings of this research.
Can you explain why and how you chose to study this topic? And why it is such an important topic for companies and organizations?
I started to develop an interest in artificial intelligence in 2016, the year of the Brexit referendum. I knew little about the technology then, but I was struck by the way social media algorithms were used to shape opinions, and the outsized impact they had. It made me realize the extent to which such algorithms can shape our lives.
I started researching the “filter bubble” effect before focusing on the ethics of AI and the responsibility of tech companies after watching Mark Zuckerberg’s first testimony to the US Congress in 2018. I had the feeling at this time that despite technology’s ubiquity, neither policy makers nor the main technology companies had taken the appropriate steps to ensure that ethical considerations came first.
Still today, the path to creating these systems ethically is less than clear, and there is no global playbook on AI. But following growing calls from the public and from governments, some companies have started to consider ethical considerations in the development of AI systems.
My thesis aims to contribute to the field of AI ethics by developing an understanding of how these companies and developers integrate ethical questions in the development and the deployment of artificially intelligent systems.
The specific ethical challenge of AI stems from the fact that it combines social and technological interaction: it brings together humans and machines. In other words, humans are shaped by the machines they shape. They develop the machine and feed it, not only with selected data but also with their views, values and biases. They use it, interact with it, and create new relations of knowledge and power. That is why the ethical questions raised by AI (trust, transparency, discrimination, privacy, etc.), though not necessarily new, need to be rethought in the context of AI.
Furthermore, companies, both those developing AI-based solutions and those using them, have an important ethical responsibility, as governments and regulations will always lag behind these rapid technological advances.
Finally, I would add that this topic is also very important for academia. IÉSEG, for example, has recently highlighted that “Integrating AI and humanities” is one of the key orientations of its new 2022-27 strategic plan.
Can you please summarize the main findings of your work?
The literature review in the thesis makes three contributions to research. First, it establishes a framework for the different topics in AI ethics, structuring the emerging field and creating a categorization for the AI ethics questions under consideration. Second, it provides a map and analysis of the main questions covered by the academic field. Third, it offers a comprehensive review of the theoretical aspects of accountability through the lens of transparency.
The core of my thesis focused on three areas.
– I first centered my work on understanding how AI ethics questions actually arise within tech companies. My findings show that seven types of factors are critical to make AI ethics questions emerge internally and externally, providing insights on this first step toward ‘AI governance’.
– The second part of my work focused on the study of the ethical questions considered by tech companies, and how these differ from the questions considered by the academic field. I found that the corporate world and academia have very different conceptual understandings of the question of transparency, which impacts oversight, regulation, and the corporate social responsibility that corporations deem themselves to have.
– Finally, the last part of my thesis focused on how ethical questions are governed and the steps companies can take toward AI ethics governance.
Among the main points of interest of her work, Julia Guillemot has clearly shown the interrelated nature of all the ethical questions surrounding AI. She has also offered a comprehensive view of the triggers that can help raise ethical awareness and direct action within companies and their ecosystem.
Can you explain how your thesis might have applications for managers who are in the process of developing or deploying different types of artificial intelligence?
Perhaps one of the most important things to keep in mind is that artificial intelligence will continue to develop rapidly, and that ensuring that ethical considerations drive the development of AI-powered systems is the only way forward.
I believe we can’t continue to play “ethical catch-up” when problems appear. Changing this paradigm will require systematic consideration of ethical questions at every step of a system’s development.
My work highlights the different triggers that can help kick-start these conversations, and identifies the most effective ones for putting in place solid AI governance. It also offers several recommendations:
– The biggest barrier to ethical AI governance is a lack of awareness and education. Companies and managers need to focus on raising awareness and developing expertise, both in-house and by learning from the growing external ecosystem of expertise, to stay at the forefront of this rapidly evolving field.
– Culture hugely impacts the emergence and implementation of AI governance. Diversity is key for certain triggers to arise, and inclusivity by design is the only way to have diverse sets of values represented in systems.
– The power of structures shouldn’t be underestimated: an efficient way to involve everyone in the operationalization of AI ethics is through existing and new structures.
– Finally, leadership is central, as leaders are in a unique position to work on education, culture and structures.
There will never be a global playbook on AI governance, but as the technology continues to rapidly evolve, putting ethics first is the only way forward.
*“The integration of ethical concerns in the development and deployment of artificially intelligent systems within technological companies. An essay on Artificial Intelligence Ethics.” Julia Guillemot, under the supervision of Professor Guillaume Mercier.