How can organizations manage Artificial Intelligence wisely?

Over recent years, Artificial Intelligence (AI) has been rapidly introduced into the workplace, across different sectors and job functions. While companies are using AI to generate value, its implementation has also created challenges, not only in terms of the technology itself, but also in terms of workplace organization and culture and the ethical questions it can raise. Together with her co-authors from the Vrije Universiteit Amsterdam, Professor Lauren WAARDENBURG from IÉSEG has been studying eight organizations’ experiences of implementing and using AI, to shed light on what is really happening in practice. The findings of their work appear in a new book*, “Managing AI Wisely”, published by Edward Elgar Publishing.

In recent years, there has been a multitude of media articles and reports about the potential of AI and the threats or problems it can pose. These have included some high-profile failures, for example when companies have used recruitment algorithms that show bias when selecting candidates.

“AI has become increasingly accessible for organizations everywhere and clearly has the potential to bring huge value to society. It is already used in a wide variety of fields, including planning and logistics, knowledge representation, language processing (translating, interpreting, etc.), image recognition and computer vision, and information retrieval (e.g. search engines), to name but a few. However, we believe there is still great potential to improve the way it is implemented,” explains Lauren WAARDENBURG.

The authors have identified four challenges typically encountered by companies implementing AI: organizing for data, testing and validating (AI), algorithmic brokering, and the changing nature of work. They seek to explain how to overcome these challenges and to answer questions such as: What data do I need? When is a system good enough to take over tasks? And how can my employees be prepared for working with AI?

“We have found that there are generally four broad categories of ‘organizational’ challenges with implementation. However, responding to these challenges should not be seen as a series of linear steps, but rather as a continuous process, as the nature of these algorithms means they are constantly learning from the data they are fed,” explains Lauren Waardenburg.

1. Organizing for data

Data is the central building block of AI (see description below). “Organizations looking at developing AI should start by asking what kind of, and what amounts of, data they are going to need to develop and train the AI system. They should also focus on how this data will be categorized,” she notes. Technical developers will be required to construct the dataset(s), but managers need to be aware of, and reflect on, the types of data that will be needed and the implications this could have for their organization. With the example of a recruitment algorithm: “how can we ensure that there is adequate data from across an organization to avoid any potential biases in the predictions that will be made by the tool?”
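One simple, concrete form of the data reflection described here is checking whether every group is adequately represented in the training data. The sketch below is purely illustrative: the applicant records, the `department` attribute, and the 30% threshold are all hypothetical, not taken from the book.

```python
from collections import Counter

def group_shares(records, key):
    """Return each group's share of the dataset for a given attribute."""
    counts = Counter(r[key] for r in records)
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()}

# Hypothetical training data for a recruitment model.
applicants = [
    {"department": "engineering", "hired": 1},
    {"department": "engineering", "hired": 0},
    {"department": "engineering", "hired": 1},
    {"department": "sales", "hired": 0},
]

shares = group_shares(applicants, "department")
# Flag groups that are badly under-represented (the threshold is arbitrary).
underrepresented = [g for g, s in shares.items() if s < 0.3]
```

A check like this is a managerial question as much as a technical one: deciding which attributes matter and what counts as “adequate” representation cannot be delegated to developers alone.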

2. Organizing for testing and validating

After the first ‘data’ step, many organizations rely on developers for testing and validating a system (and its predictions), as this is generally considered a technical process. “This can be a harmful assumption, as testing is not only a tech issue,” she explains. For example, a system can be beautifully built, predicting outcomes with high rates of success, yet be unusable or difficult to implement within an organization. “Therefore, it is important for managers to be involved throughout the testing phase, as they need to reflect on how the system can be implemented in the workplace and embedded in the work processes that will be required,” she notes. “Another central question is whether it will really bring added value to the end user, and management also needs to reflect on any ethical and legal aspects related to the development of the system.”

3. Organizing for algorithmic brokers

When the AI system has been tested and validated, it doesn’t mean that it can automatically be rolled out within an organization. “The predictions and insights from the learning tool first need to be interpreted and translated,” explains Professor WAARDENBURG. “More and more organizations are choosing to create a role for a so-called algorithmic broker to deal with this vital step in the AI process. He or she is a bridge between the AI system and the user. This person translates the mathematical predictions into the context of the workplace; he or she therefore needs a good technical understanding of the tool/model and its insights, but also a very good understanding of the work of the end users. They have a very important role within AI implementation, so this role needs to be clearly defined.”

The need for these ‘brokering’ experts somewhat counters the idea that AI will replace humans, she adds, “but this doesn’t mean that work won’t change with AI systems”.

4. Organizing for changes to work

The ‘self-learning’ dimension of AI provides new knowledge for an organization. That is why AI, unlike previous technologies, is often focused on knowledge work. “When you implement new knowledge in an organization, where knowledge is often implicit and distributed across different groups of actors, there is a good chance that this will cause (unexpected) changes to work. It is very important that organizations plan and prepare workers to work together effectively with these tools.”

To help organizations manage AI systems successfully in practice, the authors have proposed a summary of four recommendations to manage AI wisely:

  • Work-related insights: AI systems should be based on work-related insights concerning data, testing and validation, algorithmic brokering, and changes to work.
  • Interdisciplinary knowledge: Different disciplines (developers, users, managers) should be brought together and, where necessary, additional training should be provided.
  • Sociotechnical change processes: The introduction of AI should be seen as an organizational change process and conversely, the technology should be tailored to the needs of the work processes.
  • Ethical awareness: Discussions should take place regarding ethical considerations and the ‘explainability’ of AI systems and their underlying assumptions.

“We are convinced that managers, as important decision-makers in the organization, have a central role and responsibility in implementing and managing AI systems. However, we do not imply that the described responsibilities can or should be performed by a single manager. A wise manager creates a wise team to wisely introduce and manage AI. We hope our work will convince managers to look beyond the ‘AI hype’ and keep asking themselves at all times, irrespective of which stage they are in: Are we managing AI wisely?”

What is Artificial Intelligence?

Artificial intelligence has been around since the middle of the last century but has become more important for organizations in recent years due to technological advances and the capacity of organizations to capture and use data. Over time the definition has changed, explains Professor Waardenburg: “We now generally consider AI as a tool that can do things that humans usually do, particularly in terms of cognitive tasks such as problem-solving. AI is now closely linked to machine learning algorithms that can learn and improve through experience and using data”.

There are three main types of machine learning, distinguished by how the system learns: supervised learning, where the system depends heavily on humans, who must feed it labelled data; unsupervised learning, which is less dependent on humans and finds structure in unlabelled data; and reinforcement learning, where the system learns by making a sequence of decisions and receiving feedback on them.
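The contrast between the first two types can be sketched in a few lines of Python. The data points, labels, and functions below are toy examples invented for illustration; reinforcement learning, which requires an interactive environment, is only noted in a comment.

```python
# Toy illustration of two of the learning paradigms described above.
# All data and values here are made up for the example.

def nearest_label(point, labelled):
    """Supervised: predict from human-labelled examples (1-nearest neighbour)."""
    return min(labelled, key=lambda ex: abs(ex[0] - point))[1]

def two_clusters(points):
    """Unsupervised: group unlabelled data around its mean; no labels needed."""
    centre = sum(points) / len(points)
    return [p for p in points if p < centre], [p for p in points if p >= centre]

# Supervised learning: humans provide the labels.
labelled = [(1.0, "reject"), (1.2, "reject"), (8.0, "hire"), (9.0, "hire")]
prediction = nearest_label(8.5, labelled)

# Unsupervised learning: structure is inferred from the data alone.
low, high = two_clusters([1.0, 1.2, 8.0, 9.0])

# Reinforcement learning (not shown) would instead learn from the rewards
# received after a sequence of decisions in an environment.
```

Note how the supervised function cannot work without the human-provided labels, while the unsupervised one never sees a label at all, which is exactly the dependence-on-humans distinction drawn above.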

*Managing AI Wisely: From Development to Organizational Change in Practice (Edward Elgar Publishing): Lauren Waardenburg, Assistant Professor of Information Systems (IÉSEG); Marleen Huysman, Professor of Knowledge and Organization; and Marlous Agterberg, Research and Valorization Manager, KIN Center for Digital Innovation, Vrije Universiteit Amsterdam, the Netherlands.