
Mobilizing your workforce behind ethical AI transformation

Olavi Valli

Artificial intelligence (AI) needs the right skills, ethics and organizational buy-in to deliver its transformational potential. But laying these foundations will take your business into new and unfamiliar territory. So how can you make sure you have the necessary capabilities in place and build these into an AI-ready operating model? 

The interface between people and technology has always been a critical element of innovation and modernization. Success requires the right talent, skills and willingness to embrace change. AI – including the latest advances in generative AI (GenAI) – adds new and complex dimensions to this people and technology interface.

Understanding and trusting AI 

The first of these dimensions is fear of the unknown. Your employees may be wary of technologies they believe could take away their jobs. How can you reassure them? How can you bring them along on the journey ahead?

Even if your employees don’t believe their jobs are at risk, they may lack confidence in their ability to use the models. The right technical skills in areas such as systems training and prompting are therefore clearly important, not just among technical teams but organization-wide. Just as important is building trust in these technologies and their outputs, and applying them to jobs and ways of working that are likely to be very different as a result of AI. Research in the US indicates that most business functions and more than 40% of all work can be augmented, automated or reinvented with GenAI.

How, then, can you develop the upskilling, confidence and trust to make the most of AI’s potential?

Using AI ethically and responsibly

The other dimension at this new interface of people and technology is ethics. The functionality of the AI needs to be understood – its limitations as well as its possibilities. It also needs to be used responsibly. Yet all too often, we see news stories exposing AI bias, ‘hallucination’ (when an AI model generates an output that’s factually incorrect or not based on reality) and even toxic behaviour.  

Without appropriate training and oversight, AI can also build up biases. Model assumptions and assessments become distorted, with examples including discrimination against particular population groups when reviewing loan applications or pricing insurance cover. Alongside the regulatory risks inherent in such AI failures, lapses in ethics and control can lead to severe reputational damage.

So why do ethics break down? The ‘black box’ inner workings of many AI models can make them difficult, though by no means impossible, to monitor and control. The fault-lines also include a limited understanding of how the different forms of AI function and the risks this opens up. Pressure to make use cases work and deliver returns can further heighten the likelihood that material risks arise.

How, then, can you establish and embed ethics in the use cases, operation and deployment of AI?

The way forward 

We believe that these people considerations should be at the core of an AI-ready target operating model. As you look to deliver the required people capabilities, five priorities stand out. 

Let’s talk

We’re working with businesses across all sectors to help them deliver the transformational potential of AI. Talk to us if you’d like to know more.