Olavi Valli
Artificial intelligence (AI) needs the right skills, ethics and organizational buy-in to deliver its transformational potential. But laying these foundations will take your business into new and unfamiliar territory. So how can you make sure you have the necessary capabilities in place and build these into an AI-ready operating model?
The interface between people and technology has always been a critical element of innovation and modernization. Success requires the right talent, skills and willingness to embrace change. AI – including the latest advances in generative AI (GenAI) – adds new and complex dimensions to this people and technology interface.
The first of these dimensions is fear of the unknown. Your employees may be wary of technologies they believe could take away their jobs. How can you reassure them? How can you bring them along on the journey ahead?
Even if your employees don’t believe their jobs are at risk, they may lack confidence in their ability to use the models. The right technical skills in areas such as systems training and prompting are therefore clearly important, not just among technical teams but organization-wide. Just as important is building trust in these technologies and their outputs, and applying them to jobs and ways of working that are likely to look very different as a result of AI. Research in the US indicates that most business functions and more than 40% of all work can be augmented, automated or reinvented with GenAI.
How, then, can you develop the upskilling, confidence and trust needed to make the most of AI’s potential?
The other dimension at this new interface of people and technology is ethics. The functionality of the AI needs to be understood – its limitations as well as its possibilities. It also needs to be used responsibly. Yet all too often, we see news stories exposing AI bias, ‘hallucination’ (when an AI model generates an output that’s factually incorrect or not based on reality) and even toxic behaviour.
Without appropriate training and oversight, AI can also build up biases. Model assumptions and assessments become distorted; examples include discrimination against particular population groups when reviewing loan applications or pricing insurance cover. Alongside the regulatory risks inherent in such AI failures, lapses in ethics and control can lead to severe reputational damage.
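To make this concrete, here’s a minimal sketch, in Python, of the kind of demographic-parity check a review team might run on a model’s loan decisions before deployment. The data, group labels and figures are entirely hypothetical; real fairness reviews use richer data and multiple metrics.

```python
# Minimal sketch of a demographic-parity check on loan decisions.
# All data, group labels and figures below are hypothetical.

from collections import defaultdict

def approval_rates_by_group(decisions):
    """decisions: iterable of (group_label, approved: bool) pairs."""
    totals = defaultdict(int)
    approvals = defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        if approved:
            approvals[group] += 1
    return {g: approvals[g] / totals[g] for g in totals}

def parity_gap(rates):
    """Spread between the highest and lowest group approval rates."""
    return max(rates.values()) - min(rates.values())

# Hypothetical model outputs for two population groups.
decisions = (
    [("group_a", True)] * 80 + [("group_a", False)] * 20
    + [("group_b", True)] * 55 + [("group_b", False)] * 45
)

rates = approval_rates_by_group(decisions)
print(rates)                                   # {'group_a': 0.8, 'group_b': 0.55}
print(f"parity gap: {parity_gap(rates):.2f}")  # parity gap: 0.25

# A gap this large would warrant investigation before deployment;
# where to set the acceptable threshold is a policy choice, not a
# technical one.
```

Even a check this simple makes distorted assessments visible before they reach customers, which is why ongoing measurement belongs alongside training and oversight.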
So why can ethics break down? The ‘black box’ inner workings of many AI models can make them difficult, though by no means impossible, to monitor and control. The fault lines also include a limited understanding of how different forms of AI function, and of the risks this opens up. Pressure to make use cases work and deliver returns can heighten the likelihood that material risks arise.
How, then, can you establish and embed ethics in the use cases, operations and deployment of AI?
We believe that these people considerations should be at the core of an AI-ready target operating model. As you look to deliver the required people capabilities, five priorities stand out.
AI is more likely to change work than take it away, at least for most industries today. The potential benefits include the chance to pass a lot of the drudgery and number crunching to the machines so your employees can concentrate on deploying their skills and creating real value. In turn, AI can augment capabilities and allow employees and employers to reimagine their roles.
It’s important to articulate and communicate these positives in a realistic and measured way as you look to reassure employees and encourage them to embrace change. Acknowledging that AI carries risks will build trust and make your messages more effective, especially when they focus on your organization’s intentions behind its use of AI and on the benefits for individual employees.
Defining and sharing your organization’s AI strategy and gauging employee sentiment are solid ways to engage the workforce, so that your program becomes a strategic and cultural shift as well as a technological one.
Ensure your employees have a say in the design and implementation of your target operating model from the outset. The more employees contribute, the more they will feel like they ‘own’ the technology and can benefit from its implementation. Key priorities include involving your employees in the AI-enhanced/reimagined design of roles where possible and appropriate, as well as the development of customized training.
The big risk is saying nothing, or leaving communication until the technology is about to be rolled out. Fear of the unknown will grow in this vacuum and erode trust. A lack of buy-in from talented employees risks attrition, threatening the successful implementation of an AI strategy and operating model.
Building your AI capability will require new skills and experience across the organization, at all levels. This includes senior leaders, who play a critical role in embedding AI, as well as upskilling for conventional technical roles such as product owners. It may also involve developing new teams in data science, machine learning, deep learning, prompt engineering and ethics.
But upskilling and reskilling should be organization-wide rather than confined to technical specialists. Moreover, they shouldn’t just focus on the technicalities of AI, but also how to apply the outputs in an employee’s day-to-day work. What new possibilities are opened up? How can we use the models with confidence, knowing the limitations and areas to sense-check and validate most closely?
This step-up in capabilities should be continuous. As you look to move from foundational to transformational deployment of AI, it’s important to foster a culture of constant iteration and curiosity. This allows you to keep evolving the AI applications and use cases as you look to stay relevant and ahead of the game.
Trust is the new currency of AI. Customers need to be sure that you’re protecting their data and using it for their benefit. Your Board and employees need to be sure that the risks are being managed appropriately, to say nothing of the need to satisfy regulators, who will apply even more stringent tests. This is why ethical and responsible use of AI is so critical and should be one of the foundational capabilities in your target operating model.
The starting point is a set of clear, agreed principles, supported by effective governance, oversight and accountability. But principles and frameworks aren’t enough on their own. The real test of AI maturity is the extent to which ethics are embedded in the mindset and day-to-day decision-making of your organization, and aligned with broader ESG decision-making. Key questions within your organization include: Do our use cases align with our values and purpose? Are our internal and external stakeholders best served, ethically, by the use of AI in our products and services? Do we have the right level of transparency both to comply and to lead our peers on AI performance and ethics?
Building confidence in the AI tools and their outputs is clearly crucial, but so is trusting your people to apply critical thinking and remain inquisitive. It’s therefore important to build mechanisms such as training, piloting and testing that can earn stakeholder trust. Controls such as sense-checking outputs and correcting any misinterpretations or mistakes by the AI tools are also key.
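As an illustration of what such a control can look like in practice, here’s a minimal sketch of a sense-check gate that routes a model’s output to human review when simple automated validations fail. The function names, rules and routing are illustrative assumptions, not any specific vendor’s API.

```python
# Minimal sketch of a sense-check gate for AI-generated text.
# The validation rules and routing here are illustrative assumptions.

import re

def numbers_in(text):
    """Extract numeric tokens so quoted figures can be cross-checked."""
    return set(re.findall(r"\d+(?:\.\d+)?", text))

def sense_check(ai_output, source_text):
    """Return a list of issues; an empty list means the checks passed."""
    issues = []
    if not ai_output.strip():
        issues.append("empty response")
    # Every figure the model quotes should appear in the source material.
    unsupported = numbers_in(ai_output) - numbers_in(source_text)
    if unsupported:
        issues.append(f"figures not found in source: {sorted(unsupported)}")
    return issues

def review_gate(ai_output, source_text):
    """Pass clean outputs through; route anything doubtful to a person."""
    issues = sense_check(ai_output, source_text)
    if issues:
        # In a real operating model this would open a review task rather
        # than print; the point is that failures never ship silently.
        print("Routed to human review:", issues)
        return None
    return ai_output

# Hypothetical usage: the draft mis-states a figure from the source.
source = "Revenue grew 12% in 2023, and 40% of tasks can be augmented."
draft = "Revenue grew 15% in 2023."
review_gate(draft, source)  # flags the unsupported figure '15'
```

The specific rules will differ by use case; what matters is that the control is systematic, so trust in outputs is earned through verification rather than assumed.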
We’re working with businesses across all sectors to help them deliver the transformational potential of AI. Talk to us if you’d like to know more.