  1. Make sure the basics are in place
  2. Focus on the outcome, not the solution
  3. Think about scaling and implementing from the start
  4. Be transparent about the what and why of your AI approach
  5. Governance and ethics as an enabler, not an afterthought


1. Make sure the basics are in place

It sounds boring, but in too many trusts across the NHS, staff still have woeful digital experiences: struggling to access Wi-Fi, juggling multiple log-ins and battling with IT and digital services that simply aren't designed with them in mind.

As a board you'll hear about these challenges, see staff feedback scores and comments, or perhaps even observe them when on a ward. If your staff can't easily log in to essential systems and do the jobs they need to do, then desire for and engagement with artificial intelligence (AI) as a new promised solution is going to be low. And if staff experiences are like this, it's more than likely your data isn't particularly reliable, structured or even very accessible - none of which is a strong basis for using AI.

Ask yourself:

  • As a board, how close are you to the experience of your users (patients and staff)? What are their digital experiences like? Can patients and staff do the things they need to do digitally, or are there barriers? Can everyone log in to the systems they need in a timely way?
  • As a board, where are your best foundations for AI? How can you be sure you have the necessary basics in place? Where are your most reliable datasets, and where are your known weaknesses?

Leeds Teaching Hospitals NHS Trust are piloting AI in innovative ways across the trust, from identifying those at risk of developing atrial fibrillation to detecting lung cancer early, but central to their digital strategy is delivering the IT basics well and ensuring the infrastructure is reliable, safe and secure.


2. Focus on the outcome, not the solution

There are many promises and proposed solutions with AI. As a board you will want to assure yourselves that the investments the trust is making are the right ones, tackling the biggest challenges and delivering the biggest benefits. Don't expect to be able to do this if you start with a solution. It will never end well, because it is too easy to lose sight of the actual problem you are trying to solve - which will be as much about culture, process and ways of working as about technology.

Instead, be targeted and practical. What are the biggest challenges you are facing right now, and where might you be able to make an impact with good use of AI, supported by a multidisciplinary team of clinicians, technologists, and operations and change leads?

When you identify that area or priority outcome, ensure you can start small, test and obtain real learning to inform where you go next. This will enable you to assess the impact you are having, rather than continuing regardless. As a board member you will be constantly balancing priorities, so seeing evidence of benefits when they are expected - or, even better, early - will help you and fellow board members take difficult prioritisation decisions. Keep the focus on the outcome regardless of whether you are developing something in house or buying from a supplier - being told a solution can solve problem 'X' is great, but what evidence can be built quickly and iteratively to assure the board that it can work in your trust too?

Ask yourself:

  • As a board, how do you know you are attempting to solve a genuine problem? Can you explain the problem this is going to solve for your patients or staff?

Lancashire and South Cumbria NHS Foundation Trust have taken an improvement-led approach to using AI. They are working with their youth voices panel to co-design solutions and starting small in areas where they'll have the most impact. For example, ambient dictation is being rolled out to support staff in their conversations. It won't immediately replace other methods, but will run alongside existing clinic practice - a small and simple approach to testing the impact.


3. Think about scaling and implementing from the start

This one might feel a bit odd: why plan for scaling and implementation when we've just said that starting small is how you set up for success? Because if there is no viable plan to scale your AI experiment, why start it at all?

NHS trusts are busier than they have ever been and the demands on staff are constantly high, so 'another transformation project', AI or not, shouldn't be taken on lightly. Too many trusts have huge backlogs, with programmes and projects that don't really have the teams they need to succeed. If you are going to implement AI to help with a real trust problem, how are you going to take it past an initial pilot phase? Boards will need to be assured that consideration has been given to what this AI project will replace on the roadmap, that it will have the resources it needs to scale, and where it will sit in the priority of things to do.

Similarly, you will want to know that it can be shut down if needed, because it's not sustainable or desirable to have pockets of AI experimentation across your organisation without an intentional approach, clear guidance and appropriate guardrails. Pockets of AI, or over-optimising in small areas, can have negative wider impacts.


Ask yourself:

  • As a board, how important is this to our trust right now? How can we dedicate the time, money and people it needs to unlock the benefits we expect? Why are we prioritising this over other projects/programmes/initiatives in our backlog?


4. Be transparent about the what and why of your AI approach

Just as we learnt with electronic patient records (EPRs), a lack of transparency around the what and the why of new technology is a recipe for failure. When it comes to AI, the board will need to be confident in communicating what the AI is doing and why. Thinking about this from a user's perspective is a good test of whether you can.

Say, for example, your trust is implementing technology to support medical imaging decision-making: can the board explain to a patient what the AI is doing with their information? Does your board have sufficient understanding of, and assurance about, how patient data is being used across your AI applications?

Can you do the same for why you are using AI? Can you clearly communicate the benefits for the trust and the patient? If not, you might have fallen into the trap of implementing a solution without a problem.


Ask yourself:

  • As a board, can you clearly explain to staff and patients the what and why of your AI approach?

Kent Community Health NHS Foundation Trust are making AI implementation more accessible to their patients through open communication. Their trust magazine features an article (page 13) on their clinical AI pilot, explaining how it frees up clinician time, improves care for children and families, and reassures patients about data security.


5. Governance and ethics as an enabler, not an afterthought

There are a number of shocking stories about built-in bias in AI and algorithms. These aren't reasons not to use AI, but they are reminders of the importance of strong board assurance and of approaching a solution with the right amount of challenge and consideration.

Building on the point above, being able to communicate to patients and staff what AI is doing with their information is just as important as being able to communicate why the AI being used is suitable. Are the data sets that have been used to train and develop models appropriate for the intended use?

An important consideration for boards as part of governance and ethics conversations will be how the population benefits from its use. For example, computer-aided diagnosis systems built on datasets where groups are underrepresented have been found to return less accurate results for those groups. Has your board had this discussion? Will everyone benefit from your AI project? Could it exacerbate existing inequalities, or help to address them? How will you know?

Although AI is not new to many trusts, it is an area that is evolving quickly, and good models of governance will need to keep pace. This will need to happen at board level and throughout the organisation, right down to the coal face of delivery.

Within teams, governance and assurance (e.g. clinical safety, information security, ethics) should be built in, not bolted on at the side in the hope of catching things going wrong. It will need to work side by side, day by day, as you test and learn your way through implementation. As a board, are you getting assurance that this is happening - that the teams working on AI are getting the support and resources they need, and are working in the open to provide the visibility needed as the work evolves, so there are no surprises?

At the most senior level, boards should govern AI by:

  • Focusing teams on outcomes rather than prescribing solutions, and ensuring those outcomes are aligned to the overall trust strategy.
  • Explaining the principles they would want any AI deployment to follow, e.g. ethically sound, clinically safe, designed around users.
  • Prioritising the benefits to be realised, and understanding which groups of the population will be affected and how, so that equity considerations are baked in from the outset.
  • Ensuring they are bringing the right AI expertise into their board conversations.
  • Holding the space for meaningful discussions on risk. Yes, there are risks to AI, but there are also risks in not acting. The trade-offs should be explored.
  • Helping communicate intent and activity to the wider world.
  • Engaging on an ongoing basis, encouraging a test and learn approach with iterative approvals and governance processes.
  • Most importantly - board members should see the technology, visit the users and walk the wards to see things working for themselves, asking questions directly of the team rather than letting information filter up through layers of reports.

There are great resources out there to help, and NHS England Transformation Directorate's Artificial Intelligence page is an excellent place to navigate them. But reading them alone is not enough; figuring out how to apply them in practice is what will best support your teams.


Ask yourself:

  • As a board, how are we ensuring the governance around AI is protecting patients, staff and the trust, enabling our teams to have the biggest impact and giving us the assurance we need?
  • What steps can the board take to put resources in place to mitigate risks, promote equity and ensure that our AI systems do not disproportionately affect marginalised groups?

Somerset NHS Foundation Trust have published their AI policy, which is designed to provide guidelines for fostering innovation through the use of AI, whilst also considering ethical and legal implications.
