30% of new global data is coming from healthcare systems – new tools are needed to tackle the deluge.
In her address at the GIANT Health Event in May 2021, Bayer’s Sophie Marie Park estimated that the volume of data within the NHS is doubling every 90 days and that 30% of all the data generated on the planet is coming from healthcare systems.
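To put that doubling figure in perspective, here's a quick back-of-the-envelope calculation of our own – a simplification that assumes the rate holds steady:

```python
# Back-of-the-envelope: if a data estate doubles every 90 days,
# how much does it grow over a year? (Our own arithmetic, not Bayer's.)
DOUBLING_PERIOD_DAYS = 90

annual_growth = 2 ** (365 / DOUBLING_PERIOD_DAYS)
print(f"Growth over one year: ~{annual_growth:.1f}x")  # ~16.6x
```

Roughly a sixteen-fold increase in a single year.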
Whether this is because of greater use of technology due to the COVID-19 pandemic, trends that were already taking place, or a bit of both, big data has truly arrived in healthcare.
But what will happen to all this data? In many situations, clinical teams are already overwhelmed as it is. They are in no position to analyse the data themselves and often can't decide what to do with it. This raises an ethical question: a great deal of data is being collected, and some of it could contain information that would help save lives – yet the capacity to make meaningful use of much of it simply isn't there.
Already we’ve thrown the ‘big data’ buzzword into the conversation, so apologies as we hit you with another – artificial intelligence (AI). As in so many other technology sectors, AI and machine learning are considered to be the future. Undoubtedly, AI systems will provide a whole range of solutions when it comes to coping with the data deluge healthcare systems are currently experiencing.
AI tackles the health data deluge
AI will help health services sift through data collected from large patient groups and identify subgroups of patients within the population who are at risk of certain conditions due to demographic or environmental factors. Health services can then shift planning and funding to focus provision on the parts of the population that need it.
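What might that sifting look like in practice? Here's a purely illustrative sketch – not a description of any real NHS system – of how a simple clustering approach could group a patient population by demographic and environmental features. The features and numbers are made up:

```python
# Illustrative only: partitioning a patient population into subgroups
# by demographic/environmental features. Data and features are synthetic.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)

# Hypothetical features: age, deprivation index, air-quality score
patients = rng.normal(size=(1000, 3)) * [15, 2, 1] + [50, 5, 3]

# Scale features so no single one dominates the distance metric
scaled = StandardScaler().fit_transform(patients)

# Partition the population into candidate subgroups
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(scaled)

# A planner could then examine condition rates within each subgroup
for cluster in range(4):
    print(f"Subgroup {cluster}: {np.sum(labels == cluster)} patients")
```

In reality the features, the models and the validation would all be far more involved, but the principle is the same: let the data surface the subgroups, then direct provision accordingly.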
Healthcare providers dealing with smaller, more specific patient groups will also use AI in innovative ways. An example that is close to home for Loft is Eye2Gene, a project that we are watching closely. Eye2Gene’s AI is learning to identify genetic conditions that cause various forms of inherited retinal dystrophy (IRD) by analysing retinal scans from affected patients. The same AI can then analyse scans sent in from remote parts of the world that don’t have specialists in genetic eye conditions and suggest diagnoses. Clinicians caring for the patients can then implement the best treatments.
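We can't speak to Eye2Gene's actual models, but the general shape of the task – a scan goes in, a ranked shortlist of candidate genes comes out – is a standard image-classification pattern. Here's a minimal sketch in PyTorch; the backbone, weights and gene list are all placeholders of our own:

```python
# Generic image-classification sketch. Eye2Gene's real pipeline is not
# public to us; the model, weights and gene list here are placeholders.
import torch
from PIL import Image
from torchvision import models, transforms

CANDIDATE_GENES = ["ABCA4", "USH2A", "RPGR", "CHM", "RS1"]  # illustrative

# Standard CNN backbone with a classification head sized to the gene list
model = models.resnet18(weights=None)
model.fc = torch.nn.Linear(model.fc.in_features, len(CANDIDATE_GENES))
model.eval()  # in practice, trained weights would be loaded first

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

def suggest_genes(scan_path: str, top_k: int = 3):
    """Return the top-k candidate genes for a retinal scan, with scores."""
    image = preprocess(Image.open(scan_path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        probs = torch.softmax(model(image), dim=1).squeeze(0)
    scores, indices = probs.topk(top_k)
    return [(CANDIDATE_GENES[int(i)], float(s))
            for i, s in zip(indices, scores)]
```

The crucial point is the last step the code can't show: the AI suggests, and a clinician makes the final call.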
In digital health, we're only beginning to imagine the ways AI will benefit health services and patients. Technology firms in the space are scrambling to include AI as part of their offer, and here at Loft we are no different. However, we do think that great care must be taken as AI is adopted; in some cases, what appear to be wonderful solutions may have unexpected consequences.
A strategy for AI in digital health
Loft recently joined the FutureNHS Collaboration Platform – a forum where specialist parties within the health service and digital health sector are discussing various technology strategies. One of the strategies the NHS is consulting the forum on is its National Strategy for AI in Health and Adult Social Care. The NHS wants to develop an approach that is effective and ethical.
Like many other developers, at Loft we take a patient-centred approach to the design of digital health applications. We think that taking a user-centric approach to the national strategy on AI will inform the discussion a great deal, both in terms of efficacy and ethics.
For a moment, let's consider what has happened in other areas where AI and algorithms have been applied. Outcomes can be… unexpected. A rather extreme example is Tay.ai, the Microsoft chatbot that quickly learned to spout racist rhetoric soon after being released onto Twitter. We also know that facial recognition AI has useful applications in security, but is increasingly seen as a threat to our basic rights. Many people – patients included – are concerned about how AI and algorithms are applied to social media users in order to market both products and political ideas.
While we realise that AI developed to analyse patient health records will have very different objectives to the technologies mentioned above, we shouldn't forget that the patient is at the centre of our design and development approach. A key principle is that users should always know what is going on around them. When data is being collected and when it is being analysed by an AI system, will patients be made aware that they are being assessed? Will they understand how and why this is occurring?
There are knotty issues around this. Our experience developing the MyEyeSite application for patients with rare eye diseases, alongside UCL and Moorfields Eye Hospital, is that many patients are sceptical, or may even have a negative reaction, when they’re not sure what’s going on. They feel exposed, vulnerable and that their privacy has been invaded. Conversely, they feel like they are helping if they understand the rationale behind the use of their data.
The importance of transparent AI
Part of being patient-centred is that users always know what's going on when they're interacting with a User Interface (UI). At Loft we think a responsible and transparent approach needs to be taken with AI. Patients need to be aware that their records may be analysed by AI or machine learning systems, why this happens, and what the outcomes could be.
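One concrete way to build this in – a sketch of our own thinking, not any existing NHS mechanism – is to gate every AI analysis behind an explicit consent check and write a patient-visible audit entry each time it runs. The `Patient` and log shapes below are hypothetical:

```python
# Sketch: make AI analysis of a record both consented and visible.
# The Patient shape and audit format are hypothetical, not a real NHS API.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Patient:
    patient_id: str
    ai_analysis_consent: bool
    audit_log: list = field(default_factory=list)  # visible to the patient

def run_ai_analysis(patient: Patient, purpose: str, analyse):
    if not patient.ai_analysis_consent:
        raise PermissionError("Patient has not consented to AI analysis")
    # Record what happened and why, in terms the patient can read
    patient.audit_log.append({
        "when": datetime.now(timezone.utc).isoformat(),
        "what": "AI analysis of health record",
        "why": purpose,
    })
    return analyse(patient)

# Usage: the 'why' travels with every analysis
p = Patient("NHS-0000000", ai_analysis_consent=True)
result = run_ai_analysis(p, "screening for diabetic retinopathy risk",
                         analyse=lambda pt: "low risk (illustrative)")
```

The design choice that matters here is that the "why" is stored in plain language, so the audit trail means something to the patient, not just to the system.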
Anyone who’s watched Line of Duty will know that police officers have a fundamental duty to protect life. In the medical profession, physicians take the Hippocratic Oath, which outlines the ethical standards they will uphold. At Loft, we’re keen to explore a framework of ethical standards that could apply in digital health – and perhaps beyond.
In so many areas, healthcare technology developers are working with vulnerable groups of patients. Extra care needs to be taken that, in our drive to make progress, certain patients aren't marginalised – either through lack of access to the technology, or by the application of the technology itself. Fairness and equality should be designed into every platform.
Maybe design thinking in digital health needs to go beyond identifying and solving the problems faced by the health service, clinicians and patients. Maybe it should also be about taking the time to consider whether the solution will be doing the right thing – for whole patient groups and for individuals.
It is fundamental for us that digital health technology should be used to help people, and we continue to believe that the great strength of the internet is that it puts everyone on a level playing field. Those values need to be upheld. Care must be taken as AI systems are designed to ensure that they empower patients rather than accidentally disempower them, and that algorithmic biases are identified and mitigated.
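Detecting that kind of bias can start with something quite simple: compare how a model performs across demographic groups. A toy sketch, with synthetic data and a deliberately unfair simulated model:

```python
# Toy fairness check: does the model's error rate differ by group?
# Data is synthetic; a real audit would use proper fairness tooling.
import numpy as np

rng = np.random.default_rng(0)
groups = rng.choice(["A", "B"], size=2000)   # demographic label
y_true = rng.integers(0, 2, size=2000)       # actual outcome
y_pred = y_true.copy()

# Simulate a model that is wrong far more often for group B
flip = (groups == "B") & (rng.random(2000) < 0.20)
flip |= (groups == "A") & (rng.random(2000) < 0.05)
y_pred[flip] = 1 - y_pred[flip]

for g in ("A", "B"):
    mask = groups == g
    acc = np.mean(y_pred[mask] == y_true[mask])
    print(f"Group {g}: accuracy {acc:.2%}")
# A large gap between groups is a signal to investigate and mitigate
```

Checks like this don't fix bias on their own, but making them a routine part of development is what turns "fairness by design" from a slogan into a practice.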
These are the views we'll be putting forward as we contribute to the discussions taking place on the FutureNHS Collaboration Platform. We'd love to know what you think.