
Healthcare and digital transformation: data should make healthcare more people-centred

28 March 2026 | Others

This short article is a reflection of my thoughts running free. It is not a prediction, nor does it represent my employer’s policy or strategy.

Healthcare is people work. Caregivers and nurses do this work because they want to work with people. Despite attempts to reduce the administrative burden in the care process, we are failing to reduce it structurally, whether through ‘dumb’ or ‘smart’ automation. 

Image: a nurse talking with a patient; the patient wears a monitor on his arm representing the AI agent.

From practical experience, I know that the quality of the data leaves much to be desired. The reason is that too much is recorded, often without a clear purpose, without proper guidance, or without enough structure. And once recorded, the data is barely put to use.

The promise of AI in healthcare is a much-discussed topic. Staff shortages and high workloads for healthcare providers, alongside stress among care recipients, are a reality, and the balance is only becoming more skewed with demographic changes. The ratio of workers to pensioners is decreasing, and even if the demand for care remains the same or decreases, the problem will only grow.

Digital transformation must therefore take over administrative tasks and provide support for routine checks and monitoring, so that healthcare providers can devote more time, or indeed all their time, to what care actually requires: a listening ear and social or professional support during a care or recovery process. Human attention promotes healing or well-being; we are social beings.

How should AI play a role in this transformation? Below are a few thoughts, regardless of whether AI is already capable of this and exactly how it would work technically. It is a thought experiment.

A personal AI agent

This is the key idea on which the other points are based. Instead of scattering our health data across every healthcare organisation we come into contact with, data that is difficult to exchange or access, we should have a personal AI agent to manage our data.

For those not familiar with this subject on a daily basis: an AI agent is a piece of software, or a set of collaborating software, that operates autonomously and fulfils a specified objective, whilst being able to adapt to a changing context. A bit like how people work too, hence the term artificial intelligence.

Our health data will still be recorded (generated) and stored within healthcare organisations. In principle, we ‘store’ our data there, but the personal AI agent ensures that this data remains consistent, even when new data is added elsewhere. The AI agent has the complete overview and is the point of contact for any queries. This applies to healthcare providers too: instead of consulting a local copy, which today is never complete or up to date, they ask the agent.

This means that our personal agent records and collects our data, monitors our care process, alerts us if we need to call in a care provider or alerts a care provider, and summarises our current condition for ourselves, family, carers, nurses or doctors as input for a conversation. 

The AI agent works alongside the patient or client, without us having to think about it all day long. It is a background process that we only become aware of when we receive an alert or have an appointment with a healthcare professional.
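To make the idea above concrete, here is a minimal sketch of such a background agent: it collects measurements, checks them against a threshold, and produces a summary as input for a consultation. All names and the heart-rate threshold are hypothetical illustrations, not an actual design; a real agent would derive its limits from a personal care plan.

```python
from dataclasses import dataclass

# Hypothetical threshold for illustration only; real limits would
# come from a personal care plan set with a healthcare provider.
RESTING_HEART_RATE_ALERT = 110  # beats per minute

@dataclass
class Measurement:
    kind: str      # e.g. "heart_rate"
    value: float

class PersonalHealthAgent:
    """Minimal sketch of a background agent: collect, monitor, alert."""

    def __init__(self):
        self.history = []  # raw measurements stay with the agent
        self.alerts = []   # notifications for the person or a carer

    def ingest(self, m: Measurement) -> None:
        self.history.append(m)
        self._check(m)

    def _check(self, m: Measurement) -> None:
        # Runs silently in the background; only surfaces when needed.
        if m.kind == "heart_rate" and m.value > RESTING_HEART_RATE_ALERT:
            self.alerts.append(f"Elevated resting heart rate: {m.value:.0f} bpm")

    def summary(self) -> str:
        """Condensed status, e.g. as input for a conversation with a doctor."""
        return f"{len(self.history)} measurements recorded, {len(self.alerts)} alert(s)"

agent = PersonalHealthAgent()
agent.ingest(Measurement("heart_rate", 72))
agent.ingest(Measurement("heart_rate", 124))
print(agent.summary())  # 2 measurements recorded, 1 alert(s)
```

The point of the sketch is the shape, not the rules: the person only notices the agent when an alert fires or when the summary is needed.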

Support for triage and decision-making

Making decisions independently or jointly within a care pathway is important for every person’s self-determination. AI should help us organise the options available to us, based on our own data. The primary focus remains on the interaction between patient and doctor, or between client, family and carers.

Predicting outcomes and exploring options based on pre-defined personal wishes, again as input for a discussion or for transfer to another treatment location, would put many people in a much stronger position when faced with difficult decisions whose consequences they cannot fully foresee.

Accelerating research and the implementation of results

Developments in medical technology are advancing rapidly, but the time between the start of a study and the implementation of the results is long. This is caused by due diligence obligations and the demands of scientific validation. Time is a key ingredient for better-quality results; taking the time to reflect is and remains important.

However, there is plenty of administrative red tape, and this creates bottlenecks in current processes that could be eliminated through smarter automation and AI support. The biggest delaying factor is locating and collating the necessary data.

This is where personal AI agents come into play. For research purposes, it will also be possible to instruct these AI agents to collect new data, whilst maintaining privacy. These technologies already exist; what is lacking is the appropriate legal framework and the public debate on the matter.

Recording only the necessary data is the crux of the matter

Rethinking the necessity of recording and collecting healthcare data will also lead to a reduction in the administrative burden involved in collecting such data and to a reduction in the recording of unnecessary data. We currently record a great deal of useless data.

AI, too, struggles to find and use relevant data in an ocean of irrelevant data. In the worst-case scenario, this can even lead to the algorithm malfunctioning.

A 180-degree shift in perspective

We must make much greater use of digital tools to generate our health data and become far less reliant on what a doctor, carer or nurse records in archaic systems developed in the last century. We must reverse the IT and organisational paradigm with regard to data management.

The doctor, nurse or healthcare provider no longer records the data on our behalf, but only contributes new data when necessary. Data is generated and organised as much as possible by personal AI agents using measurements and equipment, based on standards prescribed by legal frameworks.

The legal frameworks are operationalised in open, internationally recognised systems for medical data definitions, so that interoperability and reusability are guaranteed upon recording and dependence on software suppliers is limited. Keeping the data up to date is therefore an automated process, involving as few manual actions as possible.
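As an illustration of what such open, internationally recognised data definitions can look like: HL7 FHIR and LOINC are existing open standards for exchanging and coding medical data. The fragment below is a hand-written sketch in the style of a FHIR Observation resource, using the LOINC code for heart rate; it is an illustration of standards-based recording, not a validated resource.

```python
import json

# Illustrative fragment in the style of an HL7 FHIR Observation
# resource, using the LOINC code 8867-4 ("Heart rate").
# A sketch of standards-based recording, not a validated resource.
observation = {
    "resourceType": "Observation",
    "status": "final",
    "code": {
        "coding": [{
            "system": "http://loinc.org",
            "code": "8867-4",
            "display": "Heart rate",
        }]
    },
    "valueQuantity": {"value": 72, "unit": "beats/minute"},
}

print(json.dumps(observation, indent=2))
```

Because both the structure (FHIR) and the vocabulary (LOINC) are open standards, any compliant system, or AI agent, can interpret the record without depending on one software supplier.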

Healthcare providers are users of the AI agent when access to the data is necessary. This mirrors how access is organised today: a healthcare provider has access to our data for the duration of a treatment relationship.

Researchers can query the collective of AI agents without gaining access to the data itself. The AI agents report on the quality of the data on which the results, delivered to researchers, are based. Existing laws and regulations, as well as the scientific process itself, already govern the initiation and conduct of scientific research strictly. The only thing that changes for the better is the accessibility and availability of validated data.
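A sketch of how such a query could work, under the assumption that each agent answers only with an aggregate, so raw records never leave the agent. The class and function names are hypothetical illustrations; a production system would add further safeguards such as minimum cohort sizes or differential privacy.

```python
# Sketch of a federated query: each personal agent answers with an
# aggregate only; the researcher never sees individual records.

class Agent:
    def __init__(self, heart_rates):
        self._heart_rates = heart_rates  # raw data stays local

    def answer(self, predicate):
        """Return only a (count, sum) aggregate over matching values."""
        matching = [v for v in self._heart_rates if predicate(v)]
        return len(matching), sum(matching)

def federated_mean(agents, predicate):
    """Combine per-agent aggregates into one population statistic."""
    total_n, total_sum = 0, 0
    for a in agents:
        n, s = a.answer(predicate)
        total_n += n
        total_sum += s
    return total_sum / total_n if total_n else None

agents = [Agent([70, 82]), Agent([95]), Agent([60, 61, 64])]
mean = federated_mean(agents, lambda v: v >= 60)
print(mean)  # 72.0
```

The researcher receives a single number; which person contributed which value is never disclosed, which is precisely the property the article argues for.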

Our own AI agent collects our data, organises it and curates it. Unnecessary data is simply removed. The AI agent does not need to physically gather our data, but has access to it, regardless of where it is stored. This requires robust laws and regulations, as well as AI-driven monitoring by regulators.

Dynamic data management

AI will not solve the current data problem in healthcare on its own, but it is a powerful tool that we can deploy in the data collection and management process: it can ask us, the patient, client or healthcare provider, the right questions at the right time.

These questions are generated from the combination of data we already produce ourselves: from watches or measuring devices provided by a healthcare institution, through self-reporting, laboratory tests commissioned by a doctor and summaries of conversations with care providers, with this data coded using the correct medical ontology and validated by a doctor or nurse.

Continuing down the path of collecting ever more data by deploying healthcare providers as administrative staff is a recipe for disaster. The amount of noise in the data only increases rather than decreases, and this does not enhance the quality of AI support in the medical process. Rather, there is a risk that low-quality data will lead to more errors.

The image accompanying this article was generated using Microsoft Copilot, based on my prompt. Sadly, Copilot has a limited understanding of diversity when generating images.


