
ChatGPT Health: Personalised AI in Healthcare — Promise, Peril, and Practical Guardrails?


OpenAI’s new private health experience brings personal records and wearable data into conversational AI. It could change self‑management and administrative workflows — if we insist on the right safeguards.


OpenAI’s announcement of ChatGPT Health marks a significant inflection point: conversational AI that can ingest medical records and consumer health data to inform personalised health conversations. The feature promises isolated memory, stronger encryption, and integrations with platforms such as Apple Health, popular fitness apps, and record‑aggregators. For clinicians, health‑tech leaders and patients, the question is not whether this will matter — it already does — but how we shape it so the benefits scale without amplifying harm.


Do you recognise the style of what has been written so far?


I used Copilot (another conversational AI) and this is what it would have me say. It posited that this new capability is good but needs better safeguards. That takes as a given that a for-profit, consumer-oriented enterprise should provide this kind of utility, and that we just need to regulate it well. Copilot drew on largely American online content and synthesised a position shaped by cultures, ideologies and health and care models quite different to ours.


Here is what I would have written, if left to my own devices.


OpenAI’s new private health experience brings personal records and wearable data into conversational AI. There are lessons for personalised health and care here – the technology is getting there, but we need to rethink the social model for it to thrive in a consensual, transparent and safe manner.


I have the privilege of fifteen years' experience in digital health and care service development and delivery, so alarm bells rang when I saw Copilot's output. I could take a step back, think critically and form my own, qualified opinion. If you asked me to do the same for a cancer-risk deliberation, I would have little choice but to go with whatever the AI offered, because it has been trained to be extremely plausible to me as a lay person.


The upside of the new capability is straightforward and tangible. A digital assistant powered by this kind of AI can see medication lists, recent lab results and wearable trends, and move beyond generic guidance to actionable, contextual suggestions. It could then help me carry out or administer the resulting actions, handling the complexity of the processes I would otherwise have to navigate. This could be great for prevention, engagement and inclusion, helping people to self-manage more effectively and preparing them better for when they need to work with health and care professionals.


But the risks are real and complex. Privacy is not a single technical toggle. Strong encryption and isolated memory are necessary but not sufficient. Consent (and revocation), data provenance and storage, and third‑party integrations all determine whether a user truly controls their data. Users often underestimate the implications of sharing or importing records. Providing a simple user experience often requires trade-offs, including some obfuscation of exactly what is happening with the data involved. As a marketplace evolves around these new digital assistants, it will become easy for data to leak as users take the easy path of connecting services together to streamline their lives.


Clinical safety is another hard boundary. Models can misinterpret notes, hallucinate details, or miss clinical nuance. Regulatory and liability questions remain unsettled. Who bears responsibility when an AI‑driven suggestion contributes to a poor outcome — the platform, the data integrator, the clinician, or the health system that adopted the tool?


So what should responsible organisations do now?


Well, this is what Copilot thinks:


First, treat these assistants as clinical‑adjunct tools, not replacements. Design workflows that keep clinicians in the loop for diagnostic or treatment decisions and use AI to automate low‑risk, high‑value tasks. Second, insist on transparent, granular consent and easy revocation. Users must know exactly what data is used, for what purpose, and how to withdraw access. Third, require provenance and explainability: every recommendation should link back to the data and logic that produced it, enabling clinicians to validate and correct outputs.


It's hard to disagree with any of these suggestions. They do, of course, rest on the premise that we have to adopt or accommodate consumer-oriented AI tools. If it were me, I would be asking different questions:


Can we learn from some of the principles at play?


The core concept of using 'activities of daily living' data to generate automated, personalised recommendations is sound.


A dialogue-based interaction could be extremely useful for people with lower digital literacy or access. There is no reason a digital assistant couldn’t communicate via SMS texts, for example.
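To make the point tangible, here is a rough sketch of what a channel-agnostic check-in could look like, kept within a single 160-character SMS segment. The person's details, the wording and the gateway stub are all illustrative assumptions rather than a description of any real service.

```python
# Minimal sketch: a personalised check-in delivered over SMS rather than an app.
# The person's details and the gateway stub are placeholders, not a real integration.
from dataclasses import dataclass

SMS_SEGMENT_LIMIT = 160  # a classic single-segment SMS


@dataclass
class Person:
    name: str
    phone: str
    yesterday_steps: int


def build_check_in(person: Person) -> str:
    """Compose a short, plain-language check-in message."""
    message = (
        f"Hi {person.name}, yesterday you logged about {person.yesterday_steps} steps. "
        "Reply 1 if you feel well, 2 if you'd like a call from your care team."
    )
    # Trim to one segment so it works on any handset, no smartphone or app required.
    return message[:SMS_SEGMENT_LIMIT]


def send_sms(phone: str, text: str) -> None:
    """Stand-in for an SMS gateway; prints instead of sending."""
    print(f"To {phone}: {text}")


if __name__ == "__main__":
    margaret = Person("Margaret", "07700 900123", 3200)
    send_sms(margaret.phone, build_check_in(margaret))
```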


Synthesising lots of complex information from exchanges with health and care professionals to allow people to understand and take more ownership of their care is a fundamental good.


Where do we start?


People race to diagnostics with these sorts of capabilities, but some areas of health and care service delivery may be better starting points for early exploration. I would look for lower-risk, less acute, less medical use cases. The health and care system is hindered by significant inertia created by far simpler logistical issues around data sharing and data volumes.


For example, understanding what 'normal' looks like for a person is a perennial problem for social workers, care assistants, paramedics, occupational therapists and hospital staff, among many others. Understanding how active, social and engaged someone is over many years could be important baseline information to help a professional handle risk and tailor care more effectively.
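As a thought experiment, the sketch below shows how a long-run personal baseline might flag a recent change worth a professional conversation. The activity figures and the threshold are entirely made up for illustration.

```python
# Minimal sketch: derive a personal "normal" from historical daily activity
# and flag when a recent period drifts well below it. All numbers are illustrative.
from statistics import mean, stdev


def personal_baseline(daily_counts: list[int]) -> tuple[float, float]:
    """Return the long-run mean and standard deviation of daily activity."""
    return mean(daily_counts), stdev(daily_counts)


def flag_change(history: list[int], recent: list[int], z_threshold: float = 2.0) -> bool:
    """True if the recent average sits more than z_threshold deviations below baseline."""
    baseline, spread = personal_baseline(history)
    if spread == 0:
        return False
    z = (mean(recent) - baseline) / spread
    return z < -z_threshold


# Example: years of daily step counts (stand-in data), then a much quieter fortnight.
history = [6000, 6400, 5900, 6200, 6100, 5800, 6300] * 52
recent = [2100, 1900, 2300, 2000, 1800, 2200, 2050]
if flag_change(history, recent):
    print("Activity is well below this person's usual pattern - worth a conversation.")
```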


This could be achieved using the new digital capabilities, but we would need a more consensual, transparent and safe approach to the collection of personal data and the generation of insight if we want people, organisations and society to trust it. So, think about where the data will be stored in this model. At a bare minimum we need all the data to be UK-based to adhere to regulatory requirements. We should go further: once we are aggregating a citizen's day-to-day living data, we move beyond the argument that this is simply a medical record.


This means we cannot persist with the cultural and regulatory assumption that implied consent is enough for the data to be stored, shared and used. Ideally, any initial development in this area would work on the basis of explicit consent with full revocation capability, built into the software in question or provided by a suitable person-held record.
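To illustrate the direction of travel, here is a minimal sketch of what an explicit, revocable consent record might look like if it sat with the person rather than being implied. The field names and scopes are assumptions for illustration; a real person-held record would also need audit, provenance and a clear legal basis.

```python
# Minimal sketch: explicit, revocable consent checked before any data use.
# Field names and scopes are illustrative assumptions, not a real standard.
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass
class ConsentGrant:
    subject: str               # the citizen the data is about
    recipient: str             # who may use the data
    purpose: str               # the specific use consented to
    granted_at: datetime
    revoked_at: datetime | None = None

    def revoke(self) -> None:
        """Revocation is recorded rather than deleted, preserving an audit trail."""
        self.revoked_at = datetime.now(timezone.utc)

    @property
    def active(self) -> bool:
        return self.revoked_at is None


def may_use(grants: list[ConsentGrant], recipient: str, purpose: str) -> bool:
    """Data use is allowed only under an active, matching, explicit grant."""
    return any(
        g.active and g.recipient == recipient and g.purpose == purpose
        for g in grants
    )


grant = ConsentGrant("citizen-001", "community-care-team", "activity-baseline",
                     datetime.now(timezone.utc))
grants = [grant]
print(may_use(grants, "community-care-team", "activity-baseline"))  # True
grant.revoke()
print(may_use(grants, "community-care-team", "activity-baseline"))  # False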


What next?

DHI is active in the kind of work I describe above. Stay tuned across our channels as these kinds of privacy-preserving, socially focused models are evidenced through our portfolio of integrated care innovation projects.

Chaloner Chute, Chief Technology Officer

