Establishing trust in healthcare information

Catherine Richards Golini

External validation of quality has never been more important for healthcare content creators. As generative AI increasingly powers healthcare queries and browser summaries, and media influencers spread medical misinformation unchecked, qualified creators require new ways to differentiate their content. This article explores how robust accreditation methods – such as the PIF TICK – can offer assurances of trust and quality in healthcare information.

Trust in research information and the role of AI

The rise of generative artificial intelligence (AI) has transformed how patients and clinicians access health content, creating opportunities both to improve and to undermine trust in healthcare information. According to the 2024 Deloitte Health Care Consumer Survey, direct, intentional use of generative AI for health hovers around 37–40% (for US consumers), although passive exposure is much higher.1 As a result, consumers can access healthcare information more easily than ever, through search engines, social media and now AI-generated summaries. But can we trust this information? Can we trust the accuracy? Can we trust the source? Can we trust that the information is relevant? Can we trust the messenger? (I’m looking at you, TikTok influencer.)

Generative AI platforms such as ChatGPT and Google’s AMIE now demonstrate diagnostic accuracy rivalling that of non-specialist clinicians, and may even outperform specialists across a raft of diagnostic2–4 and communication5 measures. This progress is remarkable, but it comes with concerns related to relevance, source attribution and trust. Certainly, AI tools can produce well-written content, although if you ask one to write for ‘a reading age of 11’ without specifying that the content is for adults, you’re likely to get superhero and dinosaur metaphors.

Even people not actively using AI are exposed to AI content, often unknowingly. Our current concern (and this time next year we’ll have other current concerns, for sure) is generative AI being used in place of an internet search. AI-generated browser summaries infrequently provide adequate sources, and when they do, few users click through to the source sites; many websites have reported a drop in traffic as a consequence.6

Despite AI’s pervasiveness, patients and clinicians remain cautious about its use in medicine, citing trust and transparency as their greatest concerns.7 However, I’m confident that the steady rise in the use of generative AI for healthcare queries will become a flood once the trust and reliability of sources can be established.

What makes healthcare information trusted?

If trustworthy healthcare information is the goal, what does it look like, and how can we decide where to place our trust? At its core, trusted healthcare information is:

  • comprehensible: written in plain language, free from jargon, euphemism and complex structures
  • evidence-based: clinically accurate and up to date
  • relevant: what the reader wants to know; culturally appropriate and inclusive
  • actionable: supportive and presents risks and benefits transparently.

I am indebted to the patients I interview for helping me in my efforts to achieve this, but if I had to choose the single most important element of quality patient information it would be, without a doubt, plain language. If you cannot comprehend the meanings of the words or are lost in a complex sentence, the message won’t be transmitted at all, however relevant or accurate it may be.

When I joined Karger Publishers as Healthcare Publications Editor on the Fast Facts series in 2022, we had just a handful of patient booklets and leaflets. The growth potential was huge but so was the need to improve our approach to their development. As a respected medical publisher, we knew how to develop clinically accurate and up-to-date information, and the books were generally free from euphemism. But they were not free of complex structures or passives. Sentences could be long with too much (unnecessary?) information. These resources were usually written by clinicians, and it could be a struggle to simplify the text when the clinician believed that in doing so, the content became less clinically ‘correct’. And likely for the same reason, our patient resources clearly prioritized the biomedical, with support for the emotional and mental health impact of living with a disease largely absent.

There was also no systematic, standardized process for involving users in resource development. In short, our patient resources were great examples of quality healthcare information – as it once was. Many changes were required, but I also knew that I needed the leverage of accreditation to bring them about.

Achieving trust

Having recognized the need to bolster the quality of our patient portfolio – and, consequently, public trust in our content as a source of quality information – I proposed that we work towards gaining Patient Information Forum (PIF) TICK certification as a trusted information creator.

The PIF TICK is the UK’s only independent trust mark for quality health information and, globally, is probably the most robust general-use independent mark. Following its establishment in 2020, those certified as trusted information creators now include a growing number of pharma companies, communications agencies, hospitals, freelance writers and video content creators, in addition to the core membership of founding organizations.

Pursuing PIF TICK recognition represents more than external validation. The process and annual assessment have provided structure for internal improvement and – I don’t mind admitting – leverage when pushing for changes to our processes. The framework is comprehensive: almost 40 criteria and sub-criteria that encompass the entire life cycle of patient information development, from conception through dissemination and impact evaluation. Quality is a habit, and the PIF TICK is not a pass–fail scheme.

PIF seems to be aware that its framework needs to adapt to the varying business models and characteristics of its members, and to different information formats and channels. How certain criteria can be applied to every format is not always evident, nor is it always feasible. Although the core principles of quality patient information – plain language, evidence-based content, user involvement, accessibility – remain relevant no matter what the format or channel, how these principles are applied can differ.

Fast Facts for Patients were accredited for the first time in 2023. I was informed in August this year, at our third accreditation meeting, that we can now expect ‘spot checks’ going forward. I took that as a sign that many of our development processes are now firmly established. I also came away with tasks for the next 12 months: areas that we still need to work on. These relate particularly to measuring impact – always a challenge for a publisher with resources funded by industry partners.

Perhaps the change I value the most following accreditation is that patient resources are understood better and taken more seriously by many of my colleagues. Convincing a traditional medical publisher that patient education is as valuable as medical education for clinicians and healthcare professionals is quite an achievement.

Looking to the future: universal health literacy

Although initiatives such as the PIF TICK and Germany’s InfoCure aim to establish trust in information sources, accreditation alone cannot solve the trust problem. Information will continue to flow, and bans or restrictions won’t address the underlying gaps. The only sustainable response to questions of quality and trust in a world where all information will be driven by AI is, in my view, universal health literacy.

The concept of Planetary and One Health Literacy calls for a more comprehensive understanding of health to reflect the strong interconnectedness of human health and other ecosystems across all scales, to sustain and strengthen all.7,8 Our understanding of health literacy must evolve to recognize that everyone – individuals, communities, educators, governments and policy-makers – needs the competencies to critically appraise information, make sustainable and health-promoting decisions, and act across human and ecological systems.

Our medical communications community can act right now by recognizing that every publication is both information and education. Plain language summaries can do more than just simplify research. At worst, a summary is an automatic rewording of an abstract; at best, a plain language summary could be an opportunity to teach readers about research and give them the skills to think about it critically. A shared commitment from medical and scientific publishers to connecting the findings of published health research to the broader determinants of health, their environmental impacts and community contexts would be an enormous step in the right direction.

References

  1. Wolters Kluwer Health. Generative AI in Healthcare: Gaining Consumer Trust—Survey Executive Summary. 2023. Available from: https://assets.contenthub.wolterskluwer.com/api/public/content/2110015-generative-ai-executive-summary-dbf05cc8de?v=5713b735 (Accessed 4 December 2025).
  2. Takita H, Kabata D, Walston SL et al. A systematic review and meta-analysis of diagnostic performance comparison between generative AI and physicians. npj Digit Med 2025;8:175. https://doi.org/10.1038/s41746-025-01543-z.
  3. McDuff D, Schaekermann M, Tu T et al. Towards accurate differential diagnosis with large language models. Nature 2025;642:451–7. https://doi.org/10.1038/s41586-025-08869-4.
  4. Tu T, Schaekermann M, Palepu A et al. Towards conversational diagnostic artificial intelligence. Nature 2025;642:442–50. https://doi.org/10.1038/s41586-025-08866-7.
  5. Shaberman A, Richards Golini C, Rockell K et al. PLSing the crowd? Multistakeholder assessment of human- and AI-generated plain language summaries [abstract]. Curr Med Res Opin 2025;41(suppl 1):S48. https://doi.org/10.1080/03007995.2025.2482337.
  6. Whitwam R. Surprising no one, new research says AI Overviews cause massive drop in search clicks. 2025. Available from: https://arstechnica.com/ai/2025/07/research-shows-google-ai-overviews-reduce-website-clicks-by-almost-half/ (Accessed 7 December 2025).
  7. Jochem C, von Sommoggy J, Hornidge AK, Schwienhorst-Stich EM, Apfelbacher C. Planetary health literacy as an educational goal contributing to healthy living on a healthy planet. Front Med (Lausanne) 2024;11:1464878. https://doi.org/10.3389/fmed.2024.1464878.
  8. Jochem C, Doyle G, Sørensen K, Kickbusch I, Rüegg S, De Gani SM. A call for a shared future vision for Planetary and One Health Literacy. Health Promot Int 2025;40(6):daaf200. https://doi.org/10.1093/heapro/daaf200.

The views expressed in this blog post are those of the author and do not necessarily reflect those of Open Pharma or its Members and Supporters.

Catherine Richards Golini is Patient Resource Manager at Karger Publishers.