The market launch of ChatGPT in 2022 was a milestone: suddenly, the power of so-called generative AI models – GenAI for short – became tangible for everyone. What had previously been considered a pipe dream found its way into our everyday lives overnight. These systems, built on large language models and trained on hundreds of millions of data points – from cat pictures to rocket plans to legal treatises – have fundamentally changed the way we search for and process information.
Nowhere is this more evident than in the replacement of «Dr. Google» by «Dr. ChatGPT»: more and more people place near-blind trust in answers generated in a fraction of a second. At the same time, experts in many fields are adopting these technologies: in some countries, for example, over 40% of doctors already use GenAI to help them research and review medical cases.
Why generic models reach their limits in everyday clinical practice
The excitement surrounding GenAI and generic large language models (LLMs) has long since reached the Swiss healthcare sector. At first glance, applications built on such models look like the answer to the enormous administrative burden of everyday clinical practice: automatically generated medical reports, medical histories at the touch of a button, less paperwork. On top of that, huge amounts of data can be analyzed, compared, and condensed in very little time – and users receive an immediate assessment.
But this is precisely where the challenge lies: has the model really understood the original question? Are the results based on reliable sources? And – perhaps the most crucial question – do these sources even exist? If not, we are dealing with so-called «hallucinations»: texts or facts that sound convincing but are in fact completely wrong.
General large language models are trained to generate content that is as plausible as possible – not to produce reliable medical documentation. In everyday clinical practice, however, accuracy, consistency, and traceability are key. Even minor inaccuracies can have consequences for diagnoses, therapies, or billing: «blood thinner» can become «blood thickener», or «dyslipidemia» can turn into «dislepidemia». One thing must not be forgotten here: documentation is not just an administrative duty but an integral part of the medical thought process – and the medical profession bears full responsibility for the outcome.
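To make this class of error concrete, here is a minimal sketch of a terminology check: generated terms are compared against a known medical vocabulary, and near-misses are flagged for review. The tiny lexicon and the difflib-based matching are purely illustrative assumptions – a production system would validate against a curated terminology such as SNOMED CT or ICD-10 – and this is not a description of any specific product's mechanism.

```python
# Minimal sketch: flag generated terms that nearly match a known medical
# vocabulary but are spelled differently - the «dislepidemia» class of error.
# The lexicon is a toy stand-in for a curated terminology (e.g. SNOMED CT).
import difflib

LEXICON = {"dyslipidemia", "hypertension", "anticoagulant", "atrial fibrillation"}

def flag_suspect_terms(generated_terms: list[str]) -> list[tuple[str, str]]:
    """Return (suspect, suggestion) pairs for terms that are close to,
    but not exactly, a known lexicon entry."""
    flags = []
    for term in generated_terms:
        if term.lower() in LEXICON:
            continue  # exact match: nothing to flag
        close = difflib.get_close_matches(term.lower(), LEXICON, n=1, cutoff=0.8)
        if close:
            flags.append((term, close[0]))  # likely misspelling of a real term
    return flags

print(flag_suspect_terms(["dislepidemia", "hypertension"]))
# -> [('dislepidemia', 'dyslipidemia')]
```

A check like this catches spelling-level corruption; semantic flips such as «blood thinner» becoming «blood thickener» require deeper, context-aware validation.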
AI for healthcare as a sustainable alternative
However, this does not mean that artificial intelligence has no place in healthcare. The decisive difference lies between general-purpose language models and specialized LLMs designed specifically for medical applications. While general models can be useful for everyday purposes, specialized models deliver the precision and structure that the work of medical professionals demands. Medicine is a highly specialized field – if your ears hurt, you see an ENT specialist; if you have heart problems, you see a cardiologist. To put it bluntly: using generic language models in healthcare is like a surgeon operating with a Swiss Army knife.
Voicepoint Xenon®: Precision instead of hallucination
Innovation is at the heart of everything we do at Voicepoint. Our goal is to combine state-of-the-art speech and AI technology with real-world scenarios in healthcare. For years, we have been following a growing field that originated in the US: Ambient Documentation Technology (ADT) – the combination of GenAI and the doctor-patient conversation. The benefits are obvious: a 20-minute consultation can be transcribed, analyzed, and summarized in less than 15 seconds. No more tedious typing, no additional paperwork – and more time for what really matters: the well-being of patients.
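In outline, such an ADT flow has two stages: speech recognition turns the consultation audio into a transcript, and a summarization model condenses that transcript into a draft note for the physician to review. The sketch below shows only this shape; transcribe_audio and summarize_transcript are hypothetical placeholders with canned outputs, not Voicepoint's actual API.

```python
# Deliberately simplified sketch of an ambient-documentation flow:
# consultation audio in, structured draft note out. Both processing steps
# are hypothetical placeholders for real speech-to-text and summarization
# components.
from dataclasses import dataclass

@dataclass
class DraftNote:
    transcript: str  # verbatim doctor-patient dialogue
    summary: str     # condensed note, still to be reviewed by the physician

def transcribe_audio(audio: bytes) -> str:
    # Placeholder: a real system would call a medical speech-to-text engine.
    return "Doctor: Any chest pain? Patient: No, just shortness of breath."

def summarize_transcript(transcript: str) -> str:
    # Placeholder: a real system would call a healthcare-specific LLM here.
    return "Patient reports shortness of breath; denies chest pain."

def generate_draft_note(audio: bytes) -> DraftNote:
    transcript = transcribe_audio(audio)
    return DraftNote(transcript=transcript, summary=summarize_transcript(transcript))

note = generate_draft_note(b"...consultation audio...")
print(note.summary)
```

The crucial step the code leaves implicit comes last: the draft never enters the record unreviewed – the physician remains the final authority.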
But as simple as it may seem at first glance, implementation in practice is complex. GenAI carries risks: what if a system suddenly adds information that was never mentioned – or even «invents» medications? It is precisely these hallucinations that undermine trust – and, without protective mechanisms, they rule out medical applications altogether.
At Voicepoint, we have risen to this challenge. We do not use a generic GenAI model, but solutions developed specifically for the healthcare sector. We rely on specialized models trained on hundreds of real doctor-patient interactions. With a curated version of the Corti AI model, which is based on over 100 million patient contacts, we create a medical language model that generates exclusively medical content. On top of that, we have implemented guardrails: outputs are checked, and unreliable information is consistently filtered out.
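One way to picture such a guardrail is a grounding check: every statement in the generated note must be supported by the source transcript, and anything unsupported – an invented medication, say – is dropped before the note reaches the physician. The toy filter below illustrates the principle only; the medication list and the substring matching are naive assumptions, not Voicepoint's actual filtering logic.

```python
# Toy grounding guardrail: any medication named in the generated summary
# but never mentioned in the transcript is treated as a likely hallucination,
# and the offending sentence is filtered out.
KNOWN_MEDICATIONS = {"warfarin", "metformin", "lisinopril"}

def filter_unsupported(summary_sentences: list[str], transcript: str) -> list[str]:
    transcript_lower = transcript.lower()
    kept = []
    for sentence in summary_sentences:
        named = {m for m in KNOWN_MEDICATIONS if m in sentence.lower()}
        # Keep the sentence only if every medication it names also appears
        # in the source transcript.
        if all(m in transcript_lower for m in named):
            kept.append(sentence)
    return kept

transcript = "Patient takes warfarin daily. No other medication was discussed."
summary = ["Patient is on warfarin.", "Patient also takes metformin."]
print(filter_unsupported(summary, transcript))
# -> ['Patient is on warfarin.']
```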
We are proud of the result: maximum precision with maximum relevance. While a standard model takes millions of irrelevant parameters into account – thereby increasing the risk of errors – our approach focuses exclusively on what matters in medicine.

Conclusion: Relief only with precision and a deep understanding of the subject matter
The way forward therefore lies not in generic artificial intelligence, but in a platform concept built on healthcare-specific LLMs that relieve doctors in a targeted way without undermining their responsibility. Voicepoint Xenon® shows how this balancing act can succeed: a healthcare-specific LLM, 20 years of experience in the Swiss healthcare system, a deep understanding of the needs of medical professionals, and, last but not least, a strong focus on data protection and information security.
The key is to consider the entire documentation process: Voicepoint Xenon® addresses not just a single documentation problem, but the entire spectrum of medical documentation. This provides real relief – while maintaining the foundation of precision, trust, and responsibility.
