Every year, millions of people around the world undergo detailed and time-consuming examinations to assess the health and functionality of their eyes. But what if AI could rapidly identify multiple medical conditions that extend beyond the eye from a single retinal image? Topcon, which manufactures the optical equipment used by most of the world’s eye specialists, has now made that a reality.
Topcon’s Harmony platform runs on Microsoft Azure and enables AI models to evaluate retinal images “for diabetic retinopathy and potentially several other conditions, including cardiometabolic, neurovascular and ophthalmologic disorders,” says Dr. David Rhew, global chief medical officer and vice president of healthcare at Microsoft.
Topcon’s solution makes the diagnostic process “fast, fully automated and inexpensive”, says Rhew, adding that the AI models for detecting diabetic retinopathy have been approved by the US Food and Drug Administration. “Several sites across the USA have deployed this approach and it has the potential to be most impactful for individuals living in rural and underserved communities.”
Topcon’s Harmony solution uses Microsoft Azure and AI technologies to review retinal scans taken during routine eye tests and detect conditions such as diabetic retinopathy
This is just one example of how human healthcare specialists are augmenting their work with AI and cloud technology. From disease screening to analysing test results, developing new precision medicines, automating administrative tasks and personalising patient experiences – these technologies provide insights into vast volumes of multimodal data, allowing healthcare organisations to transform the way they deliver care to the public.
“During the pandemic, we observed that one of the most successful approaches to delivering care across large populations involved connecting public health with community-based organisations and healthcare,” says Rhew. “Public-private partnership was essential, but success was also dependent on applying a nimble technology infrastructure that leveraged cloud, data interoperability, and AI.
“Cloud enabled a rapid scale-up and scale-down platform that was secure, and both time and cost efficient. Data interoperability, specifically around HL7 FHIR [Health Level 7’s Fast Healthcare Interoperability Resources] standards, enabled data communication between healthcare systems and public health organisations. AI helped public health agencies to efficiently manage data sets that were often incomplete, inaccurate, or required manual data entry into computer systems.”
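The HL7 FHIR standard Rhew refers to defines healthcare data as small JSON (or XML) "resources" with agreed field names, which is what lets different systems exchange records without bespoke integrations. As a rough illustration, here is a minimal FHIR R4 Patient resource sketched in Python; the structure (resourceType, identifier, name, birthDate) follows the published FHIR specification, while the identifier system and values are invented for this example.

```python
import json

# A minimal, illustrative FHIR R4 Patient resource. The field names
# follow the public FHIR specification; the values are made up.
patient = {
    "resourceType": "Patient",
    "id": "example-001",
    "identifier": [{"system": "urn:example:mrn", "value": "12345"}],
    "name": [{"family": "Smith", "given": ["Jane"]}],
    "birthDate": "1980-04-12",
}

# Because every FHIR-conformant system agrees on this JSON shape,
# a public health agency can parse records from any compliant
# healthcare provider without per-vendor mappings.
payload = json.dumps(patient)
parsed = json.loads(payload)
print(parsed["resourceType"])       # Patient
print(parsed["name"][0]["family"])  # Smith
```

In practice a FHIR server exposes these resources over a REST API, but the shared schema shown above is the core of the interoperability Rhew describes.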
Capitalising on the potential of AI
Microsoft has developed the Microsoft Cloud for Healthcare to bring together cloud and AI capabilities from its Azure, Dynamics 365, Power Platform, and Microsoft 365 solutions to provide healthcare organisations with the industry-specific workflows, data insights and tools they need to enhance care delivery. Organisations can use the platform – combined with healthcare-specific solutions from Microsoft’s partners – to rapidly build and deploy solutions that boost operational efficiency, improve patient outcomes, reduce clinician burnout, drive research and innovation, and more.
“Microsoft Cloud for Healthcare provides a comprehensive technology stack that clients and partners can leverage to enable more efficient healthcare delivery,” says Rhew. “Data security is at the core of the Microsoft technology stack. Microsoft provides the technology infrastructure and tools that decrease the risk and impact of cyberattacks. Building on top of that, we provide high-performance storage and compute capabilities to help organisations manage large data sets and multimodal AI models. Further up the technology stack, we enable efficient data management by leveraging interoperability standards such as HL7 FHIR and offering secure environments that promote privacy-preserving data collaboration.”
In October 2024, Microsoft updated Microsoft Cloud for Healthcare with new AI tools, including healthcare application templates for Microsoft Purview and solutions to make data more accessible in Microsoft Fabric. There are also new multimodal healthcare models in Azure AI Studio and GitHub, and organisations can use healthcare agent services in Microsoft Copilot Studio to build their own secure copilots with healthcare-specific features. They can be used for applications such as scheduling appointments and allocating treatments.
“AI is dependent on data sets – and the more sources of data that are available, the more robust the AI models become,” says Rhew. “Microsoft Fabric enables organisations to ingest, transform and manage multimodal data sets, while Microsoft Purview enables streamlined governance of the data.
“Microsoft facilitates the deployment of AI models by connecting multimodal data sets to AI development environments such as Azure AI Studio and GitHub. Healthcare agent services integrated in Microsoft Copilot Studio can help enable the creation of agentic AI.”
A helping hand
One key benefit of AI technology for healthcare is its ability to automate routine tasks, which boosts operational efficiency and productivity, while reducing the administrative burden on workers and freeing them up to focus on patients.
For example, AI can be used to risk-stratify patients. “A major challenge in healthcare is the shortage of clinicians, which contributes to increasing wait times and both patient and clinician dissatisfaction,” says Rhew. “The wait time problem is further compounded by inefficient triage methods. In a busy emergency department, AI applied to radiology images can help identify high-risk individuals requiring urgent attention. The technology can also identify low-risk patients who could potentially be managed through an alternative pathway. AI-enabled risk stratification, followed by triage and capacity building, helps make care more efficient and, in the process, improves outcomes.”
In addition, AI can be used to rapidly analyse clinical data and help clinicians and nurses write reports and other documents.
“Public healthcare can be highly fragmented, administrative, and dependent on large data sets,” says Rhew. “The process of acquiring, curating and analysing data, and reporting results is time consuming and inefficient, but generative AI can help in many ways. First, administrative tasks such as data collection, report writing and presentation of results can be performed by generative AI. The technology can also write programming code that can help automate tasks.”
Generative AI can also power virtual assistants with natural language capabilities to allow both patients and healthcare professionals to quickly access the information or service they need.
“Chatbots powered by generative AI may be used to help individuals answer questions and sort through complex problems,” says Rhew. “AI-based chatbots can communicate with individuals in their own language and at their grade level, which enables this technology to be used by anyone, anywhere.”
Microsoft’s Dragon Ambient eXperience (DAX) Copilot, for example, is helping hundreds of healthcare organisations to streamline administrative tasks and document patient visits directly in electronic health records. DAX Copilot combines Dragon Medical’s natural language voice dictation capabilities – used by more than 600,000 clinicians worldwide – with ambient and generative AI to automatically convert multiparty conversations into speciality-specific standardised draft clinical summaries that integrate with existing workflows.
In 2024, 77 per cent of 879 clinicians surveyed by Microsoft said using DAX Copilot saves them an average of five minutes per patient encounter, while 70 per cent credited it for reducing burnout and fatigue, and 62 per cent stated that they are now less likely to leave their organisation. DAX Copilot is positively impacting the patient experience too. When Microsoft surveyed more than 400 patients whose clinicians use the tool, it discovered that 93 per cent reported a better overall care experience.
Microsoft integrated DAX Copilot’s ambient listening capabilities with Dragon Medical’s natural language voice dictation capabilities into a new Dragon Copilot in March 2025. The AI assistant allows clinicians working across ambulatory, inpatient, emergency departments and other care settings to streamline documentation, automate key tasks, and conduct general-purpose medical information searches from trusted content sources.
Healthcare providers around the world are using Azure OpenAI Service to build generative AI assistants too. In Australia, tertiary referral and teaching hospital Liverpool Hospital has created an assistant that enables cardiologists to query vast volumes of literature to answer clinical questions. US-based healthcare system Mercy has built chatbots to allow workers to find policy and procedural information, as well as to complete tasks such as scheduling appointments when taking patient calls. Meanwhile, Chi Mei Medical Center has used AI assistants to cut the time it takes for doctors to write medical reports from 60 to 15 minutes, enabled nurses to document bed transfers in fewer than five minutes rather than 20, and empowered pharmacists to serve double the number of patients per day. Other AI assistants are helping staff to identify patients at risk of falls, supporting nutritionists to produce diet recommendations and creating personalised educational materials for patients with comorbidities.
Rhew expects more healthcare applications to emerge as generative AI technology continues to evolve. “The generative AI outputs demonstrate increasingly higher levels of performance as a result of improvements in the AI model technology and in the application of more sophisticated safeguards,” he says.
Overcoming the barriers
While many patients and healthcare professionals acknowledge the potential of AI-powered solutions to significantly improve the healthcare experience, they also cite concerns regarding data privacy, risks of the technology causing unexpected harm, and more.
Forrester’s Generative AI Impact On Clinicians: Bringing The Fever Down study highlights that 40 per cent of physicians claim AI is overhyped and will not meet expectations. An October 2024 survey by The Alan Turing Institute found that while 52 per cent of UK doctors are optimistic about AI’s potential in healthcare, nearly one-third do not fully understand the risks and almost 70 per cent have not received adequate training on their responsibilities when using these systems.
Patients are similarly sceptical. PwC’s Healthcare Survey 2024 suggests that only one in five is willing to use AI tools for routine tasks, such as booking appointments or refilling prescriptions. The enthusiasm of the remaining 80 per cent is “tempered by worries about data privacy and the quality of care”. Likewise, the ‘Patients’ Trust in Health Systems to Use Artificial Intelligence’ research paper published in the Journal of the American Medical Association Network Open in 2025 indicates that almost 66 per cent of US adults have low trust in healthcare systems to use AI responsibly and 57.7 per cent are doubtful they would be protected against AI-related harm.
“There are two major reasons individuals and organisations may not trust AI: concerns it may cause harm and fear of the technology taking their jobs,” says Rhew.
To overcome concerns about AI replacing humans, the healthcare sector must address the widening AI skills gap, advises Rhew. “We have to develop AI skilling and reskilling programmes that allow individuals to secure a job in a world that is becoming increasingly AI-enabled,” he says. “The first step is to improve AI literacy in the workforce. The second is to train individuals on how to use AI, starting with how to do prompt engineering. And the third is to define new AI-enabled job requirements and develop career programmes that allow people to secure these roles.”
Building trust in AI is also essential to drive the widespread adoption of the technology. “People need to feel confident that AI will not cause harm to individuals and society,” says Rhew. “Operationalising responsible AI principles, transparency of goals and processes, and continued multi-stakeholder dialogue will help with this.”
In 2024, Microsoft joined forces with 16 healthcare providers and two community health organisations to form the Trustworthy & Responsible AI Network (TRAIN) to make high-quality, safe and trustworthy AI tools equally accessible to every healthcare organisation.
“TRAIN is a healthcare system-led consortium whose primary aim is to operationalise responsible AI principles in a time-, resource- and cost-efficient manner,” says Rhew. “Members may apply three approaches to accomplish this goal: one, leveraging technologies that promote and enable responsible AI use; two, redistributing workloads involved with testing; and three, monitoring AI models through standardised collaborations with other TRAIN members. They can also partner with other members and AI developers to share the cost, time and resources involved with testing and monitoring AI models. Today, approximately 50 health systems in the USA and several more in Europe are members of TRAIN.
“When it comes to AI’s tremendous capabilities, there is no doubt the technology has the potential to transform healthcare. However, the processes for implementing the technology responsibly are just as vital. By working together, TRAIN members aim to establish best practices for operationalising responsible AI, helping improve patient outcomes and safety while fostering trust in healthcare AI.”
Discover more insights like this in the Spring 2025 issue of Technology Record. Don’t miss out – subscribe for free today and get future issues delivered straight to your inbox.