1. Arsiwala-Scheppach LT, Chaurasia A, Müller A. Machine learning in dentistry: a scoping review. J Clin Med. 2023; 12 https://doi.org/10.3390/jcm12030937
2. Schwendicke F, Golla T, Dreher M, Krois J. Convolutional neural networks for dental image diagnostics: a scoping review. J Dent. 2019; 91 https://doi.org/10.1016/j.jdent.2019.103226
3. Schwendicke F, Chaurasia A, Arsiwala L. Deep learning for cephalometric landmark detection: systematic review and meta-analysis. Clin Oral Investig. 2021; 25:4299-4309 https://doi.org/10.1007/s00784-021-03990-w
4. Cantu AG, Gehrung S, Krois J. Detecting caries lesions of different radiographic extension on bitewings using deep learning. J Dent. 2020; 100 https://doi.org/10.1016/j.jdent.2020.103425
5. Askar H, Krois J, Rohrer C. Detecting white spot lesions on dental photography using deep learning: a pilot study. J Dent. 2021; 107 https://doi.org/10.1016/j.jdent.2021.103615
6. Google's neural machine translation system: bridging the gap between human and machine translation. 2016. https://arxiv.org/pdf/1609.08144.pdf (accessed March 2023)
7. Shickel B, Tighe PJ, Bihorac A, Rashidi P. Deep EHR: a survey of recent advances in deep learning techniques for electronic health record (EHR) analysis. IEEE J Biomed Health Inform. 2018; 22:1589-1604 https://doi.org/10.1109/JBHI.2017.2767063
8. Hornik K. Approximation capabilities of multilayer feedforward networks. Neural Networks. 1991; 4:251-257
9. Steyerberg EW, Uno H, Ioannidis JPA, van Calster B. Poor performance of clinical prediction models: the harm of commonly applied methods. J Clin Epidemiol. 2018; 98:133-143 https://doi.org/10.1016/j.jclinepi.2017.11.013
10. Nagendran M, Chen Y, Lovejoy CA. Artificial intelligence versus clinicians: systematic review of design, reporting standards, and claims of deep learning studies. BMJ. 2020; 368 https://doi.org/10.1136/bmj.m689
11. Schwendicke F, Singh T, Lee JH; IADR e-oral health network and the ITU WHO focus group AI for Health. Artificial intelligence in dental research: checklist for authors, reviewers, readers. J Dent. 2021; 107 https://doi.org/10.1016/j.jdent.2021.103610
Schwendicke F, Arsiwala-Scheppach LT, Krois J. Artificial intelligence: what it is and what it can do for dentists. Dent Update. 2024; 50:707-709.
Authors
Falk Schwendicke
Professor and Head; Department of Oral Diagnostics, Digital Dentistry and Health Services Research, Charité – Universitätsmedizin Berlin, Germany
ITU/WHO Focus group AI4Health; Department of Oral Diagnostics, Digital Dentistry and Health Services Research, Charité – Universitätsmedizin Berlin, Germany
Artificial intelligence (AI) is an increasingly relevant topic for dental clinicians, with AI applications entering the clinical arena at a high pace. This article outlines what AI is, how it works and what its fields of application are, but also the challenges the profession faces now and in the future. Computer vision, language processing, simulation and precision dentistry are the main fields where AI is, or will be, applied in dentistry. Generalizability to external data sources, accuracy, usefulness and explainability are the main cornerstones of AI applications for health. Clinicians should be able to appraise AI applications before integrating them into their daily workflow. AI will be useful for synthesizing an increasing amount of data in dentistry, allowing more automated, efficient and precise care. Certain tools will also facilitate patient communication and documentation. Dentists should critically evaluate AI against certain quality criteria and standards.
CPD/Clinical Relevance: It is important to be aware of the applications of artificial intelligence in dentistry.
Article
Artificial intelligence (AI) has become a reality – from autonomous driving to face recognition. But what exactly is AI? How do AI applications work? What opportunities, but also what challenges come with the use of AI? In this article, we explain the technological background, showcase applications available to dental professionals today, and describe where AI can be useful, but also which aspects of AI we should critically appraise and further improve.
The term AI was coined in the mid-1950s, although the definition has evolved over time. The English Oxford Living Dictionary defines AI as ‘The theory and development of computer systems able to perform tasks normally requiring human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages’ (https://en.oxforddictionaries.com/definition/artificial_intelligence). AI encompasses a range of applications, for example computer vision, natural language processing, robotics, virtual reality and simulation systems, and decision support, as detailed below:
The analysis of medical images (eg radiographs or clinical photographs) using computer vision has the potential to improve diagnostic accuracy and patient communication, and to save time;
Natural language processing (NLP) allows speech-based documentation, as well as the meaningful systematization and linkage of structured and unstructured text. It also facilitates language-based interaction of humans with machines;
Surgical, nursing and other robots are already available and used in practice;
Simulations are regularly employed in pharmaceutical research and in the machinery industry;
Decision support by leveraging and synthesizing a wealth of data.
The optimism, and also the recent achievements, in the field have been facilitated by three main factors, namely hardware, software and data (Table 1). Notably, interest in AI technologies and the belief in their transformative nature have evolved in cycles: AI technologies were first hyped, and the hype was then replaced by disappointment and disillusionment. These periods of disillusionment are referred to as ‘AI winters’ (Figure 1).
Table 1. Factors driving the success of artificial intelligence.

Hardware: The further development of specialized computer chips, especially graphics cards, has made computationally intensive applications, such as computer vision, possible. The idea of letting machines learn to ‘see’ and thereby analyse images is more than 50 years old; however, for decades there was not enough computing capacity available for this vision to materialize.

Software: Software and algorithms for developing AI applications are nowadays accessible not only to the few (eg universities, industry), but to everybody – open access is standard for supporting software suites, and the resulting AI algorithms are usually also freely available. This allows a democratization of AI and the rapid spread of applications.

Data: The digitalization of the world generates an exploding amount of data; data are considered the ‘new oil’. In medicine, this pertains to clinical, historical and imaging data, but increasingly also to ‘omics’ data, eg microbiomics (the analysis of the human microbiome, including in the oral cavity), genomics (the analysis of the human genome) or proteomics (the analysis of proteins involved in metabolic processes or diseases).
In dentistry, too, AI has been increasingly researched, as documented by an exploding number of publications (Figure 2). In parallel, although with the expected time lag for development and regulatory efforts, a growing number of AI applications has also entered dentistry.1
Machine learning: what is behind most AI
A major element behind many AI applications is machine learning (ML). In ML, it is not the human who defines the rules that machines follow to fulfil certain tasks; instead, the machine itself learns rules from the data provided to it. The most prominent paradigm in ML, referred to as supervised learning, works as follows:
Certain data objects, for example pictures, are assigned certain information (a label, also known as an annotation) by a human, the so-called annotator. In the case of imagery, such information could be ‘this radiograph contains a caries lesion’ or, more specifically, ‘these pixels on this radiograph contain a carious lesion’. Unlike the labelling of photographs from the everyday world where, for example, buses or traffic lights are to be identified, medical labelling requires expert knowledge, is difficult to scale and is also more expensive. A single human can detect a bus with high certainty, but the same cannot be said for detecting pathologies on a medical radiograph, for which several – expensive – experts are often needed, thus driving up costs.
From data (for example, thousands of pictures with buses, cats or carious lesions) and the associated information (the labels), machines learn the statistical patterns in the data in an iterative process. During this process, the machines learn, step by step, from their own mistakes. The algorithm first generates a possibly random prediction for a given data object. The generated result is then compared with the true information, and the machine learns whether or not it was a match. From the match or mismatch result, and over numerous repetitions (‘epochs’), the algorithm is optimized iteratively to minimize the error rate.
Finally, the algorithm learns a mapping from an input (eg dental radiograph) to an output (eg carious lesion is present on the image). An algorithm trained in this way should be able to perform the same task on previously unseen data sets.
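To make this training loop concrete, the following minimal sketch (in Python, using only NumPy and synthetic data in place of real radiographs and expert labels) trains a simple classifier by repeatedly comparing its predictions with the true labels and nudging its parameters to reduce the error over many epochs. It illustrates the principle only; real dental AI models are far larger and are trained on annotated images.

```python
import numpy as np

# Synthetic, illustrative data: each "image" is reduced to 4 numeric
# features; the label is 1 (lesion present) or 0 (lesion absent).
rng = np.random.default_rng(seed=0)
X = rng.normal(size=(200, 4))                  # 200 labelled examples
true_w = np.array([1.5, -2.0, 0.5, 1.0])
y = (X @ true_w + rng.normal(scale=0.5, size=200) > 0).astype(float)

# Model parameters start out random ("a possibly random prediction").
w = rng.normal(size=4)
b = 0.0
learning_rate = 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Iterative training over repeated passes through the data ("epochs").
for epoch in range(100):
    p = sigmoid(X @ w + b)                     # the model's current prediction
    error = p - y                              # compare with the true label
    # Adjust the parameters slightly to reduce the error (gradient descent).
    w -= learning_rate * X.T @ error / len(y)
    b -= learning_rate * error.mean()

accuracy = ((sigmoid(X @ w + b) > 0.5) == y).mean()
print(f"Training accuracy after 100 epochs: {accuracy:.2f}")
```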
Computer vision
The logic behind ML was explained above using the example of image analysis. The specific field of applying ML methods to image analysis is termed ‘computer vision’. The dental literature on AI shows a clear emphasis on image analysis and the related models (Figure 3).
How do machines see? In pictures, humans see colours, shapes, patterns and structures, and are able to infer objects and a whole array of information. Machines, however, assess images differently. First, image properties are extracted from the input image via filters that are configured to detect certain image features (edges, curves, colours, textures, etc). The configuration of these filters is adjusted stepwise by the algorithm during training. Each filter, upon scanning the entire image, creates its own representation of the original image. Through the use of hundreds or thousands of such filters, a complex data structure is created, which is a numerical representation of the original image. This data object can no longer be interpreted by humans, but can be analysed by ML, specifically by artificial neural networks. These consist of layers of simple mathematical operations referred to as artificial neurones. For image data, a specific subtype of network is used: the convolutional neural network (CNN).
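As a purely illustrative sketch of such an architecture (in Python/PyTorch, with arbitrary layer sizes and a random tensor standing in for a greyscale radiograph; none of this reflects any specific published dental model), a convolutional neural network stacks banks of learnable filters whose outputs form the numerical representation that is finally mapped to a classification:

```python
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    def __init__(self):
        super().__init__()
        # Each Conv2d layer applies a bank of learnable filters that scan
        # the image and produce one feature map ("representation") per filter.
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),   # 16 filters on a greyscale image
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),  # 32 filters on the 16 feature maps
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        # The flattened feature maps are mapped to two illustrative classes,
        # eg 'lesion' vs 'no lesion'.
        self.classifier = nn.Linear(32 * 56 * 56, 2)

    def forward(self, x):
        x = self.features(x)          # numerical representation of the image
        x = torch.flatten(x, 1)
        return self.classifier(x)     # class scores

model = TinyCNN()
fake_radiograph = torch.randn(1, 1, 224, 224)   # one 224 x 224 greyscale 'image'
print(model(fake_radiograph).shape)             # torch.Size([1, 2])
```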
Computer vision has a range of useful applications in dentistry. Radiographic analysis is one of the main AI applications, and worldwide, several groups have developed AI software to support dentists in the diagnosis of lesions on panoramic, peri-apical, bitewing and cephalometric images.2 For these images, the software supports the detection and classification not only of pathologies, such as apical lesions, periodontal bone loss and caries, but also of physiological structures and artificial materials, such as fillings, crowns and implants. On cephalometric images, AI systems are widely employed for the detection of landmarks, with an accuracy consistently similar to that of orthodontists.3
There are three main advantages in the use of AI here:
The detection of anatomical and artificial structures can assist the practitioner. A ‘preliminary report’ can be generated by the AI system that is comprehensive and systematic, requiring only checking and minimal correction by the practitioner, saving time and enhancing the quality of the documentation.
For pathology detection, most of the developed models are as good as, and sometimes even better than, trained dentists. For example, AI models (Figure 4) used to detect early caries lesions on bitewing radiographs were significantly more accurate than dentists.4
Lastly, the use of AI applications can enhance patient communication. For example, certain AI solutions can generate an augmented radiograph with any pathologies highlighted in colour (Figure 4), which can help patients gain a deeper understanding of the diagnosis (a minimal sketch of such an overlay follows this list). A prerequisite for this, however, is that the AI displays the findings in a communicable way and not merely as boxes or classification output (eg ‘there is a caries lesion on this image’).
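The sketch below (in Python, using NumPy and Pillow) shows one way such a colour overlay could be produced; the binary mask of ‘suspicious’ pixels is assumed to be the output of some segmentation model and is replaced here by synthetic data, so this illustrates the presentation step only, not any particular product.

```python
import numpy as np
from PIL import Image

def highlight_findings(radiograph: Image.Image, mask: np.ndarray,
                       colour=(255, 0, 0), alpha=0.4) -> Image.Image:
    """Overlay a binary prediction mask (eg 'carious pixels') on a greyscale
    radiograph as a semi-transparent colour highlight. The mask itself would
    come from a segmentation model; here it is simply assumed to be given."""
    base = radiograph.convert("RGB")
    overlay = np.array(base, dtype=float)
    selected = mask.astype(bool)
    overlay[selected] = (1 - alpha) * overlay[selected] + alpha * np.array(colour)
    return Image.fromarray(overlay.astype(np.uint8))

# Example with synthetic data: a uniform grey 'radiograph' and a square 'lesion'.
radiograph = Image.fromarray(np.full((256, 256), 120).astype(np.uint8))
mask = np.zeros((256, 256), dtype=bool)
mask[100:140, 100:140] = True
highlight_findings(radiograph, mask).save("augmented_radiograph.png")
```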
On photographic imagery, models have been constructed to detect teeth, restorations, caries lesions, mucosal and skin lesions, and to assess facial profiles.2 For example, we used ML to detect white spot lesions on photographs and discriminate fluorosis from other white-coloured entities.5 Such models may facilitate more accurate diagnosis and treatments, and could also lead to patient-centred applications (eg mobile phone applications for second opinions, or symptom checkers).
Machines that talk, simulate or augment
There are many further applications, but few have yet entered clinical care.
Natural language processing
Natural language processing (NLP) allows machines to understand meaning in text and speech and to generate meaningful language on their own.6 A wide range of new NLP models has been developed recently7 that consistently outperform past approaches.8 NLP has been suggested as a way to make use of the often unstructured electronic health records, and also to facilitate voice transcription and the automation of manual reporting.9 The latest example, ChatGPT, has caused significant excitement.
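As a hedged illustration of how NLP could turn a free-text dental note into structured findings, the sketch below uses the Hugging Face Transformers pipeline API; ‘dental-findings-ner’ is a hypothetical model name standing in for a model fine-tuned on annotated clinical notes, so the example shows the workflow rather than an available product.

```python
from transformers import pipeline  # Hugging Face Transformers

# Illustrative only: 'dental-findings-ner' is a hypothetical fine-tuned model;
# a real deployment would use a model trained on annotated clinical notes.
extractor = pipeline("token-classification",
                     model="dental-findings-ner",      # hypothetical model name
                     aggregation_strategy="simple")

note = ("Patient reports sensitivity on tooth 36; bitewing shows a distal "
        "enamel lesion; composite restoration on 46 intact.")

for finding in extractor(note):
    # Each finding is returned with its entity type, text span and a
    # confidence score, ready to be written into a structured record.
    print(finding["entity_group"], finding["word"], round(finding["score"], 2))
```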
Simulation
Simulation is an AI application visible in everyday life. Autonomous driving, for instance, builds heavily on simulations: a major obstacle in the field is the estimated 8.8 billion miles of test driving required, a task impossible to complete on physical roads. For comparison, Waymo, Google's self-driving car project, has covered 20 million miles. The only strategy to overcome this impossible task is simulation, which also allows quick exploration of economic designs of the technologies required (eg batteries, light detection and ranging systems). So far, the use of simulation in dentistry is in its infancy. One relevant area for dentistry is drug development, which is usually costly. In silico simulation, that is, experiments performed via computer simulation, is increasingly used as an alternative pathway to develop and validate drug candidates.10 Here, large databases are assessed using AI to recognize molecular features and predict the effects of drugs.10 Another area, with at least some pilot applications in dentistry, is ambient experience, that is, the intertwining of the physical and the digital world via augmented or virtual reality.11 Applications here mainly focus on education, for example the training of motor skills, anatomical education and surgery.
P4 dentistry
Faster and more efficient processes, relief for doctors, better patient communication, more accurate detection and, ultimately, better diagnostic and therapeutic quality: the expectations of how AI will improve healthcare are immense. On the horizon appears the vision of using the exploding amount of data – from patient history and claims data, through clinical and image data, to patient-generated data – to better understand an individual patient's health condition.12 This would allow a more personalized and precise approach to diagnosis and treatment. It would also open up new avenues for maintaining health (prevention) and for patients to participate actively in healthcare (as data donors or recipients, for example). This P4 dentistry (predictive, preventive, personalized and participatory) is only starting to evolve, and is hampered by data not being available, standardized or linkable (Figure 5). Overcoming these challenges will speed up this path and enable more efficient, safer and better healthcare based on data, something we have coined ‘data dentistry’.12
Challenges
The young field of AI research in medicine has reached a stage where proof-of-principle applications are available for a wide range of use cases. Many groups worldwide have acquired data and trained ML models for specific medical questions, but so far, few have gone beyond this stage. It is becoming increasingly apparent that the translation of research into application is anything but simple in practice: it is lengthy, complex and expensive. Moreover, a major limitation is that many AI applications are only limitedly generalizable (ie able to work on new data from a new setting), fair (ie performing equally well across population groups) and robust (ie not prone to adversarial attacks).1,13
Also, many AI applications have only been validated to a limited extent: validation on external data, or in prospective studies, has often not taken place. Showing that an AI model is well trained and has high accuracy on a pre-selected dataset is a first step, but these results must be audited carefully because, in principle, AI applications can memorize even large and complex datasets by heart.14,15 Ideally, AI models are also validated prospectively, especially in randomized controlled trials. The way practitioners interact with AI systems in such settings partially determines the usefulness of AI applications, as practitioners may agree or disagree with the AI results.16 The claimed benefits, for example time saving, better communication, and safer and more efficacious treatments, need to be demonstrated in prospective set-ups. In a recent randomized trial, we demonstrated the prospective value of the dentalXrai Pro software for caries detection.17 By randomly allocating dentists to use the software or not, we could show that the AI enabled the dentists to detect early lesions with higher sensitivity, facilitating early intervention. Only on the basis of such data should we claim the usefulness of AI.
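At its simplest, such an audit on external data comes down to computing performance measures, for example sensitivity and specificity, on cases the model has never seen. The sketch below (in Python, using scikit-learn, with placeholder labels standing in for an external test set) illustrates only the calculation; a real audit would involve far larger samples, confidence intervals and pre-registered protocols.

```python
import numpy as np
from sklearn.metrics import confusion_matrix

# Illustrative only: y_true would be the reference diagnoses of an EXTERNAL
# dataset (different clinics, devices and populations than the training data);
# y_pred the model's output on the same cases.
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 0])   # placeholder labels
y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 0, 1, 0])   # placeholder predictions

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)   # how many true lesions were detected
specificity = tn / (tn + fp)   # how many healthy sites were correctly left alone
print(f"Sensitivity: {sensitivity:.2f}, Specificity: {specificity:.2f}")
```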
Last, the process behind the final decisions of most AI models is hard to interpret owing to the intrinsically complex structure of these models. Many AI applications are ‘black boxes’ that do not allow users to understand which criteria the AI used to arrive at a certain result. Explainable AI systems, whereby the machine's logic can be compared with human logic and thus be verified, could increase trust in AI-based applications.
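One family of explainability techniques, gradient-based saliency mapping, asks which input pixels most influenced the model's decision. The following minimal sketch (in Python/PyTorch, with a placeholder model and a random tensor in place of a radiograph) shows the basic mechanics; production-grade explainability tools are considerably more sophisticated.

```python
import torch
import torch.nn as nn

# Placeholder model and input, for illustration only: not a real dental AI system.
model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 2)
)
model.eval()

image = torch.randn(1, 1, 224, 224, requires_grad=True)  # stand-in 'radiograph'
score = model(image)[0, 1]        # score for the class of interest, eg 'lesion'
score.backward()                  # gradients of that score w.r.t. every pixel

saliency = image.grad.abs().squeeze()   # 224 x 224 map of pixel importance
print(saliency.shape)                   # could be shown as a heat map over the radiograph
```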
Explainability, generalizability, fairness and robustness are at the core of numerous interdisciplinary initiatives that are attempting to raise the standards, and thereby the value, of AI for healthcare applications. For example, the International Telecommunication Union (ITU), together with the World Health Organization (WHO), has set up a focus group to define standards for AI applications in medicine. The authors of the present article are responsible for the topic ‘Dental Diagnostics and Digital Dentistry’ within this international and interdisciplinary consortium. Intensive work is also being carried out elsewhere on quality guidelines for AI research in medicine and dentistry.18 Standardization bodies are increasingly recognizing the value of AI, but also the need for standards and guidance in the field. The dental community is called on to engage in such activities.
Conclusions
Currently, only a few dental AI applications are available, while many more are expected in the coming years. The performance of these applications is difficult to assess owing to the lack of relevant studies in many cases, as well as the limited comparability of methods and results in the available studies. As the potential benefits of AI for dentistry encompass higher diagnostic and therapeutic quality, more efficient care, more effective communication and, generally, safer and more personalized dentistry, these limitations need to be overcome. AI systems, like all diagnostic and therapeutic methods in medicine, must adhere to the principles of evidence-based care. Robust, generalizable and transparent AI systems will have a profound positive impact on healthcare.