Leapfrogging AI adoption in nuclear medicine
Learnings from radiology
February 23, 2026
By Maarten Larmuseau, CEO of Nuclivision

Walk into RSNA, radiology's largest global conference, and you'll see the future of medical imaging taking shape right before your eyes. In December 2025, the technical exhibit floor was filled to the brim with companies from Microsoft, Siemens, and GE Healthcare down to five-person startups, all converging on the same two letters: "AI". Finding a booth without "AI" in its marketing materials was nearly impossible, except for the occasional pure hardware vendor proudly staking their reputation on decades of craftsmanship.

This AI omnipresence isn't just hype. Over 150 FDA-approved standalone medical imaging AI applications now exist, according to the Health AI Register [1]. Yet the distribution reveals a stark divide: 70 applications for CT, 50 for MRI, seven for PET, and just one for SPECT. While AI has saturated radiology, nuclear medicine remains largely unserved.
The imbalance makes sense. CT and MRI are high-volume modalities, often needed in acute settings where overworked radiologists make critical decisions under time pressure. Nuclear medicine operates at lower volumes with more time per scan. But this landscape is shifting rapidly. New tracers for neurodegenerative diseases and the expansion of radionuclide therapies are driving unprecedented growth. PET scans may double by 2030, raising an urgent and inconvenient question: how will departments handle this increase?
The answer lies in those two omnipresent letters. But although nuclear medicine is eager to adopt, AI's path in radiology hasn't been smooth, and there are lessons to be learned. In radiology, the first commercial AI programs were available back in 2017, meaning it took almost nine years to reach large-scale clinical adoption. Given current market trends in nuclear medicine, with departments juggling staff shortages and high operational costs, it is clear that the field will not have that much time: adoption must happen faster.
In this piece, I examine what nuclear medicine can learn from radiology's experience to accelerate AI adoption and prepare for the rise in scans that is undeniably coming.
Lesson 1: The AI wave is led by AI-native companies
A first observation is that the capabilities of deep learning have given rise to a wave of new, “AI-native” companies in the field of diagnostic imaging. Successful companies like Aidoc and Gleamer built their first products on scientific advances in deep learning several years before the established players did. This was in the pre-Large Language Model era, when AI and deep learning were still considered two distinct concepts. These AI-native companies used a consistent market entry strategy: developing a single, focused application to gain early traction, then expanding their product portfolio to cover more indications. Aidoc started with intracranial hemorrhage detection, while Gleamer focused on bone fracture identification. The value proposition of these initial products was to enable faster detection, in order to improve quality of care for patients requiring urgent help. In such a setting, the performance of the algorithm can be quantified using metrics such as sensitivity or positive predictive value (PPV)[2].
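As a toy illustration, both metrics follow directly from a confusion matrix. The counts below are made up purely for demonstration and do not describe any real product:

```python
# Hypothetical confusion-matrix counts for an urgent-findings detector
# evaluated on a test set (illustrative numbers only).
tp, fp, fn, tn = 90, 15, 10, 885  # true/false positives, false/true negatives

sensitivity = tp / (tp + fn)  # share of true cases the model catches
ppv = tp / (tp + fp)          # share of model alerts that are correct

print(f"sensitivity = {sensitivity:.2f}")  # 0.90
print(f"PPV = {ppv:.2f}")                  # 0.86
```

For urgent-care use cases, sensitivity is usually weighted most heavily, since a missed hemorrhage is far more costly than a false alert.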
However, algorithmic performance is only one ingredient for clinical adoption. First, these products need a compelling business case. The targeted indication must have sufficient volume, and clinicians must acknowledge the current pain points and need for the solution. Second, regulatory clearance is essential to demonstrate safety and performance. While reimbursement can be pursued later, the lengthy approval trajectories led AI-native companies to focus on use cases with strong business cases independent of reimbursement. Third, seamless integration is critical: solutions requiring manual uploading or processing are too time-consuming and won't be adopted. It remains an open question whether nuclear medicine will have its own set of AI-native companies, or whether these will be the same companies serving the broader radiology market.
Lesson 2: Integration is key
From a workflow perspective, the integration requirement is a hurdle AI-native companies have to overcome. Their solutions may have to bridge multiple systems: the PACS (Picture Archiving and Communication System, where medical scans are stored), the RIS (Radiology Information System, which manages clinical workflows), the EPR (electronic patient record), and, particularly in nuclear medicine, an independent diagnostic viewer. This multi-system integration is non-negotiable: radiologists won't adopt tools that disrupt their established workflows or require manual data transfers between systems.

The need for integration has sparked a response from established infrastructure players. Several PACS vendors now offer an AI platform to their users, and some have even begun developing their own AI applications, partly to address customer demand and partly to maintain their position in the value chain. The need for comprehensive integrations has also given rise to vendors that have launched AI marketplaces, enabling third-party providers to offer their algorithms to end users. This approach acknowledges a fundamental reality: no single vendor can develop best-in-class AI for every clinical indication. For hospitals, managing multiple AI vendors, each with their own billing, support, and update cycles, creates considerable administrative overhead. This fragmentation has become particularly evident as radiology departments attempt to scale from one or two dedicated AI tools to broader AI coverage across their imaging operations. Dedicated AI marketplace players such as DeepC, Incepto, and Carpl hope to fill this gap.

In nuclear medicine, integration will be just as critical, and the established workflows and integrations may not even suffice. For instance, most tools in radiology rely heavily on integration with the PACS, as it holds all the scan data and radiologists work directly on the images there.
Many nuclear medicine departments do not work on the PACS, but on smaller, dedicated workstations that only store scans temporarily. Hence, the inroads that proved successful in radiology may not lead to the same success in nuclear medicine. Companies will have to understand these subtle workflow differences in order to accelerate adoption by their users.
Lesson 3: Foundation models as the enabling technology
The advent of foundation models has changed the name of the game for AI-native companies. Foundation models act as the "foundation" for many different applications or downstream tasks. Technically, the foundation model converts an input image to a compact, internal representation that is easier for the machine to interpret. This approach delivers two critical advantages.
First, it allows training downstream tasks with far less data and annotations, which are time-consuming and expensive to collect. In many cases, such as new therapies or radiotracers, access to large datasets may be impossible altogether. Second, foundation models often result in better performance and superior generalization. In HealthTech, this generalizability is crucial, ensuring models work across different scanner technologies and acquisition protocols.
Many, if not all, major AI-native companies are now focusing on foundation models, as they enable tackling multiple indications with the same backbone model while requiring less annotated data per task. From a strategic angle, foundation models shift the power balance between AI marketplaces and AI providers. Essentially, foundation models break model development into two distinct phases. In the first phase, large amounts of unlabeled data are used to train a backbone model that can power various downstream applications. In the second phase, smaller labeled datasets are used to train downstream models for specific tasks. Although the first phase requires large amounts of data and compute, the fact that the data does not have to be labeled enables faster collection at a lower price. Once the foundation model has been trained, it becomes easier for AI providers to develop their own suite of tools. As a consequence, these parties may act as the primary or even sole solution provider, and consolidation in this space is bound to happen. Indeed, company booths at RSNA increasingly resemble menu cards, where users can select which indications they want covered by AI support.
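The two-phase structure can be sketched in a few lines of numpy. In this toy sketch a PCA projection stands in for self-supervised foundation-model pretraining, and a ridge-regression head stands in for the downstream model; all data is synthetic and serves only to show the shape of the pipeline, not any real training procedure:

```python
import numpy as np

rng = np.random.default_rng(0)

# Phase 1: learn a compact representation from abundant *unlabeled* data.
# A PCA projection stands in for self-supervised backbone training.
unlabeled = rng.normal(size=(1000, 64))            # 1,000 unlabeled "scans"
mean = unlabeled.mean(axis=0)
_, _, vt = np.linalg.svd(unlabeled - mean, full_matrices=False)
backbone = vt[:8].T                                # keep 8 components

def embed(x):
    """Map raw inputs to the backbone's compact internal representation."""
    return (x - mean) @ backbone

# Phase 2: train a lightweight task head on a *small* labeled set,
# with the backbone kept frozen.
labeled_x = rng.normal(size=(40, 64))
labeled_y = rng.integers(0, 2, size=40).astype(float)  # toy labels
z = embed(labeled_x)
w = np.linalg.solve(z.T @ z + 0.1 * np.eye(8), z.T @ labeled_y)  # ridge fit

def predict(x):
    """Downstream prediction: frozen backbone plus small task head."""
    return (embed(x) @ w > 0.5).astype(int)
```

The point of the sketch is the asymmetry: phase 1 consumes 1,000 unlabeled samples, while phase 2 needs only 40 labeled ones, and a second downstream task could reuse the same `embed` without retraining the backbone.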
This consolidation trend suggests that technological capability is no longer the main differentiator; regulatory strategy and time-to-market have become equally important. Another implication is that the value of high-quality annotated data increases, as the performance of the task-specific models fundamentally depends on the quality of these annotations. As less data is required to train downstream models, the focus shifts from quantity to quality, ensuring smaller training sets still capture the variability needed to build robust models. Such robustness will be mandatory in nuclear medicine, where scan quality is affected not only by scanner and acquisition technologies, but also by differences in tracer dosing and uptake schemes. This results in scans that may be even more heterogeneous between centers, such that a model trained at two centers may not perform as expected at a third, independent one.
Lesson 4: The importance of AI governance
As AI is everywhere, but the technology behind it is not yet understood by many, the main question is what convinces an end user to collaborate with a particular AI provider. In the early days, when there was less trust in AI-based solutions, physicians had to be convinced by metrics that highlighted the performance of the AI model. Over time, as the prowess of AI models has vastly increased, the concern has shifted from ensuring that the model works to ensuring that it keeps working under changing conditions. Large-scale adoption implies that models are used in many different clinical settings, and even within a center the context may change as a new scanner or protocol is introduced. AI providers have to ensure that their models maintain performance in these shifting contexts, and are offering tools and dashboards that let customers monitor performance and identify possible data drift. In a bid for more trust and a more bespoke approach, some companies, such as Mecha Health, go a step further and offer the possibility to finetune their foundation models on center-specific data.
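A minimal sketch of what such drift monitoring boils down to, assuming a simple standardized-mean-shift check on per-scan summary features (synthetic data; commercial dashboards use far richer statistics, but the principle is the same):

```python
import numpy as np

def drift_score(reference, incoming):
    """Per-feature shift of the incoming mean, measured in
    reference standard deviations."""
    ref_mean = reference.mean(axis=0)
    ref_std = reference.std(axis=0) + 1e-9  # avoid division by zero
    return np.abs(incoming.mean(axis=0) - ref_mean) / ref_std

def flag_drift(reference, incoming, threshold=0.5):
    """Flag a batch if any summary feature has shifted too far."""
    return bool((drift_score(reference, incoming) > threshold).any())

rng = np.random.default_rng(42)
reference = rng.normal(0.0, 1.0, size=(500, 4))     # features at validation time
same_protocol = rng.normal(0.0, 1.0, size=(200, 4)) # unchanged acquisition
new_scanner = rng.normal(1.5, 1.0, size=(200, 4))   # e.g. new scanner or dosing

print(flag_drift(reference, same_protocol))  # False
print(flag_drift(reference, new_scanner))    # True
```

In nuclear medicine, the monitored features might include injected dose, uptake time, or reconstruction parameters, precisely the variables that make scans heterogeneous between centers.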
In nuclear medicine, initial trust should be gained on large public benchmark datasets that source data from different centers and scanner technologies. Such datasets are currently largely missing, although initiatives such as AutoPET and UDPET merit an honorable mention. In addition, companies developing AI solutions for nuclear medicine should learn from radiology and focus not only on model performance, but also on ways to identify data drift, improve model explainability, and monitor performance in real time.
Conclusion
Nuclear medicine stands at a critical juncture. With PET volumes projected to double by 2030, departments cannot rely on traditional workflows to manage the increasing demand. Radiology's experience with AI offers a clear roadmap: prioritize seamless integration with existing systems, leverage foundation models to overcome data scarcity, and make AI governance an intrinsic part of the product.
The key difference is timing. While radiology navigated AI adoption through nearly a decade of trial and error, nuclear medicine has the advantage of learning from those lessons. The technology is proven, the integration challenges are known, and the business case is strengthening with each new tracer and therapy indication.
The question is no longer whether AI will transform nuclear medicine, but whether departments will proactively prepare for 2030 or reactively scramble when volumes overwhelm capacity. Those who act now, establishing vendor partnerships, piloting AI workflows, and building internal expertise, will be positioned to handle growth sustainably. Those who wait risk compromising quality, burning out staff, or limiting patient access to critical diagnostics and therapies.
References
[1] N. Antonissen, O. Tryfonos, I. B. Houben, C. Jacobs, M. de Rooij, and K. G. van Leeuwen, "Artificial intelligence in radiology: 173 commercially available products and their scientific evidence," Eur. Radiol., vol. 36, no. 1, p. 526, Jan. 2025, doi: 10.1007/s00330-025-11830-8.
[2] D. M. W. Powers and Ailab, “Evaluation: from precision, recall and F-measure to ROC, informedness, markedness and correlation,” Oct. 2020, Accessed: Mar. 03, 2026. [Online]. Available: https://arxiv.org/abs/2010.16061v1
