Noninterpretive Skills: Imaging Informatics Part 2
Part 2 of my review of Noninterpretive Skills: Imaging Informatics. Download the free study guide by clicking here. Prepare to succeed!
Show Notes/Study Guide:
Please refer to the most current version of the American Board of Radiology Noninterpretive Skills Study Guide, available for download from the ABR, to ensure the accuracy of the information discussed. The 2024 version of the ABR NIS study guide is currently available at: https://www.theabr.org/wp-content/uploads/2024/01/2024-NIS-Study-Guide.pdf
What are key differences between de-identification and anonymization of medical images?
The key difference between de-identification and anonymization of medical images lies in the extent to which patient identity can be re-established:
De-identification: This process involves removing protected health information (PHI) from medical images and their metadata so that the patient's identity is not directly discernible. However, it is still possible for an approved entity to re-identify the patient using a key or other information. De-identification can be achieved through automated processes, including the removal of "burned-in" PHI (e.g., in ultrasound images) and ensuring images that reveal identifiable features, such as facial contours in CT or MRI scans, are appropriately managed.
Anonymization: This process goes a step further by removing all PHI and any other identifiable data from medical images, ensuring that the patient's identity cannot be re-established in the future. Anonymization ensures that the patient's identity remains permanently undiscoverable from the images and metadata.
While de-identification allows for the possibility of re-identifying the patient under certain conditions, anonymization guarantees that the patient's identity remains completely and irreversibly obscured.
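As a concrete illustration, here is a minimal sketch of automated header de-identification in Python using the pydicom library (assumed to be installed); the file paths, tag list, and replacement code are hypothetical and deliberately not exhaustive, and burned-in pixel PHI would require separate handling.

```python
# Minimal de-identification sketch (not exhaustive); paths and tags are hypothetical.
import pydicom

ds = pydicom.dcmread("incoming/slice_001.dcm")  # hypothetical input file

# Remove a few common PHI elements from the DICOM header.
for keyword in ["PatientName", "PatientBirthDate", "PatientAddress"]:
    if keyword in ds:
        delattr(ds, keyword)

# Replace the medical record number with a study code; an approved entity holding
# the key linking codes to patients could still re-identify the patient
# (de-identification), whereas anonymization would destroy that link entirely.
ds.PatientID = "STUDY-0042"

ds.save_as("deid/slice_001.dcm")  # hypothetical output path
```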
True or false? CT or MRI images of the face are considered protected health information (PHI).
True. Because details of a patient’s face, such as its contours, can be reconstructed from cross-sectional imaging, CT or MRI studies that include the face are, in fact, PHI.
Is it easier for automated de-identification software to de-identify a medical progress note or an imaging report in most cases?
Because radiology reports contain PHI at a lower rate, and in less standardized ways, than other medical documents, automated de-identification software often performs worse on an imaging report than on another medical record such as a progress note. De-identifying a radiology report therefore often requires manual review or highly specialized software.
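To make the challenge concrete, here is a deliberately crude, hypothetical sketch of rule-based text de-identification in Python; the sample report and regular expressions are invented, and simple patterns like these catch structured identifiers while missing exactly the free-text PHI that makes radiology reports hard to scrub automatically.

```python
# Crude rule-based scrubbing sketch; the sample report and patterns are invented.
import re

report = "Patient John Doe (MRN 1234567) seen 03/14/2024 for CT chest."

report = re.sub(r"\bMRN\s*\d+\b", "MRN [REDACTED]", report)   # record numbers
report = re.sub(r"\b\d{2}/\d{2}/\d{4}\b", "[DATE]", report)   # dates

print(report)  # the free-text name "John Doe" slips through these simple rules
```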
What is a ransomware attack and what steps can radiology practices take to restore operations in the event of a ransomware attack, per the ABR NIS study guide?
A ransomware attack involves bad actors hacking into systems, encrypting files so they cannot be accessed without a key, and then demanding a ransom in exchange for the encryption key. Per the ABR study guide, standard downtime procedures likely do not address the needs of a ransomware attack. To maintain business continuity, a paper-based workflow may need to be instituted while compromised systems are isolated to prevent further damage and attempts are made to recover data. If a paper-based workflow is temporarily implemented, the information captured on paper is merged with pre-existing data after the attack is resolved and system data is recovered.
What are examples of image post-processing commonly used in medical imaging?
Common examples of image post-processing include 3D techniques such as maximum intensity projections (MIPs) and multiplanar reformats (MPRs), as well as image segmentation and image registration.
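As a rough illustration of what MIPs and MPRs compute, the NumPy sketch below projects and re-slices a placeholder volume; real implementations also account for voxel spacing, interpolation, and slab thickness.

```python
# MIP and simple MPR sketch on placeholder data (voxel spacing ignored).
import numpy as np

volume = np.random.rand(120, 512, 512)  # placeholder volume ordered (slice, row, column)

# Maximum intensity projection: keep the brightest voxel along each ray.
axial_mip = volume.max(axis=0)

# Simple multiplanar reformats: re-slice the same volume in other planes.
coronal_slice = volume[:, 256, :]
sagittal_slice = volume[:, :, 256]
```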
What is image segmentation?
Image segmentation involves extracting or delineating a specific region of interest on an image, or similarly identifying images of interest within a larger image stack, typically for further specialized analysis of the segmented region(s). For example, segmentation in cardiac imaging can include isolating each ventricle or atrium for further analysis. This can be done manually or with the help of automated software.
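A minimal sketch of one simple automated approach, threshold-based segmentation, is shown below using NumPy and SciPy on placeholder data; the Hounsfield-unit threshold and the image itself are assumptions, and clinical segmentation tools are far more sophisticated.

```python
# Threshold-based segmentation sketch on placeholder data.
import numpy as np
from scipy import ndimage

ct_slice = np.random.randint(-1000, 2000, size=(512, 512))  # placeholder HU values

mask = ct_slice > 300                     # flag voxels above a bone-like threshold
labels, n_regions = ndimage.label(mask)   # group flagged voxels into connected regions

if n_regions > 0:
    sizes = ndimage.sum(mask, labels, index=range(1, n_regions + 1))
    region_of_interest = labels == (np.argmax(sizes) + 1)  # keep the largest region
```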
What is image registration?
Image registration is essentially overlaying or otherwise linking one image set onto another; fused PET/CT and SPECT/CT images are common examples. The primary advantage of image registration is that it allows two image sets to be compared directly. As part of registration, the images may be deformed; per the ABR NIS study guide, deformation can be rigid (translation, scaling), affine (shearing), or elastic (local warping of an image to better align a target image with a reference image). Elastic deformation can be particularly helpful for adjusting to factors like patient positioning, lung expansion, or other such variables when registering image sets.
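The sketch below illustrates the transform side of registration with SciPy on a placeholder image: a global linear transform (translation, scaling, shearing) versus a local, elastic-style warp. The specific matrix and displacement field are made-up assumptions, and real registration software estimates these parameters by optimizing a similarity metric between the target and reference images.

```python
# Registration transform sketch: global linear vs. local (elastic-style) warping.
import numpy as np
from scipy import ndimage

moving = np.random.rand(256, 256)  # placeholder image to align to a reference

# Global linear transform: scaling on the diagonal, shearing off the diagonal,
# and a translation supplied as the offset.
matrix = np.array([[1.05, 0.02],
                   [0.00, 0.95]])
globally_aligned = ndimage.affine_transform(moving, matrix, offset=(3.0, -2.0))

# Elastic-style warping: every pixel gets its own, smoothly varying shift.
rows, cols = np.indices(moving.shape).astype(float)
rows = rows + 2.0 * np.sin(cols / 40.0)   # hypothetical local deformation field
elastically_aligned = ndimage.map_coordinates(moving, [rows, cols], order=1)
```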
True or false? All forms of 3D post-processing can be performed on typical PACS software.
False. Most modern PACS can perform simple 3D post-processing tasks like MIPs, MPRs, and volume rendering. However, 3D post-processing techniques that require more sophistication, such as curved planar reformats (CPRs), cinematic rendering, and functional analysis, may require specialized software and sometimes even a team of experts, such as may be found in an advanced 3D imaging lab.
Machine learning and deep learning are both aspects of artificial intelligence. What are basic definitions of machine learning and deep learning, as specified in the ABR NIS study guide?
To paraphrase, machine learning is a form of artificial intelligence that allows a computer to learn on its own without being explicitly programmed to perform the task.
Deep learning utilizes layered neural networks with weighted connections for data analysis, and is particularly good at image analysis, as well as text analysis, which is highly pertinent for radiology. More specifically, deep learning models have neural networks with various layers: an input layer, multiple so-called hidden layers, and an output layer that makes predictions or other decisions.
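To make the layer terminology concrete, here is a minimal NumPy sketch of a forward pass through an input layer, two hidden layers, and an output layer; the layer sizes, random weights, and two-class output are placeholder assumptions, and no training is shown.

```python
# Forward pass through a tiny layered network; weights are random placeholders.
import numpy as np

rng = np.random.default_rng(0)
x = rng.random(64)                               # input layer: 64 image-derived features

W1, b1 = rng.random((32, 64)), rng.random(32)    # weighted connections to hidden layer 1
W2, b2 = rng.random((16, 32)), rng.random(16)    # hidden layer 2
W3, b3 = rng.random((2, 16)), rng.random(2)      # output layer: 2 classes

h1 = np.maximum(0, W1 @ x + b1)                  # hidden layers apply weights plus a nonlinearity
h2 = np.maximum(0, W2 @ h1 + b2)
logits = W3 @ h2 + b3
probabilities = np.exp(logits) / np.exp(logits).sum()   # output layer "prediction"
```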
What is the difference between supervised and unsupervised learning in artificial intelligence?
With supervised learning, a curated, labeled training data set is used to help an algorithm learn, and afterward the algorithm is tested on a novel, non-overlapping data set. For example, one could label all cancers on a mammography data set, use that set to train an algorithm, and then expose the algorithm to mammograms it has not yet seen to evaluate how well it performs.
With unsupervised learning, the training set is not labeled or categorized; rather, the algorithm is allowed to interpret or organize the data on its own. For example, an algorithm might be given a set of mammograms with the cancers unlabeled and left to figure out how to organize or analyze the images on its own. Before use, unsupervised learning models need to be validated to make sure they perform as hoped, and the ABR study guide states that unsupervised models may not perform as well as supervised models without additional training. In general, current methods for training AI models require radiologists to label images or text in advance, which can be time-consuming for busy professionals. Sometimes images are globally labeled by experts as normal or abnormal; in other cases images are segmented or otherwise labeled to highlight very specific features important for image interpretation, such as segmenting masses on a mammogram.
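The scikit-learn sketch below contrasts the two approaches on synthetic two-feature data standing in for image-derived measurements; the data, classifier, and clustering method are illustrative choices, not anything specified by the ABR guide.

```python
# Supervised vs. unsupervised learning sketch on synthetic placeholder data.
from sklearn.datasets import make_blobs
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

X, y = make_blobs(n_samples=200, centers=2, random_state=0)   # y holds the "labels"

# Supervised: train on labeled examples, then test on unseen, non-overlapping cases.
clf = LogisticRegression().fit(X[:150], y[:150])
print("held-out accuracy:", clf.score(X[150:], y[150:]))

# Unsupervised: no labels supplied; the algorithm groups the cases on its own,
# and the resulting clusters still require validation before any real use.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
```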
What is the pitfall in AI model training of “overfitting” the model?
An “overfit” AI model works very well on the specific data set or use case it was trained for but fails to perform as well on data that differs even slightly. For example, an AI algorithm may perform very well on images from the single institution it was trained at but less well on images from other organizations that differ slightly in acquisition technique or other factors.
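A quick way to see overfitting is to compare training and held-out performance, as in this scikit-learn sketch on synthetic, noisy data (all values are placeholders): an unconstrained decision tree essentially memorizes its training set yet does noticeably worse on data it has not seen.

```python
# Overfitting sketch: near-perfect training accuracy, weaker held-out accuracy.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=400, n_features=20, flip_y=0.2, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

tree = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
print("training accuracy:", tree.score(X_train, y_train))   # close to 1.0
print("held-out accuracy:", tree.score(X_test, y_test))     # noticeably lower
```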
What are some pitfalls of pre-processing images when training deep learning models per the ABR study guide?
Preprocessing images is a typical step in training deep learning models and includes operations such as image denoising and downsampling, which often lower image resolution. As a result, subtle findings may become less apparent, which may prove problematic for the deep learning algorithm unless corrected.
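The NumPy sketch below shows the resolution cost of one common preprocessing step, block-average downsampling, on a placeholder image; the image size and downsampling factor are assumptions.

```python
# Downsampling sketch: a 512 x 512 image reduced to 128 x 128 by block averaging.
import numpy as np

image = np.random.rand(512, 512)   # placeholder for an acquired image
factor = 4

small = image.reshape(512 // factor, factor, 512 // factor, factor).mean(axis=(1, 3))
print(image.shape, "->", small.shape)   # (512, 512) -> (128, 128); a 3-pixel detail
                                        # now spans less than one output pixel
```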
What is natural language processing?
Natural language processing is the analysis of human language data by artificial intelligence algorithms. In radiology, it can provide solutions that detect and report critical findings, flag appropriate follow-up recommendations, assess compliance with reporting requirements, or support radiology-pathology (rad-path) correlation. A process called embedding is sometimes used in natural language processing, in which words and phrases from text are converted to numeric representations so they can be fed into deep learning models.
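The sketch below illustrates the embedding step with a toy, hand-built vocabulary and random vectors; real NLP models learn these numeric representations from large text corpora rather than assigning them at random.

```python
# Embedding sketch: map report words to numeric vectors via a lookup table.
import numpy as np

vocab = {"no": 0, "acute": 1, "pulmonary": 2, "embolism": 3}   # toy vocabulary
embeddings = np.random.default_rng(0).random((len(vocab), 8))  # one 8-dim vector per word

sentence = "no acute pulmonary embolism"
vectors = np.stack([embeddings[vocab[word]] for word in sentence.split()])
print(vectors.shape)   # (4, 8): numeric input suitable for a deep learning model
```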
What is the risk posed by so-called “data drift” in AI models?
After an AI model is deployed, there is a risk that its performance may degrade, or drift, over time because of gradual changes in data processing; this has been termed data drift. It is one potential pitfall of deploying AI in a clinical environment that must be safeguarded against.
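One simple way to watch for this, sketched below with SciPy, is to compare the distribution of a monitored quantity (for example, the model's output scores) against a baseline captured at validation; the synthetic scores and alert threshold here are placeholders, and real monitoring programs track many such metrics.

```python
# Drift-monitoring sketch: compare current score distribution with a baseline.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
baseline_scores = rng.normal(0.30, 0.05, 1000)   # scores captured at validation time
current_scores = rng.normal(0.38, 0.05, 1000)    # scores this month (subtly shifted)

statistic, p_value = stats.ks_2samp(baseline_scores, current_scores)
if p_value < 0.01:
    print("Possible data drift: current scores no longer match the baseline")
```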
What is “automation bias” in artificial intelligence use?
Automation bias is the risk that a user such as a radiologist will assume the computer algorithm is always more correct than a human. For example, a radiologist might disagree with an AI finding on an imaging exam yet accept the AI interpretation anyway, falsely assuming it is more likely to be correct.
What is “statistical bias” in artificial intelligence use?
Statistical bias results when the conclusions drawn by an AI model do not represent actual population features. This includes sampling bias, wherein an AI algorithm is trained on a selected sample that does not represent the population for which the AI algorithm will ultimately be utilized.
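The short simulation below (all numbers invented) illustrates sampling bias: a cutoff tuned on a training sample skewed toward older patients behaves very differently in the broader population the algorithm is actually deployed to.

```python
# Sampling bias sketch with invented numbers.
import numpy as np

rng = np.random.default_rng(0)
train_ages = rng.normal(70, 8, 5000)        # skewed training sample (mostly older patients)
population_ages = rng.normal(50, 18, 5000)  # population the model will actually serve

cutoff = np.percentile(train_ages, 10)      # "unusually young" cutoff learned from the sample
print("flagged in training sample:", np.mean(train_ages < cutoff))   # about 10%
print("flagged in population:", np.mean(population_ages < cutoff))   # far higher
```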
What is “social bias” in artificial intelligence use?
Social bias results when an AI algorithm performs better or worse for certain populations of people, and it may pose especially notable risks for underrepresented populations. One example provided in the ABR study guide is a model that predicts better health outcomes in a patient population that uses fewer healthcare resources, falsely treating the lower utilization as a sign of better health when it actually reflects a lack of access to care rather than improved health.