Physics for the deep learning computer vision expert

Written by judywawira | Published 2018/02/07
Tech Story Tags: machine-learning | physics | radiology | deep-learning | data-science


Written by Judy Gichoya and Alexandre Cadrin-Chênevert

Jeremy Howard recently directed a post to us from one of the fast.ai students asking about Hounsfield unit conversion for MRI.

Question

This post gives an overview of the physics of image formation for MRI and CT scans, to help understand the concept of intensities and their mapping.

For the impatient reader, Alexandre Cadrin-Chênevert answered this question

If this is of interest, then read on…

What is a modality?

Radiology relies on various types of cameras (or modalities) that work differently to capture images from patients. These include ultrasound, Computed Tomography (CT), Magnetic Resonance Imaging (MRI), PET/CT, and plain X-ray. Think of the modalities as the different camera types of medical imaging. In the pictures below, you can see the appearance of the liver across different cameras (modalities).

Liver images acquired through ultrasound

Images of the liver / upper abdomen acquired from an MRI

CT scan images of the upper abdomen including liver

How are MRI images formed?

MRI physics can be summarized as signal generation, image formation, and sequences.

A: Signal Generation

Our bodies are made largely of water (about 72%). Water is composed of two hydrogen atoms and one oxygen atom (H2O). This translates to a very large number of hydrogen atoms in our bodies. Hydrogen is the chemical element with atomic number 1, and its nucleus is a single proton (H+).

Water composition in the human body

The MRI machine is a very strong magnet (the unit tesla is used to express the strength of the main magnet). When you are sitting at the core of the MRI machine, for example for a head MRI, most of the hydrogen atoms in your body align with the direction of the main magnetic field (B0). This is called longitudinal magnetization.

At rest, all your hydrogen atoms are spinning on their axes — watch the video below to learn about precession.

Since we know the main magnetic field strength (B0), we can calculate the Larmor frequency, which is used to deliver a radiofrequency signal to the hydrogen atoms that are in precession.

Larmor frequency

In the figure above, the H+ protons precess at a frequency of 64 MHz, evenly distributed across the magnetic field. Knowing the precession frequencies allows us to focus on a section of the body, e.g. the abdomen, by shifting the magnetic field so we can apply a signal to a specific group of atoms rotating at a predetermined frequency.
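As a quick worked check of that number (assuming a typical 1.5 T scanner, which is what the 64 MHz figure implies), the standard Larmor relationship gives:

```latex
% Larmor precession frequency for hydrogen, assuming a 1.5 T main magnet
f_0 = \frac{\gamma}{2\pi} B_0
    \approx 42.58~\mathrm{MHz/T} \times 1.5~\mathrm{T}
    \approx 64~\mathrm{MHz}
```

A 3 T magnet would roughly double this, to about 128 MHz.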

In addition to the main magnetic field, the MR machine applies a radiofrequency (RF) pulse perpendicular (90 degrees) to the main magnetic field, which flips the hydrogen atoms 90 degrees relative to the main magnet. I like to think of this as tuning in a radio, where you are always looking for a specific frequency. This is called transverse magnetization. The hydrogen atoms continue their precession, but also lose energy and return to a lower energy state (in the direction of the main magnet).

The above steps are summarized below.

Recap of signal generation

As the hydrogen atoms lose energy to realign themselves with the direction of the main magnet (longitudinal relaxation), the signal they generate can be captured and plotted as the curve shown below. This curve is used to determine the T1 time.

T1 time — the time required for protons to recover 63% of their longitudinal magnetization.

T1 — time required for protons to recover 63% of their longitudinal magnetization

Different body tissues have different T1 times, and hence when you look at an MRI you can identify fat (short T1) versus CSF (long T1).

Difference in T1 curves across various tissues
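To make the 63% figure and the curves above concrete, here is a minimal sketch of the longitudinal recovery equation Mz(t) = M0 (1 - exp(-t/T1)). The T1 values used for fat, white matter, and CSF are rough illustrative approximations at 1.5 T, not reference values.

```python
import numpy as np

def longitudinal_recovery(t_ms, t1_ms, m0=1.0):
    """Longitudinal magnetization recovered at time t after a 90-degree pulse."""
    return m0 * (1.0 - np.exp(-t_ms / t1_ms))

# Rough, illustrative T1 values at 1.5 T (approximate, for demonstration only)
t1_values_ms = {"fat": 250.0, "white matter": 780.0, "CSF": 2400.0}

for tissue, t1 in t1_values_ms.items():
    recovered = longitudinal_recovery(t1, t1)  # evaluate the curve at t = T1
    print(f"{tissue}: {recovered:.0%} of M0 recovered at t = T1 ({t1:.0f} ms)")

# Each tissue prints ~63%, but the time it takes to get there differs:
# fat recovers quickly (short T1), CSF slowly (long T1), which is what
# separates them on a T1-weighted image.
```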

Another phenomenon occurring after the 90-degree RF pulse is summarized below. The hydrogen atoms are in phase at the beginning of the 90-degree RF pulse, but they undergo spin-spin interactions and get out of phase. If you plot this curve of free induction decay, you get a T2* curve.

T2* curve

T2 time — the time required for protons to lose 63% of their transverse magnetization.
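For completeness, transverse relaxation is the mirror-image exponential decay; at t = T2, about 37% of the transverse signal remains, i.e. 63% has been lost:

```latex
% Transverse (T2) decay of the signal after the 90-degree pulse
M_{xy}(t) = M_0 \, e^{-t/T_2},
\qquad M_{xy}(T_2) = M_0 \, e^{-1} \approx 0.37\, M_0
```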

Now that we have explained the T1 and T2 signals, we will move on to an example of a meningioma to help understand MR signals. Please note that MR physics is a wide topic, and we have not described concepts like diffusion, echo, gradient, and TOF sequences, as well as artifacts. Contrast works by shifting the T1 curve to the left (it shortens T1).

Meningioma on MR

In the above image, the left image is a T1-weighted image, and the meningioma is difficult to identify. After administration of IV contrast (gadolinium), the T1 curve of the meningioma shifts to the left and it is seen as an enhancing lesion on the right image.

CT physics — Hounsfield units (HU)

CT images are not formed using the hydrogen magic of MR. Instead, they use X-rays, a form of radiation focused on the part of the body being imaged. Different parts of the body attenuate (weaken) the X-ray beam at different rates. To standardize these different attenuation values, the attenuation coefficient, a measure of how easily a beam penetrates a material, is mapped onto the Hounsfield scale. The scale is defined relative to water, whose HU value is set to zero.

HU calculation
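For reference, the standard Hounsfield unit definition behind that calculation is:

```latex
% Hounsfield unit definition: water is fixed at 0 HU and air at -1000 HU
\mathrm{HU} = 1000 \times \frac{\mu - \mu_{\mathrm{water}}}{\mu_{\mathrm{water}} - \mu_{\mathrm{air}}}
```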

Therefore, by measuring a region of interest, you can calculate the HU of a lesion and determine what it is — for example, a fat-containing lesion or a fluid-containing lesion such as a cyst.

http://www.odec.ca/projects/2007/kimj7j2/index_files/Page1674.htm
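Tying this back to the original fast.ai question, CT DICOM files typically store raw pixel values that need to be rescaled to HU before they are windowed or fed to a model. Below is a minimal sketch using pydicom; the file path and the soft-tissue window values (center 40, width 400) are illustrative choices, not prescriptions.

```python
import numpy as np
import pydicom

def dicom_to_hu(path):
    """Load one CT slice and convert its raw pixel values to Hounsfield units."""
    ds = pydicom.dcmread(path)
    pixels = ds.pixel_array.astype(np.float32)
    # RescaleSlope / RescaleIntercept are standard DICOM tags for CT
    slope = float(getattr(ds, "RescaleSlope", 1.0))
    intercept = float(getattr(ds, "RescaleIntercept", 0.0))
    return pixels * slope + intercept

def window(hu, center, width):
    """Clip and scale an HU image to [0, 1] for viewing or as network input."""
    low, high = center - width / 2, center + width / 2
    return np.clip((hu - low) / (high - low), 0.0, 1.0)

# Illustrative usage with a hypothetical file and a soft-tissue window:
# hu = dicom_to_hu("slice_001.dcm")
# img = window(hu, center=40, width=400)
```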

Tying it together (Deep Learning)

Image segmentation is the process of drawing contours of an object to delineate its boundaries. In medical imaging, segmentation allows surface or volumetric quantification of a lesion or an anatomic structure. U-Net is a specialized deep learning model architecture that allows automatic segmentation. To train this kind of model, you need to show multiple images iteratively to the model with the associated segmented areas, often drawn manually by radiologists.

U-Net: Convolutional Networks for Biomedical Image Segmentation

From: U-Net: Convolutional Networks for Biomedical Image Segmentation (https://arxiv.org/pdf/1505.04597.pdf)

U-Net is designed intrinsically to perform very well with a small number of training cases. The architecture progressively encodes the initial image into a numerically squeezed representation, literally at the bottom of the U. Then this bottom representation is decoded symmetrically to generate the automatically segmented area, also defined as a mask, as the final output. Training is optimized to minimize the difference between the proposed segmented area and the manually segmented ground-truth area. The ability to train efficiently with a very small number of cases is particularly useful in medical imaging, where expert manual segmentation is very costly. For meningioma segmentation, this technique could intuitively be tried on a post-gadolinium MRI sequence, which is the sequence with the highest signal-to-noise ratio for this pathology (i.e. meningiomas typically show avid enhancement, and bright signal, relative to the background).
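To make the encode/decode idea concrete, here is a deliberately tiny sketch in PyTorch of a U-Net-style network with a single skip connection, trained with a Dice-style overlap loss. It is not the architecture from the paper (a real U-Net uses several encoder/decoder levels and many more channels), and all layer sizes here are arbitrary choices for illustration.

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    """Two 3x3 convolutions with ReLU, the basic building block of the U shape."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1), nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    """One encoder level, a bottleneck, and one decoder level with a skip connection."""
    def __init__(self, in_ch=1, out_ch=1):
        super().__init__()
        self.enc = conv_block(in_ch, 16)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = conv_block(16, 32)
        self.up = nn.ConvTranspose2d(32, 16, kernel_size=2, stride=2)
        self.dec = conv_block(32, 16)  # 32 = 16 (skip) + 16 (upsampled)
        self.head = nn.Conv2d(16, out_ch, kernel_size=1)

    def forward(self, x):
        e = self.enc(x)                                   # encoder features, kept for the skip
        b = self.bottleneck(self.pool(e))                 # squeezed representation at the bottom of the U
        d = self.dec(torch.cat([self.up(b), e], dim=1))   # decode using the skip connection
        return torch.sigmoid(self.head(d))                # per-pixel probability of "lesion"

def dice_loss(pred, target, eps=1e-6):
    """1 - Dice overlap between the predicted mask and the manually drawn mask."""
    inter = (pred * target).sum()
    return 1.0 - (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

# Illustrative training step on one (image, mask) pair of shape (1, 1, H, W):
# model = TinyUNet(); opt = torch.optim.Adam(model.parameters(), lr=1e-3)
# loss = dice_loss(model(image), mask); loss.backward(); opt.step()
```

The skip connection is what lets the decoder recover sharp boundaries that would otherwise be lost in the squeezed bottom representation.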

Hopefully this clarifies the idea of signals across CT and MR and helps you on your deep learning path. Consider joining our community that intersects radiology and imaging sciences with deep learning scientists here: https://tribe.radai.club.

We discuss deep learning at our monthly journal club, archived here: https://youtu.be/xoUpKjxbeC0. Our next journal club is on February 22nd at 8 pm EST, with a presentation from Timnit Gebru on “Using deep learning and Google Street View to estimate the demographic makeup of neighborhoods across the United States” — https://register.gotowebinar.com/register/8696551324404512003

Thanks and references

  1. Numerous images were obtained from the physics lectures of one of my best teachers at the Indiana University Radiology Department, Dr. Isaac Wu
  2. Read more about MR physics here — MRI made easy
