
Deep learning

Deep learning-based analytics for medical image analysis

We develop novel deep learning algorithms that deliver state-of-the-art performance in medical image analytics. Our methods streamline workflows to improve clinical efficiency, and they make diagnostics and prognostics more accurate and patient-specific, improving patient care. One way we do this is by developing custom architectures for the problem at hand when simple transfer learning yields suboptimal performance.

Diagram Figure

Deep learning with deep convolutional neural networks. This example illustrates the analysis of the change in white matter (WM) integrity of youth football players over the course of a single season. The convolutional layers of the network learn a hierarchy of image features that allow prediction of the level of head impact exposure the player received that season (recorded with helmet-embedded biomechanical sensors). The high performance of the network confirms the strong association between head impact exposure and WM integrity measurable in diffusion MRI.
 

Magnetoencephalography (MEG) is a functional neuroimaging modality that records the magnetic fields induced by neuronal activity. It provides better temporal resolution than fMRI and is less affected by noise from intervening tissues than EEG. However, signals from muscle activity, such as eye blinks, often corrupt the analysis across the brain during source-space reconstruction.

We propose a data-driven, fully automated approach that extracts statistically independent MEG components and uses a convolutional neural network to discriminate artifactual components from neuronal ones, without tedious manual labeling.
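The two-stage decompose-then-classify idea can be sketched in miniature. Here, simulated "independent components" stand in for the output of an ICA decomposition, and a simple excess-kurtosis threshold stands in for the trained CNN classifier (eye-blink components are sparse and heavy-tailed, so they score high); the names, data, and threshold are illustrative assumptions, not our published implementation.

```python
import random

def kurtosis(x):
    # Excess kurtosis: heavy-tailed (blink-like) time courses score high,
    # Gaussian-like neuronal background scores near zero.
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x) / n
    m4 = sum((v - mean) ** 4 for v in x) / n
    return m4 / (var ** 2) - 3.0

def label_components(components, threshold=2.0):
    # Stage 2 stand-in: flag components whose time course is heavy-tailed.
    return ["artifact" if kurtosis(c) > threshold else "neuronal"
            for c in components]

random.seed(0)
# Toy "independent components": Gaussian background vs. sparse blink spikes.
neuronal = [random.gauss(0, 1) for _ in range(2000)]
blink = [random.gauss(0, 0.2) for _ in range(2000)]
for i in range(0, 2000, 400):
    blink[i] = 8.0  # occasional large deflection, like an eye blink

print(label_components([neuronal, blink]))  # prints ['neuronal', 'artifact']
```

In the real pipeline, the CNN replaces the threshold and operates on the component's spatial topography rather than a single summary statistic.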

Our custom, 10-layer Convolutional Neural Network (CNN) directly labels eye-blink artifacts. The spatial features the CNN learns are visualized using attention mapping, both to reveal what the network has learned and to bolster confidence in its ability to generalize to unseen data; the network automatically learned the same features a human expert uses. Compared with previous approaches, our method increases sensitivity in artifact detection from the previous best of 92.01% to 97.62%, while also improving specificity to 99.77%.
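Attention maps of this kind can be approximated model-agnostically by occlusion: mask each spatial region in turn and record how much the model's score drops. The sketch below is a minimal, hypothetical illustration of that idea; the scoring function is a toy stand-in, not our trained CNN.

```python
def occlusion_attention(image, score_fn, patch=2, baseline=0.0):
    # For each patch, replace it with a baseline value and measure the
    # drop in the model's score; large drops mark influential regions.
    h, w = len(image), len(image[0])
    full_score = score_fn(image)
    attention = [[0.0] * w for _ in range(h)]
    for r in range(0, h, patch):
        for c in range(0, w, patch):
            occluded = [row[:] for row in image]
            for i in range(r, min(r + patch, h)):
                for j in range(c, min(c + patch, w)):
                    occluded[i][j] = baseline
            drop = full_score - score_fn(occluded)
            for i in range(r, min(r + patch, h)):
                for j in range(c, min(c + patch, w)):
                    attention[i][j] = drop
    return attention

# Hypothetical "model": responds only to the top-left quadrant, mimicking
# a classifier that keys on frontal (eye-region) sensors for blinks.
def toy_score(img):
    return sum(img[i][j] for i in range(2) for j in range(2))

img = [[1.0] * 4 for _ in range(4)]
att = occlusion_attention(img, toy_score)
# Attention is high in the top-left quadrant and zero elsewhere.
```

Gradient-based attention methods give finer-grained maps, but occlusion has the advantage of requiring no access to the network's internals.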

Diagram Figure

Our custom, 10-layer Convolutional Neural Network (CNN) directly labels eye-blink artifacts.
 


Learning Entangled Decision Forests for Discriminative Atlases Enabling Rapid High-Resolution Image Interpretation

We develop core machine learning algorithms with improved learning efficiency, higher test accuracy, lower memory use, and faster prediction speeds. One way is by endowing traditional decision forests with deep (cumulative) learning capabilities through entanglement: the use of learned tree structure and intermediate probabilities from nodes at shallow forest levels to help train nodes at deeper levels.

Diagram Figure

Deep learning in the decision forest. This example illustrates entanglement, the use of learned tree structure and intermediate probabilities from nodes at shallow forest levels to help train nodes at deeper levels. Shown on the left are the raw image features; in the center is the maximum a posteriori (MAP) class entanglement feature, which is selected by the forest more and more often at deeper levels of tree growth, as shown on the right.
 

We entangle the binary tests applied at each node so that the test can depend on the result of tests applied earlier in the same tree.

One implementation of this is the use of class posteriors of shallower nodes as input features when training deeper nodes. Such MAPClass features learn a discriminative contextual atlas, which provides progressively greater discriminative power as the tree grows deeper.
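The contextual-atlas idea can be sketched on a toy 1-D "image": a shallow level estimates a per-pixel class posterior, and a deeper level uses the *neighbor's* posterior as an entangled, MAPClass-style feature. The data, labels, and hard-coded threshold below are illustrative assumptions, not the published implementation.

```python
# Toy 1-D image: intensity-1 pixels are ambiguous in isolation.
intensity = [2, 1, 0, 0, 2, 1, 0, 0, 1, 0, 0, 1]
# Ground truth for the ambiguous pixels: class "A" if they border the
# bright structure (intensity 2), class "B" otherwise.
labels = {1: "A", 5: "A", 8: "B", 11: "B"}

# Shallow level: posterior that a pixel belongs to the bright structure.
# (A trained shallow split would learn this threshold; hard-coded here.)
p_bright = [1.0 if v >= 2 else 0.0 for v in intensity]

def entangled_feature(i):
    # Contextual feature: maximum shallow-level posterior among the
    # pixel's spatial neighbors.
    left = p_bright[i - 1] if i > 0 else 0.0
    right = p_bright[i + 1] if i + 1 < len(intensity) else 0.0
    return max(left, right)

# Deep level: a split on the entangled feature separates A from B,
# which no threshold on raw intensity alone can do (both classes are 1).
pred = {i: ("A" if entangled_feature(i) > 0.5 else "B") for i in labels}
print(pred == labels)  # prints True
```

The same mechanism in the full method operates in 3-D, with many entangled features drawn from the intermediate posteriors of the growing forest.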