
Whitepaper
Addressing the Memory Bottleneck in AI Model Training for Healthcare
Executive Summary
Intel, Dell, and researchers at the University of Florida have
collaborated to help data scientists optimize the analysis of
healthcare data sets using artificial intelligence (AI).
Healthcare workloads, particularly in medical imaging, require
more memory usage than other AI workloads because they
often use higher resolution 3D images.
In this white paper, we demonstrate how Intel-optimized TensorFlow* on a Dell EMC PowerEdge
server equipped with 2nd Generation Intel Xeon Scalable Processors and large system memory
allows for the training of memory-intensive AI/deep-learning models in a scale-up server
configuration. We believe our work represents the first training of a deep neural network with a
large memory footprint (~1 TB) on a single-node server. We recommend this configuration to
users who wish to develop large, state-of-the-art AI models but are currently limited by memory.
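
As a concrete illustration of the software side of this configuration, the sketch below shows a minimal TensorFlow training run on synthetic 3D volumes. It is an assumption-laden stand-in, not the model trained in this paper: it assumes a recent stock TensorFlow 2.x build in which the TF_ENABLE_ONEDNN_OPTS environment variable requests the oneDNN (Deep Neural Network Library) CPU kernels, whereas the Intel-optimized distribution may enable them by default, and the tiny Conv3D network and 64-voxel volumes are hypothetical placeholders for the far larger 3D medical-imaging models discussed later.

```python
# Minimal sketch (assumptions: stock TensorFlow 2.x with oneDNN support;
# TF_ENABLE_ONEDNN_OPTS is the toggle used in recent builds -- the
# Intel-optimized distribution referenced in this paper may differ).
import os
os.environ.setdefault("TF_ENABLE_ONEDNN_OPTS", "1")  # request oneDNN kernels before importing TF

import numpy as np
import tensorflow as tf

# Tiny 3D-convolution model as a stand-in for the memory-hungry
# volumetric (3D medical imaging) networks discussed in the paper.
def build_model(input_shape=(64, 64, 64, 1)):
    inputs = tf.keras.Input(shape=input_shape)
    x = tf.keras.layers.Conv3D(16, 3, padding="same", activation="relu")(inputs)
    x = tf.keras.layers.MaxPool3D()(x)
    x = tf.keras.layers.Conv3D(32, 3, padding="same", activation="relu")(x)
    x = tf.keras.layers.GlobalAveragePooling3D()(x)
    outputs = tf.keras.layers.Dense(1, activation="sigmoid")(x)
    return tf.keras.Model(inputs, outputs)

model = build_model()
model.compile(optimizer="adam", loss="binary_crossentropy")

# Synthetic volumes: input resolution and batch size are the main
# levers on the training-time memory footprint.
x_train = np.random.rand(8, 64, 64, 64, 1).astype("float32")
y_train = np.random.randint(0, 2, size=(8, 1)).astype("float32")

model.fit(x_train, y_train, batch_size=4, epochs=1)
```

In practice, the memory footprint is driven by the input resolution, the batch size, and the depth of the network; scaling these toward full-resolution medical volumes is what pushes training toward the near-terabyte footprint reported here and motivates the large-memory, scale-up server configuration.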
Key Takeaways
- Near-terabyte memory footprint in 3D model training
- 3.4x speedup with Deep Neural Network Library optimizations
