
NIH Combats Cancer Pain Using Artificial Intelligence

The Challenge: Transforming Scientific Understanding of Cancer Pain

As part of its public-health mission, NIH identified a need to better assess and manage the chronic pain that cancer patients experience at high rates. Whether it’s achieving the next cancer-treatment discovery, orchestrating limited physical and technical resources, or helping caregivers calibrate analgesics, the ability to use data to predict pain intensity creates a foundation for informed decisions that ultimately help cancer patients and their families.

Several barriers have limited the capacity of health leaders to assess pain. While useful, the traditional clinical method is subjective, relying on a simple numerical-scale rating that fails to capture the broad spectrum of emotional and physiological pain. Existing machine-learning studies are also limited, for the following reasons:

  • Datasets typically contain only facial images whose signals reveal acute musculoskeletal pain, such as shoulder pain, which is time-limited and localized.
  • The technicians who manually label traditional machine-learning datasets may under- or overestimate patients’ pain levels, injecting their own viewer bias into the dataset.
  • Research suggests that cancer patients tend to be particularly stoic, so relying solely on their facial expressions may not capture actual pain intensity.
  • Existing facial-image datasets fail to represent the gender and racial diversity of patients throughout the nation.

A new way to assess and predict cancer patients’ pain would help NIH and its stakeholders across the United States support pain-management strategies that improve patients’ quality of life by strengthening their psychological and functional well-being.

The Approach: Moving Faster from Data to Prediction and Decision

To capture comprehensive data, NIH sought a flexible alternative to conventional assessments of facial images. The more complete the data NIH decision makers could access about cancer patients’ pain, the better equipped they would be to choose the right strategies for improving health outcomes over time.

Helping to realize this goal, Booz Allen collaborated with NIH to develop data-processing pipelines that extract multimodal pain data and to apply AI techniques for cancer pain prediction and analysis as a key part of the Intelligent Sight and Sound Project. This project is a first-of-its-kind effort to capture, classify, and use precise, detailed information about chronic cancer pain on a large scale. Our AI researchers worked alongside NIH experts from the start, helping to draft the Institutional Review Board protocol that led to this active clinical trial.

Through our full-scale approach, many different types of explanatory data would come from multiple sources, extracted from videos that patients would record at home via smartphone or that caregivers would capture in clinics. In the videos, patients have the opportunity to report directly how pain interferes with their daily experiences of emotion and activity. As a result, we could extract video frames in the visible spectrum; facial images from face-detection models; facial landmarks; audio files and features; acoustic spectrograms and statistics; self-reported psychological and wellness scores from an adapted Brief Pain Inventory; text transcriptions; and more. Over 100 cancer patients would take part.
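
The project’s actual extraction pipeline is NIH’s and Booz Allen’s own; the minimal sketch below only illustrates, under assumed tooling (OpenCV for frame and face extraction, librosa for audio features), how a single patient-recorded video and its audio track might be split into the kinds of visual and acoustic signals described above. File paths, sampling rates, and the choice of face detector are assumptions, not the project’s implementation.

```python
# Illustrative sketch only: one possible way to turn a patient video into
# visual and acoustic signals. Paths, parameters, and models are assumptions.
import cv2        # frame extraction and face detection
import librosa    # audio features and spectrograms

def extract_video_frames(video_path, every_n=30):
    """Sample frames from a smartphone video at a fixed stride."""
    cap = cv2.VideoCapture(video_path)
    frames, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % every_n == 0:
            frames.append(frame)
        idx += 1
    cap.release()
    return frames

def detect_faces(frames):
    """Crop faces with OpenCV's bundled Haar cascade (a stand-in face detector)."""
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    crops = []
    for frame in frames:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for (x, y, w, h) in cascade.detectMultiScale(gray, 1.1, 5):
            crops.append(frame[y:y + h, x:x + w])
    return crops

def extract_audio_features(audio_path, sr=16000):
    """Compute a mel spectrogram and summary statistics from the audio track."""
    y, sr = librosa.load(audio_path, sr=sr)
    mel = librosa.feature.melspectrogram(y=y, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    return {"mel_db": librosa.power_to_db(mel),
            "mfcc_mean": mfcc.mean(axis=1),
            "mfcc_std": mfcc.std(axis=1)}
```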

The Solution: Centering Patient Experiences to Improve Care

The Intelligent Sight and Sound Project has produced the first multimodal dataset developed in the United States for the prediction of chronic cancer pain. Through this work, we’re helping NIH pioneer research to accelerate AI-based pain models that shed light on diverse, real-life clinical data about patients’ lived experiences of illness, diagnosis, and treatment.

In creating this solution with NIH, our team built seven baseline machine-learning models using conventional and fusion neural networks. The best-performing multimodal tool for chronic pain detection fuses different data types across multiple signals instead of relying on facial images alone. To avoid disruptions in data gathering during the COVID-19 pandemic, Booz Allen created a smartphone app that enabled NIH to continue collecting patient data. Patients submitted their medical narratives through self-recorded videos, from which we extracted multiple signals.
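
The seven baseline models and the best-performing fusion architecture are specific to the project and are not reproduced here; the short PyTorch sketch below only illustrates the general idea of fusing separately encoded modalities (for example, facial-image features and audio features) before a shared pain-intensity head. Layer sizes, feature dimensions, and the number of pain classes are hypothetical.

```python
# Minimal late-fusion sketch in PyTorch (illustrative only; dimensions and
# the pain-intensity head are assumptions, not the project's architecture).
import torch
import torch.nn as nn

class LateFusionPainModel(nn.Module):
    def __init__(self, image_dim=512, audio_dim=128, hidden=256, num_levels=4):
        super().__init__()
        # Per-modality encoders operate on precomputed feature vectors
        self.image_encoder = nn.Sequential(nn.Linear(image_dim, hidden), nn.ReLU())
        self.audio_encoder = nn.Sequential(nn.Linear(audio_dim, hidden), nn.ReLU())
        # Fusion head combines the two modality embeddings
        self.head = nn.Sequential(
            nn.Linear(2 * hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, num_levels))  # e.g., discrete pain-intensity classes

    def forward(self, image_feats, audio_feats):
        fused = torch.cat([self.image_encoder(image_feats),
                           self.audio_encoder(audio_feats)], dim=-1)
        return self.head(fused)

# Example forward pass on random features for a batch of 8 clips
model = LateFusionPainModel()
logits = model(torch.randn(8, 512), torch.randn(8, 128))
```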

Today our team continues to train and update models as new patients enter the study, extending the work to integrate text and thermal imagery captured in clinic. The data now includes more than 500 smartphone videos and nearly 200,000 video frames, and it will continue to grow over time. After NIH recruits all patients for the trial and we update the models for the entire set of subjects, the models will run dynamically on full-motion video.

NIH has implemented controls governing the release of the dataset to safeguard personally identifiable information within facial images. As the nation’s largest repository of cancer pain information, the dataset will be open to AI researchers on a case-by-case basis for specific medical AI research projects. In the future, our research team will investigate multispectral generation of facial pain videos as well as a new way to secure sensitive medical data by applying federated learning to model training.
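
The project’s federated-learning approach is a future research direction and has not been specified here; the sketch below only shows the standard federated-averaging idea it refers to, in which model weights, rather than raw patient data, leave each participating site. The sites, data, and update rule are synthetic placeholders for illustration.

```python
# Illustrative federated-averaging sketch: each site trains locally and shares
# only model weights; raw patient data never leaves the site. All details are
# assumptions for illustration, not the project's planned design.
import numpy as np

def local_update(weights, site_data, lr=0.01):
    """Placeholder local training: one gradient step of a linear model."""
    X, y = site_data
    grad = X.T @ (X @ weights - y) / len(y)
    return weights - lr * grad

def federated_average(weight_list, sizes):
    """Weight each site's model by its number of local samples (FedAvg)."""
    total = sum(sizes)
    return sum(w * (n / total) for w, n in zip(weight_list, sizes))

# Two hypothetical sites with synthetic data
rng = np.random.default_rng(0)
sites = [(rng.normal(size=(50, 4)), rng.normal(size=50)),
         (rng.normal(size=(80, 4)), rng.normal(size=80))]
global_w = np.zeros(4)
for _ in range(10):  # federated rounds
    local_ws = [local_update(global_w, d) for d in sites]
    global_w = federated_average(local_ws, [len(d[1]) for d in sites])
```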

To learn more about how Booz Allen is helping federal agencies harness AI to turn data into powerful insights, visit BoozAllen.com/AI.
