Whitening-Aided Deep Learning for Radar-Based Human Activity Recognition

October 28, 2023

A new way to make radar “see” human movement accurately, privately, and intelligently.


Summary

This work, published in Sensors (MDPI, 2023), introduces a deep learning framework that enhances radar-based human activity recognition (HAR), a privacy-preserving alternative to camera-based systems.
We developed whitening-aided convolutional neural networks (CNNs) that outperform conventional architectures by decorrelating feature activations, leading to higher accuracy and better generalization.

Traditional CNNs rely on Batch Normalization (BN), which standardizes activations but does not remove correlations between features. Our whitening-based approach goes a step further, decorrelating the activations and thereby improving the representation quality and interpretability of radar-based learning systems.
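The gap between the two is easy to show numerically. In this small NumPy sketch (toy data, not the paper's code), per-feature standardization, which is what BN does on a batch, leaves the cross-feature covariance intact, while a ZCA-style whitening transform drives it to zero:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy batch of correlated activations: the second feature is largely
# a copy of the first (hypothetical data for illustration).
x = rng.normal(size=(1000, 2))
x[:, 1] = 0.9 * x[:, 0] + 0.1 * x[:, 1]

# Batch Normalization: per-feature standardization only.
bn = (x - x.mean(axis=0)) / x.std(axis=0)

# Whitening: a ZCA transform removes cross-feature correlation as well.
cov = np.cov(x, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)
zca = eigvecs @ np.diag(eigvals ** -0.5) @ eigvecs.T
wh = (x - x.mean(axis=0)) @ zca.T

print(np.round(np.cov(bn, rowvar=False), 2))  # off-diagonal stays large
print(np.round(np.cov(wh, rowvar=False), 2))  # identity matrix
```

The first printed covariance keeps a near-unit off-diagonal term; the second is the identity, which is exactly the decorrelation property whitening adds on top of BN.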


The Idea Behind Whitening

Radar captures micro-Doppler signatures: fine-grained frequency shifts caused by body movements. Each activity (walking, bending, sitting, or falling) produces a unique time–frequency pattern.

We modified the normalization process inside CNNs using two methods:

  1. IterNorm Whitening: Efficiently whitens (decorrelates) activations using Newton’s method without expensive matrix decomposition.
  2. Whitening + Rotation: Adds a rotational alignment module that orients latent-space axes along specific activity classes, enabling interpretable latent representations.
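The Newton-iteration idea behind IterNorm can be sketched in NumPy as follows. This is an illustrative approximation of the inverse square root of the covariance, not the paper's implementation; the iteration count and epsilon are assumptions:

```python
import numpy as np

def iternorm_whiten(x, n_iter=8, eps=1e-5):
    """Whiten activations using Newton's iteration for the inverse
    square root of the covariance (the idea behind IterNorm)."""
    xc = x - x.mean(axis=0)
    d = x.shape[1]
    sigma = xc.T @ xc / x.shape[0] + eps * np.eye(d)
    trace = np.trace(sigma)
    sigma_n = sigma / trace            # normalize so the iteration converges
    p = np.eye(d)
    for _ in range(n_iter):            # Newton-Schulz updates, no eigendecomposition
        p = 0.5 * (3.0 * p - p @ p @ p @ sigma_n)
    w = p / np.sqrt(trace)             # approximates sigma^(-1/2)
    return xc @ w.T

rng = np.random.default_rng(1)
a = rng.normal(size=(2000, 4))
a[:, 1] += 0.8 * a[:, 0]               # inject correlation
white = iternorm_whiten(a)
print(np.round(white.T @ white / len(white), 2))  # approximately identity
```

The trace normalization keeps every eigenvalue of sigma_n in (0, 1], which guarantees the iteration converges using only matrix multiplications; that is the efficiency argument behind IterNorm.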

This lets the network represent motion features in a more independent and structured way.


Dataset and Model

We used the University of Glasgow Radar Activity Dataset, which includes six everyday activities:

  • Walking
  • Sitting down
  • Standing up
  • Drinking water
  • Bending to pick up an object
  • Falling

Each measurement was captured using a 5.8 GHz FMCW radar and converted into a 75×75 grayscale micro-Doppler spectrogram.
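As a rough illustration of how such a spectrogram arises, the sketch below applies a short-time Fourier transform (STFT) to a synthetic signal whose Doppler frequency swings sinusoidally, loosely mimicking a limb's periodic motion. The sampling rate, window, and hop sizes are assumptions, and the final resizing to 75×75 is omitted:

```python
import numpy as np

fs = 1000                                   # assumed slow-time sampling rate (Hz)
t = np.arange(0, 5, 1 / fs)

# Synthetic "activity": a sinusoidally varying Doppler frequency,
# not real radar data.
doppler = 80 * np.sin(2 * np.pi * 0.5 * t)
signal = np.exp(2j * np.pi * np.cumsum(doppler) / fs)

# STFT: windowed frames, FFT per frame, zero frequency centered.
win, hop = 128, 64
frames = [signal[i:i + win] * np.hanning(win)
          for i in range(0, len(signal) - win, hop)]
stft = np.fft.fftshift(np.fft.fft(frames, axis=1), axes=1)
spectrogram = 20 * np.log10(np.abs(stft).T + 1e-6)  # dB magnitude

print(spectrogram.shape)  # → (128, 77): (Doppler bins, time frames)
```

The resulting time–frequency image is what the CNN consumes, one grayscale spectrogram per measurement.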


Figure: Distinct motion patterns for each activity; time on the x-axis, Doppler frequency on the y-axis.

Our 3-layer CNN consisted of convolution, max-pooling, and normalization blocks (using BN or whitening), followed by a fully connected classifier.
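Assuming illustrative 3×3 "same"-padded convolutions and 2×2 max-pooling with stride 2 (the paper's exact kernel sizes may differ), the spatial size of a 75×75 input shrinks across the three blocks as follows:

```python
def conv_out(size, kernel, stride=1, pad=0):
    """Standard output-size formula for a convolution or pooling layer."""
    return (size + 2 * pad - kernel) // stride + 1

size = 75
for layer in range(3):
    size = conv_out(size, kernel=3, pad=1)      # 3x3 conv, 'same' padding
    size = conv_out(size, kernel=2, stride=2)   # 2x2 max-pool, stride 2
    print(f"after block {layer + 1}: {size}x{size}")  # 37x37, 18x18, 9x9
```

The 9×9 map from the last block is what gets flattened and fed to the fully connected classifier in this hypothetical configuration.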


Figure: Each normalization block can use BatchNorm, IterNorm Whitening, or IterNorm + Rotation.

Experimental Setup

  • Loss function: Cross-entropy
  • Optimizer: Stochastic Gradient Descent (SGD)
  • Batch size: 10
  • Training epochs: 30 (IterNorm), +5 fine-tuning (Rotation model)
  • Evaluation: Mean accuracy over 30 randomized train/test splits at 20/80, 50/50, and 80/20 train/test ratios
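The split-and-average protocol can be sketched as below. The nearest-centroid classifier is only a lightweight stand-in for the CNN so that the loop stays self-contained, and the data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(42)

def mean_accuracy(features, labels, train_frac, n_splits=30):
    """Average test accuracy of a nearest-centroid stand-in classifier
    over randomized train/test splits (mirrors the evaluation protocol)."""
    classes = np.unique(labels)
    accs = []
    for _ in range(n_splits):
        idx = rng.permutation(len(labels))
        cut = int(train_frac * len(labels))
        tr, te = idx[:cut], idx[cut:]
        # One centroid per class from the training portion.
        centroids = np.stack([features[tr][labels[tr] == c].mean(axis=0)
                              for c in classes])
        dists = np.linalg.norm(features[te][:, None] - centroids, axis=2)
        preds = classes[dists.argmin(axis=1)]
        accs.append((preds == labels[te]).mean())
    return float(np.mean(accs))

# Toy 6-class data standing in for the six activities.
labels = np.repeat(np.arange(6), 40)
features = rng.normal(size=(240, 8)) + labels[:, None] * 2.0
print(mean_accuracy(features, labels, train_frac=0.5))
```

Averaging over 30 random splits, as the paper does, damps the variance that any single split would introduce at small dataset sizes.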

Results

| Model Type | 50/50 Train/Test Accuracy | Notes |
| --- | --- | --- |
| Baseline CNN (BN) | ~82.5% | Standard normalization only |
| Whitening-Aided Model 1 (IterNorm) | ~91.6% | Stronger class separation |
| Whitening-Aided Model 2 (IterNorm + Rotation) | 93–95% | Best overall performance |

Even when tested on unseen subjects, whitening-aided models maintained higher accuracy (92.6%) versus the baseline (85.2%).


Figure: Replacing BatchNorm with whitening greatly reduces class confusion, especially between similar motions.

Understanding the Latent Space

One of the most fascinating aspects of Whitening + Rotation is latent alignment: each class becomes strongly associated with a unique direction in feature space.

In deeper layers, activations for “falling” and “walking” are clearly separated, making the model more explainable.
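One way to picture the rotation module: find an orthogonal matrix that best aligns the per-class mean directions with the canonical latent axes, one axis per class. The sketch below solves this as an orthogonal Procrustes problem via SVD; it is an illustrative stand-in, not the paper's rotation layer:

```python
import numpy as np

rng = np.random.default_rng(7)

# Six unit-norm class-mean directions in a 6-D latent space
# (random stand-ins for the six activity classes).
means = rng.normal(size=(6, 6))
means /= np.linalg.norm(means, axis=1, keepdims=True)

# Orthogonal Procrustes: the rotation r minimizing ||means @ r - I||_F
# is u @ vt from the SVD of means.T (since the target is the identity).
u, _, vt = np.linalg.svd(means.T)
r = u @ vt
aligned = means @ r

# Diagonal entry c is the cosine between class c's rotated mean
# direction and latent axis c.
print(np.round(np.diag(aligned), 2))
```

After such an alignment, reading off which latent axis fires tells you which class the network thinks it is seeing, which is the interpretability gain claimed for the rotation module.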


Figure: Each row shows the most activated regions for specific activity classes across CNN layers.

Key Takeaways

  • Whitening outperforms BatchNorm by removing correlations between activations.
  • Rotational alignment enhances interpretability: each class is aligned along a distinct feature direction.
  • Even a single whitening layer in a CNN can improve performance, with deeper layers yielding the greatest gains.
  • Applicable beyond human activity recognition, to healthcare radar, contactless monitoring, and smart home safety systems.

Citation

Sadeghi Adl, Z.; Ahmad, F.
“Whitening-Aided Learning from Radar Micro-Doppler Signatures for Human Activity Recognition.”
Sensors, 23(17), 7486 (2023).
👉 Read the full paper on MDPI