
RSNA 2024: What I Brought, What I Learned
First RSNA. Five days of walking, listening, and explaining my work. These are my notes so I don’t forget what actually helped.
Why I went
I came to learn how people really use imaging software, not how they talk about it. My research lives where quantitative MRI meets practical reporting. This trip was about three things:
- Present the poster and get blunt feedback.
- Watch how viewers and reporting tools behave with real cases.
- Bring home a short list of changes that will make our pipeline easier to trust and easier to use.
My RSNA 2024 poster
Title: Deep Learning-Driven Prediction of Pediatric Spinal Cord Injury Severity Using Comprehensive Structural MRI Analysis
Authors: Zahra Sadeghi-Adl, Sara Naghizadehkashani, Laura Krisa, Devon Middleton, Mahdi Alizadeh, Adam Flanders, Scott H. Faro, Feroze B. Mohamed
Affiliations: Temple University; Thomas Jefferson University (Radiology, Physical Therapy)
Study design. We enrolled 61 pediatric participants (20 with spinal cord injury (SCI), 41 typically developing (TD) controls), ages 6-18. Everyone had 3T spinal MRI with high-resolution T2-weighted imaging covering C1-T11. SCI participants also received full ISNCSCI assessments, which yield ASIA Impairment Scale (AIS) categories.
Measurements. Using the Spinal Cord Toolbox (SCT), I extracted per-level cross-sectional area (CSA) and anteroposterior (AP) / right-left (RL) widths.
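For anyone retracing this step, here is a minimal sketch of the SCT morphometry workflow. The commands follow SCT's documented CLI, but the file names are hypothetical and exact flags may vary by SCT version; this is not the verbatim pipeline from the poster.

```python
# Minimal sketch of the SCT morphometry step. Commands follow SCT's documented
# CLI, but file names are hypothetical and flags may vary by SCT version.
import subprocess

import pandas as pd

t2 = "sub-01_T2w.nii.gz"  # hypothetical input volume

# 1) segment the cord, 2) label vertebral levels, 3) compute per-level metrics
subprocess.run(["sct_deepseg_sc", "-i", t2, "-c", "t2"], check=True)
subprocess.run(["sct_label_vertebrae", "-i", t2,
                "-s", "sub-01_T2w_seg.nii.gz", "-c", "t2"], check=True)
subprocess.run(["sct_process_segmentation",
                "-i", "sub-01_T2w_seg.nii.gz",
                "-vertfile", "sub-01_T2w_seg_labeled.nii.gz",
                "-vert", "1:18",      # C1 through T11
                "-perlevel", "1",     # one row per vertebral level
                "-o", "metrics.csv"], check=True)

# Column names below match recent SCT releases (CSA and AP/RL diameters).
df = pd.read_csv("metrics.csv")
print(df[["VertLevel", "MEAN(area)", "MEAN(diameter_AP)", "MEAN(diameter_RL)"]])
```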
Modeling. A custom 3D CNN with feature fusion learned from the structural measures, plus age and height, to (1) classify SCI vs TD and (2) predict AIS severity (grades A-D). For comparison, I also trained support vector machine (SVM) and random forest (RF) models on the same features.
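The poster does not spell out the architecture, so the PyTorch sketch below is only one plausible reading of "3D CNN with fusion": an image branch over the T2 volume fused with a tabular branch over the structural measures plus age and height. Layer sizes, the 56-feature count, and the dummy inputs are illustrative, not the actual model.

```python
# One plausible reading of "3D CNN with fusion"; all sizes are guesses,
# not the poster's actual architecture.
import torch
import torch.nn as nn

class FusionCNN(nn.Module):
    def __init__(self, n_tabular: int, n_classes: int):
        super().__init__()
        self.image_branch = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),  # -> (B, 32)
        )
        self.tabular_branch = nn.Sequential(nn.Linear(n_tabular, 32), nn.ReLU())
        self.head = nn.Linear(32 + 32, n_classes)   # concatenated features -> logits

    def forward(self, volume, tabular):
        fused = torch.cat([self.image_branch(volume),
                           self.tabular_branch(tabular)], dim=1)
        return self.head(fused)

# The study trains two such heads: SCI vs TD (n_classes=2) and AIS A-D (n_classes=4).
model = FusionCNN(n_tabular=56, n_classes=4)  # 56 = 18 levels x 3 metrics + age + height (illustrative)
logits = model(torch.randn(2, 1, 32, 64, 64), torch.randn(2, 56))
```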
Results.
- Significant group differences (P < 0.05) for CSA and AP/RL widths, SCI vs TD (see the test sketch after this list).
- SCI vs TD classification: 96.59% accuracy with the CNN, well above the SVM (68.89%) and RF (74.00%).
- AIS severity prediction: 94.92% accuracy across categories, with strong precision/recall for AIS A-B.
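The poster reports the group differences without naming the statistical test. As a hedged illustration, a per-metric Welch's t-test (no equal-variance assumption) is one minimal way to run that comparison; the group sizes below match the study, but the values are synthetic.

```python
# Assumed test for illustration only: Welch's t-test per metric.
# Group sizes match the study (20 SCI, 41 TD); values are synthetic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
sci = rng.normal(55.0, 8.0, size=20)  # e.g., one level's CSA in mm^2, SCI group
td = rng.normal(68.0, 7.0, size=41)   # TD controls

t, p = stats.ttest_ind(sci, td, equal_var=False)  # Welch: unequal variances allowed
print(f"t = {t:.2f}, P = {p:.2g}")  # repeat per metric; correct for multiple comparisons
```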

What I’m taking back to the lab
- Trust signals: ship calibrated confidence with every classification; keep the similar-case panel (calibration sketch after this list).
- Viewer-agnostic outputs: ensure overlays and measurements render consistently in zero-install viewers.
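On "calibrated confidence": one common post-hoc option is temperature scaling, which fits a single scalar T on held-out validation logits and ships softmax(logits / T) as the confidence. This is a generic sketch of that technique, not our pipeline's actual calibration code.

```python
# Generic temperature scaling; an assumption about how "calibrated
# confidence" could be implemented, not our pipeline's actual code.
import torch
import torch.nn.functional as F

def fit_temperature(logits: torch.Tensor, labels: torch.Tensor) -> float:
    """Minimize the NLL of softmax(logits / T) over a single scalar T."""
    log_t = torch.zeros(1, requires_grad=True)  # optimize log T so T stays positive
    opt = torch.optim.LBFGS([log_t], lr=0.1, max_iter=50)

    def closure():
        opt.zero_grad()
        loss = F.cross_entropy(logits / log_t.exp(), labels)
        loss.backward()
        return loss

    opt.step(closure)
    return log_t.exp().item()

# Usage on dummy 4-class (AIS A-D) validation outputs: report softmax(logits / T)
# as the confidence attached to each classification.
val_logits = torch.randn(200, 4)
val_labels = torch.randint(0, 4, (200,))
T = fit_temperature(val_logits, val_labels)
confidences = F.softmax(val_logits / T, dim=1)
print(f"fitted T = {T:.2f}")
```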
What surprised me
- Simple visuals beat ornate ones. A small table with units and thresholds beat three elaborate heatmaps every time.
- Everyone wants the same three exports: PDF for quick review, CSV for analysis, and a tiny zip of overlays to spot-check alignment (a minimal export sketch follows).
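As a tiny illustration of the CSV and overlay-zip exports (PDF generation omitted; all names and values are placeholders):

```python
# Two of the three exports: per-level metrics to CSV, overlay images into a
# flat zip for spot-checking. File names and values are placeholders.
import zipfile
from pathlib import Path

import pandas as pd

# Hypothetical per-level metrics for one case.
metrics = pd.DataFrame({"VertLevel": ["C2", "C3", "C4"],
                        "CSA_mm2": [64.1, 66.8, 65.2]})
metrics.to_csv("case_metrics.csv", index=False)

# Zip the overlay PNGs with flat names so any zero-install viewer can open them.
with zipfile.ZipFile("overlays.zip", "w", zipfile.ZIP_DEFLATED) as zf:
    for png in sorted(Path("overlays").glob("*.png")):
        zf.write(png, arcname=png.name)
```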
Closing
My first RSNA felt big from the outside and very specific from the inside: make results easy to check, easy to reuse, and hard to misunderstand. That’s the bar I’m taking back to the lab.