Deep Learning Powered Classification and Analysis of Standard Heart Views in Ultrasound Imaging for Enhanced and Quicker Cardiac Diagnosis

Presenter Information

Abhirama Sonny

Category

High School

Department

Biomedical Health Science

Student Status

High School

Research Advisor

Dr. Marissa Sparacin

Document Type

Event

Location

Student Center Ballroom

Start Date

10-4-2025 2:00 PM

End Date

10-4-2025 4:00 PM

Description

Cardiovascular diseases (CVDs) account for approximately 17.9 million deaths annually, underscoring the need for more efficient and accurate diagnostic tools. Echocardiography, the primary imaging modality for cardiac assessment, relies on manual interpretation, which introduces variability and diagnostic delays. This project leverages deep learning to enhance echocardiographic analysis and automate key diagnostic tasks. A Convolutional Neural Network (CNN) and a Vision Transformer (ViT) were trained on 1.4 million echocardiogram videos, achieving a validation accuracy of 93.7%. The models extract critical cardiac parameters, including Ejection Fraction (EF), End-Systolic Volume (ESV), End-Diastolic Volume (EDV), and Tricuspid Annular Plane Systolic Excursion (TAPSE), with mean absolute errors (MAE) of 3.1%, 4.5 mL, 6.8 mL, and 1.2 mm, respectively, relative to human annotations. To improve interpretability, a Retrieval-Augmented Generation (RAG) model integrates domain-specific knowledge to generate structured diagnostic reports. A vector search algorithm retrieves similar cases from an annotated database, allowing patient metrics to be compared against historical trends. Additionally, a Large Language Model (LLM) enables interactive querying, helping clinicians refine differential diagnoses and contextualize findings. This approach demonstrates the potential of artificial intelligence to streamline echocardiographic analysis, reduce diagnostic variability, and improve access to cardiovascular disease detection. The primary goal is to help general health professionals who may lack cardiology expertise interpret ultrasound images during emergencies, while also assisting cardiovascular specialists in making diagnoses.
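Two of the quantities above are directly related: EF is derived from EDV and ESV by the standard clinical formula EF = (EDV − ESV) / EDV × 100. The retrieval step can likewise be sketched as a nearest-neighbor search over case embeddings. The following is a minimal illustrative sketch, not the project's actual pipeline; the `embedding` field and database schema are hypothetical.

```python
import math

def ejection_fraction(edv_ml: float, esv_ml: float) -> float:
    """Ejection Fraction (%) from End-Diastolic and End-Systolic Volumes.

    Standard clinical definition: EF = (EDV - ESV) / EDV * 100.
    """
    return (edv_ml - esv_ml) / edv_ml * 100.0

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def retrieve_similar(query_vec: list[float], database: list[dict]) -> list[dict]:
    """Rank annotated cases by similarity to the query embedding.

    `database` is a hypothetical list of {"embedding": [...], ...} records;
    a production system would use an approximate-nearest-neighbor index.
    """
    return sorted(
        database,
        key=lambda case: cosine_similarity(query_vec, case["embedding"]),
        reverse=True,
    )

# Example: EDV 120 mL, ESV 50 mL -> EF of (120 - 50) / 120 * 100 = 58.3%
print(round(ejection_fraction(120.0, 50.0), 1))  # prints 58.3
```

In practice a reported EF MAE of 3.1% can therefore follow either from direct regression or from the volume estimates, which is why volume and EF errors are usually reported together.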

