Featured
Multi-modal Learning for Biomedical Image Analysis and Visualisation
Seminar presented by Prof. Jinman Kim (https://scholar.google.com/citations?user=pLVaMiAAAAAJ&hl=en&oi=ao)
Date: Friday, November 8th, 2024
Abstract:
Medical imaging plays a pivotal role in patient management in modern healthcare, with most patients treated in hospitals undergoing imaging procedures. These technologies can visualise anatomy and function in virtually every organ system in the body in intricate detail. Numerous medical imaging modalities are available; they vary in complexity and sophistication, from plain digital chest X-rays to simultaneous functional and anatomical imaging with positron emission tomography (PET) and computed tomography (CT) imaging (PET-CT). The challenge now is how to maximise the extraction of meaningful information from these images and present it to users. Strategies are needed to harness knowledge from vast image datasets and complementary sources such as image sequences, text reports, and genomics. Fortunately, the era of artificial intelligence (AI) is fuelling the growth of smart decision support and analysis tools for medical image analysis. Despite rapid advancements in integrating AI algorithms into clinical decision support systems, we are still in the nascent stages of the AI revolution in medical imaging. This talk will present our research on cross-modal learning to integrate imaging and complementary data for disease modelling, analysis and visualisation, aimed at improving understanding in an intuitive way.
Short Bio:
Jinman Kim is a Professor of Computer Science at the University of Sydney. He received his PhD from the University of Sydney in 2006 and was an Australian Research Council (ARC) Postdoctoral Research Fellow at the University of Sydney and then a Marie Curie Senior Research Fellow at the University of Geneva prior to rejoining the University of Sydney in 2013 as a faculty member. He is currently an ARC Industry Fellow, closely collaborating with his industry partner, Royal Prince Alfred Hospital, to conduct translational research. He co-leads the Faculty of Engineering's Digital Health Imaging pillar of the Digital Science Initiatives (DSI), with the vision of combining the Faculty's expertise in AI applied to medical image analysis. He is also the Director of the Telehealth and Technology Center, Nepean Hospital. Prof Kim's research is on the application of machine learning to biomedical image analysis and visualisation. His focus is on cross-modal and multi-modal learning, which includes biomedical visual-language representations, image-omics, multi-modal data processing, and biomedical mixed reality technologies. He established and leads the Biomedical Data Analysis and Visualisation (BDAV) Lab at the School of Computer Science. He has produced a number of publications in this field and received multiple com