Postgraduate research project

Creating explainable AI for health with confidence

Funding
Competition funded
Type of degree
Doctor of Philosophy
Entry requirements
2:1 honours degree
Faculty graduate school
Faculty of Engineering and Physical Sciences
Closing date

About the project

This project will develop advanced machine learning methods to transform heterogeneous, routinely collected medical data into explainable, uncertainty-aware insights for healthcare decision-making. Combining multimodal data, longitudinal modelling, and personalised uncertainty quantification, the project tackles critical challenges in digital health, improving disease prediction, monitoring, and communication of confidence in clinical decisions.

Medical data are routinely recorded each time we visit a hospital or a general practitioner. Such data contain information that can be used to understand diseases better. The challenge is that these data suffer from various quality problems, are recorded irregularly, and come in many formats, ranging from blood-pressure readings to images of the brain or retina. Additionally, people themselves are inherently heterogeneous. This project will tackle these challenges: it will propose and develop methods at the intersection of deep neural networks and longitudinal predictive modelling to create automated monitoring that is explainable, uncertainty-aware, and robust, supporting more accurate decision-making.
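To make the data challenge concrete, the toy sketch below shows the kind of irregular, multimodal patient record described above, together with last-observation-carried-forward, one simple way to align irregular measurements onto a common time grid. It is purely illustrative: the class and field names are hypothetical, and this is not the project's method.

```python
# Illustrative only: a toy irregular, multimodal patient record.
# All names here (Observation, PatientRecord, etc.) are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Observation:
    time_days: float   # days since first visit; visits are irregular
    modality: str      # e.g. "blood_pressure" or "retina_image"
    value: object      # a scalar, a vector, or an image array

@dataclass
class PatientRecord:
    patient_id: str
    observations: list = field(default_factory=list)

    def last_value_before(self, modality: str, t: float):
        """Last-observation-carried-forward: return the most recent
        measurement of the given modality at or before time t."""
        prior = [o for o in self.observations
                 if o.modality == modality and o.time_days <= t]
        return max(prior, key=lambda o: o.time_days).value if prior else None

record = PatientRecord("p001", [
    Observation(0.0, "blood_pressure", 128.0),
    Observation(45.0, "blood_pressure", 134.0),
])
print(record.last_value_before("blood_pressure", 60.0))  # 134.0
```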

You will work on critical challenges, including heterogeneity across data sources and personalisation, to derive explainable artificial intelligence (AI) that accurately expresses its own confidence. Methods will include multimodal machine learning, image processing, and personalised uncertainty quantification. You will work with a multidisciplinary team at the forefront of digital healthcare research, contributing to impactful and user-centred health decision-making. The project also provides an opportunity to work with the public to understand their attitudes toward decisions and uncertainty. Research directions may include information-theoretic approaches to uncertainty quantification, communication of uncertainty to users, explainable AI, active learning, and collaboration with health and technology experts for broader healthcare impact.
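As a flavour of the information-theoretic uncertainty quantification mentioned above, one standard idea is to decompose a model's predictive entropy into an aleatoric part (noise in the data) and an epistemic part (the model's own doubt), given samples from an ensemble or repeated stochastic forward passes. The sketch below is a minimal, self-contained illustration of that decomposition, not the project's specific approach; the example probabilities are invented.

```python
# Minimal sketch: information-theoretic uncertainty decomposition.
# Total entropy H[mean p] = aleatoric E[H[p]] + epistemic (mutual information).
import numpy as np

def entropy(p, axis=-1, eps=1e-12):
    return -np.sum(p * np.log(p + eps), axis=axis)

def decompose_uncertainty(probs):
    """probs: (n_samples, n_classes) class probabilities for one patient,
    from an ensemble or repeated stochastic forward passes."""
    mean_p = probs.mean(axis=0)
    total = entropy(mean_p)            # entropy of the averaged prediction
    aleatoric = entropy(probs).mean()  # average entropy of each member
    epistemic = total - aleatoric      # disagreement between members
    return total, aleatoric, epistemic

# Five hypothetical ensemble members that disagree about a diagnosis:
probs = np.array([[0.9, 0.1], [0.8, 0.2], [0.3, 0.7],
                  [0.85, 0.15], [0.4, 0.6]])
print(decompose_uncertainty(probs))
```

A confident but noisy prediction would show high aleatoric and low epistemic uncertainty; disagreement between members, as here, inflates the epistemic term, which is one signal that can be communicated to clinicians.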

This project is ideal if you're passionate about decision-support systems, machine learning, and personalised healthcare. Outcomes will contribute to better disease prediction, improved health management, and more responsive healthcare in a digital era. You will have the opportunity to collaborate with researchers in Digital Health and Biomedical Engineering, as well as researchers from the Faculty of Medicine, contributing to high-impact journals and conferences. You will have access to high-performance computing facilities.