Human-Centred Governance, Auditing, Assurance, and Oversight of AI in Healthcare
Advanced deep learning (DL) architectures, including convolutional neural networks (CNNs), transformers, language models (LMs), and agentic AI systems, are increasingly being applied across diverse healthcare contexts, ranging from clinical decision support and workflow automation to mental health care and long-term condition self-management. While advances in explainable AI have improved transparency at the model level, explainability alone is insufficient to ensure accountability, safety, and trust in real-world healthcare deployments, particularly for systems that exhibit autonomy, adapt over time, or interact continuously with patients and professionals.
This special session focuses on the human-centred governance of advanced deep learning architectures, including CNNs, transformers, LMs, and agentic AI systems in healthcare, shifting the field from explainability towards accountability, assurance, and effective oversight. The session will bring together interdisciplinary research addressing how responsibility, decision authority, and control can be meaningfully allocated between humans and AI systems across the AI lifecycle. Topics include, but are not limited to, human-in-the-loop and human-on-the-loop control strategies, AI auditing and assurance frameworks, white-box and hybrid oversight mechanisms, post-deployment monitoring, bias and safety evaluation, and regulatory readiness under emerging frameworks such as the EU AI Act. Contributions may span technical, methodological, and socio-technical perspectives, including case studies from clinical and non-clinical healthcare settings. Emphasis will be placed on operationalising responsible AI principles through practical governance mechanisms rather than abstract ethical claims. By foregrounding human roles in AI oversight and decision-making, this session aims to bridge the gap between AI innovation and trustworthy healthcare adoption, offering actionable insights for researchers, practitioners, and policymakers alike.
This session welcomes contributions from researchers and practitioners including, but not limited to:
- Responsible AI / AI governance academics (expertise in AI assurance, human oversight, or AI safety in healthcare)
- NHS or NHS-affiliated researchers working on AI deployment, evaluation, or digital health governance
- Human–AI interaction / HCI researchers with healthcare applications
- AI governance or regulatory experts (medical AI compliance, assurance cases)
- Researchers working on agentic AI or autonomous systems in healthcare or safety-critical domains
- GenAI safety or auditing researchers (auditing LLMs, post-deployment monitoring, red-teaming)
- Applied healthcare AI researchers with experience evaluating AI systems beyond accuracy metrics in DL architectures, including CNNs, transformers, LLMs, etc.
Organisers:
- Dr Baidaa Al-Bander, School of Computer Science and Mathematics, Keele University, UK, b.al-bander@keele.ac.uk
- Dr Marco Ortolani, School of Computer Science and Mathematics, Keele University, UK
- Matthew Cockayne, School of Computer Science and Mathematics, Keele University, UK
Submission format:
This Special Session welcomes both full-length papers (12 pages plus up to 2 pages of references) and abstracts (up to 5 pages including references). Click here for detailed submission guidelines and templates.
Deadline:
All deadlines, including the submission deadline and review timeline, are the same as those of the main conference. Please follow this link to see all the Important Dates.
If you have any questions regarding this Special Session, please contact the organisers.