The HUN-REN Institute of Philosophy cordially invites everyone to the upcoming talk by Steven Gouveia (University of Porto), entitled “Anyway, he looks like a [AI] Doc, doesn't he?”: Ethical Challenges in Medical AI, to be held on 11 February at 11 AM.

The talk and the subsequent discussion will be held in English.

Time: 11 February 2025, Tuesday, 11 AM (CET)

Venue: Institute of Philosophy, 1097 Budapest, Tóth Kálmán Street 4, Floor 7, Room 16.

You can find the event on Facebook via this link.

The event will be broadcast via the Zoom platform:
https://us06web.zoom.us/j/85861998074?pwd=3JfGaafFEU5h91WfKHlxdKksbGL9uB.1

Abstract:

The application of AI in Medicine (AIM) is making health practices more reliable, accurate, and efficient than Traditional Medicine (TM) by partly or fully assisting medical decision-making, for example through the use of deep learning in diagnostic imaging, treatment planning, or preliminary diagnosis. Yet most of these AI systems are pure “black boxes”: the practitioner understands the inputs and outputs of the system but has no access to what happens “inside” it and cannot offer an explanation, creating an opaque process that culminates in a Trust Gap on two levels: (a) between patients and medical experts; (b) between the medical expert and the medical process itself. This creates a “black-box medicine,” since the practitioner ought to rely (epistemically) on AI systems that are more accurate, fast, and efficient but are not (epistemically) transparent and do not offer any kind of explanation. In this talk, we aim to analyze a potential solution to the Trust Gap in AI Medicine. We argue that a specific approach to Explainable AI (xAI) can succeed in reintroducing explanations into the discussion by focusing on how medical reasoning relies on social and abductive explanations, and on how AI can potentially reproduce this kind of abductive reasoning.