Artificial intelligence, health care, and ethics. A conference at Newcastle University

Location: Old Library Building, room 2.22

Date: 8 September 2023

Funded by:

Programme


Time | Speakers | Title
11.00-11.30 | Stephen McGough and Jan Deckers | Introduction
11.30-12.00 | Adam Brandt & Spencer Hazel | Ghosting the shell: exorcising the AI persona from voice user interfaces
12.00-12.30 | Emily Postan | Good categories & uncanny kinds: ethical implications of health categories generated by machine learning
12.30-13.30 | Lunch break
13.30-14.00 | Emma Gordon | Moral expertise and Socratic AI
14.00-14.30 | Shauna Concannon | Living well with machines: critical perspectives on communicative AI
14.30-15.00 | Angus Robson | Moral communities in healthcare and the disruptive potential of advanced automation
15.00-15.30 | Andrew McStay | Automating Empathy and the Public Good: The Unique Case of Health
15.30-16.00 | Koji Tachibana | Virtue and humanity in an age of symbiosis with AI
16.00-16.30 | Jamie Webb | Trust, machine learning, and healthcare resource allocation
16.30-17.00 | Jan Deckers | Closure
18.00 onwards | Social dinner for presenters

Abstracts

Stephen McGough and Jan Deckers: Introduction

As a scholar researching the areas of machine learning, big data, and energy-efficient computing, Stephen will provide a sketch of what AI actually is. Jan will introduce the day, looking back at the presentations at the first event at Chiba University, and looking ahead at the presentations that will follow today.


Adam Brandt & Spencer Hazel: Ghosting the shell: exorcising the AI persona from voice user interfaces

Voice-based Conversational User Interfaces (CUIs or VUIs) are becoming increasingly ubiquitous in our everyday interactions with service providers and other organisations. To provide a more naturalistic user experience, Conversation Designers often seek to develop in their conversational agents what has been glossed as humanlikeness, namely design features that serve to anthropomorphise the machine. Guidelines suggest, for example, that designers invest time in developing a persona for the conversational agent, selecting an appropriate persona for the target user, and adapting conversational style to the user (for discussion, see Murad et al., 2019). Such anthropomorphic work may somewhat mitigate the challenges of requiring users to behave as if they are speaking with a human rather than at a machine. However, attempting to trigger a suspension of disbelief on the part of the user, allowing them to entertain the idea that what they are speaking at has some kind of sentience, raises a number of ethical concerns, especially where users may end up divulging more than they would like.

This presentation reports on work carried out by the authors in partnership with the healthcare start-up Ufonia. The company has developed a voice assistant, Dora, for carrying out routine clinical conversations across a number of hospital trusts (see Brandt et al. 2023). Using insights and methods from Conversation Analysis, the collaboration has worked to ensure that these conversation-framed user activities achieve the institutional aims of the phone call, while at the same time providing an unchallenging experience for the user. Conscious of the ethical dimension to which this gives rise, we discuss here how we help to pare down the importance of the agent persona (managing the risk of users treating the agent as a person), while curating a more natural user experience by modelling the speech synthesis on the normative patterns of everyday (institutional) talk.

References:

Brandt, A., Hazel, S., Mckinnon, R., Sideridou, K., Tindale, J. & Ventoura, N. (2023) ‘From Writing Dialogue to Designing Conversation: Considering the potential of Conversation Analysis for Voice User Interfaces’, in [Online]. 2023

Murad, C., Munteanu, C., Cowan, B.R. & Clark, L. (2019) Revolution or Evolution? Speech Interaction and HCI Design Guidelines. IEEE Pervasive Computing. 18 (2), 33–45.


Emily Postan: Good categories & uncanny kinds: ethical implications of health categories generated by machine learning

One area of particular interest among health applications of machine learning (ML) is its use not only in the detection of disease or disease risk factors, but also in generating new or refined diagnostic, prognostic, risk, and treatment response categories. Deep learning (DL), a powerful subset of ML, potentially offers marked opportunities in this area. This paper interrogates the ways in which such uses of DL may result not only in the classification of images, risks, or diagnoses, but also in reconfigured and novel ways of categorising people. It asks why categorisation by AI – and deep learning algorithms in particular – might have ethically significant consequences for the people thus categorised. The paper approaches these questions through the lens of philosophical treatments of ‘human kinds’. It asks to what extent AI-generated categorisations function, or could come to function, as human kinds. More specifically, it seeks to characterise how human kinds predicated on machine learning algorithms might differ, and differ in ethically significant ways, from existing human kinds that come about through social and historical processes and practices. It explores the potential impacts of AI-generated kinds on members’ experiences of inhabiting and exercising agency over their own categorisations. As such, this paper pursues a line of ethical inquiry that is distinct from, though complementary to, more familiar concerns about the risks of error and discrimination arising from AI-enabled decision-making in healthcare. The paper concludes that while the impacts of machine learning-generated person-categorisations may not be unequivocally negative, their potential to alter our identity practices and group memberships needs to be weighed against the assumed health dividends and accounted for in the development and regulation of trustworthy AI applications in healthcare.


Emma Gordon: Moral Expertise and Socratic AI

A central research question in social epistemology concerns the nature of expertise and the related question of how expertise in various domains (epistemic, moral, etc.) is to be identified (e.g., Goldman 2001; Quast 2018; Stichter 2015; Goldberg 2009). Entirely apart from this debate, recent research in bioethics considers whether and to what extent cognitive scaffolding via the use of artificial intelligence might be a viable non-pharmaceutical form of moral enhancement (e.g., Lara and Deckers 2020; Lara 2021; Gordon 2022; Rodríguez-López and Rueda 2023). A particularly promising version of this strategy takes the form of ‘Socratic AI’ — viz., an ‘AI assistant’ that engages in Socratic dialogue with users to assist in ethical reasoning non-prescriptively. My aim will be to connect these disparate strands of work in order to investigate whether Socratic-AI assisted moral enhancement is compatible with manifesting genuine moral expertise, and how the capacity of Socratic AI to improve moral reasoning might influence our criteria for identifying moral experts.


Shauna Concannon: Living well with machines: critical perspectives on communicative AI

Artificial Intelligence (AI) is having an ever-greater impact on how we communicate and interact. Over the last few years, smart speakers, virtual personal assistants and other forms of ‘communicative AI’ have become increasingly popular, and chatbots designed to support wellbeing and perform therapeutic functions are already available and widely used. In the context of health and social care, attention has begun to focus on whether an AI system can perform caring duties or offer companionship. As machines are positioned in increasingly relational roles, what are the societal and ethical implications, and should these interactions be modelled on human-human interaction?


In this talk, I will review recent developments in communicative AI, ranging from empathetic chatbots to storytelling applications tailored for children. Through this exploration, I will examine key risks and the potential for harm that these innovations may entail, and consider the implications that arise due to the ontological differences between human-machine and human-human communication. Finally, I will consider what is required to guide more intentional design and evaluation of these systems, with a more central focus on interaction, moral care and social conduct.


Angus Robson: Moral communities in healthcare and the disruptive potential of advanced automation

The importance of healthcare organizations as moral communities is increasingly supported in recent research. Advanced automation has the potential to affect such communities both positively and negatively. This study looks at one specific aspect of such potential impacts, here termed second-person displacement. Drawing on Stephen Darwall’s idea of the second-person standpoint, interpersonal events are proposed as a basic condition of the moral community, one that is threatened by automation under certain conditions of second-person displacement. This threat arises in two particular respects: pervasiveness and humanness. If these two kinds of threat are understood and acknowledged, they can be mitigated by good design and planning. Some principles are suggested to assist strategies for the responsible management of increasingly advanced automation, including protecting critical contact, building the resilience of the moral community, and resisting deception.


Andrew McStay: Automating Empathy and the Public Good: The Unique Case of Health

Once off-limits, the boundaries of personal space and the body are being probed by emotional AI and technologies that simulate properties of empathy. This is occurring in worn, domestic, quasi-private and public capacities. The significance is environmental, in that overt and ambient awareness of intimate dimensions of human life raises questions about human-system interaction, privacy, security, civic life, influence, regulation, and the moral limits of datafying those intimate dimensions. This talk will discuss the historical context of these technologies, current technological development, existing and emergent use cases, and whether remote profiling of emotion and datafied subjectivity is acceptable, and (if so) on what terms. While automated empathy is problematic in method and application, especially in commercial uses, its use in healthcare is observed to raise unique questions. Tabling some of the legal and moral concerns, the talk will also flag citizen opinion obtained by the Emotional AI Lab (which McStay directs) on the use of automated empathy and emotional AI in healthcare. This, it will be argued, is instructive for debate on the moral limits of automating empathy.


Koji Tachibana: Virtue and humanity in an age of symbiosis with AI

The social implementation of AI will accelerate in the near future. In such a society, we humans will communicate, work and live together with AI. What is human virtue or humanity in such a symbiotic way of living with AI? Examining several discussions of possible social implementations, I consider this question and argue that symbiotic life with AI is an excellent opportunity to understand humanity because we can perceive an absolute difference between humans and AI.


Jamie Webb: Trust, machine learning, and healthcare resource allocation

One use of machine learning in healthcare is the generation of clinical predictions that may inform healthcare resource allocation decisions. For example, a machine learning algorithm used to predict post-transplant benefit could be involved in decisions about who is prioritised for transplant. In interviews I have conducted with patients with lived experience of high-stakes algorithmic resource allocation, patients have occasionally expressed that they trusted how prioritisation was determined because they trusted the clinical staff involved in their care. This trust may be grounded in experience of compassionate and competent care. However, these demonstrations of trustworthiness may have nothing to do with the way algorithmic resource allocation decisions are made. Is this a problem? This presentation will explore the question using philosophical theories of trust and trustworthiness, considering the particular challenges machine learning might bring to patient trust and the clinical staff-patient relationship.
