{"id":73,"date":"2023-08-23T10:25:11","date_gmt":"2023-08-23T09:25:11","guid":{"rendered":"https:\/\/blogs.ncl.ac.uk\/jandeckers\/?p=73"},"modified":"2023-08-23T10:25:12","modified_gmt":"2023-08-23T09:25:12","slug":"artificial-intelligence-health-care-and-ethics-a-conference-at-newcastle-university","status":"publish","type":"post","link":"https:\/\/blogs.ncl.ac.uk\/jandeckers\/2023\/08\/23\/artificial-intelligence-health-care-and-ethics-a-conference-at-newcastle-university\/","title":{"rendered":"<strong>Artificial intelligence, health care, and ethics. <\/strong><strong>A conference at Newcastle University<\/strong>"},"content":{"rendered":"\n<p>Location: Old Library Building, room 2.22<\/p>\n\n\n\n<p>Time: 8 September 2023<\/p>\n\n\n\n<p>Funded by:<\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"490\" height=\"163\" src=\"https:\/\/blogs.ncl.ac.uk\/jandeckers\/files\/2023\/08\/image.png\" alt=\"\" class=\"wp-image-78\" srcset=\"https:\/\/blogs.ncl.ac.uk\/jandeckers\/files\/2023\/08\/image.png 490w, https:\/\/blogs.ncl.ac.uk\/jandeckers\/files\/2023\/08\/image-300x100.png 300w\" sizes=\"auto, (max-width: 490px) 100vw, 490px\" \/><\/figure>\n\n\n\n<h1 class=\"wp-block-heading\">Programme<\/h1>\n\n\n\n<h1 class=\"wp-block-heading\">&nbsp;<\/h1>\n\n\n\n<figure class=\"wp-block-table\"><table><tbody><tr><td><strong>Time<\/strong><\/td><td><strong>Speakers<\/strong><\/td><td><strong>Title<\/strong><\/td><\/tr><tr><td>11-11.30<\/td><td>Stephen McGough and Jan Deckers<\/td><td>Introduction<\/td><\/tr><tr><td>11.30-12.00<\/td><td>Adam Brandt &amp; Spencer Hazel<\/td><td>Ghosting the shell: exorcising the AI persona from voice user interfaces<\/td><\/tr><tr><td>12.00-12.30<\/td><td>Emily Postan<\/td><td>Good categories &amp; uncanny kinds: ethical implications of health categories generated by machine learning<\/td><\/tr><tr><td>12.30-13.30<\/td><td>Lunch break<\/td><td>Lunch break<\/td><\/tr><tr><td>13.30-14.00<\/td><td>Emma Gordon<\/td><td>Moral expertise and Socratic AI<\/td><\/tr><tr><td>14.00-14.30<\/td><td>Shauna Concannon<\/td><td>Living well with machines: critical perspectives on communicative AI<\/td><\/tr><tr><td>14.30-15.00<\/td><td>Angus Robson<\/td><td>Moral communities in healthcare and the disruptive potential of advanced automation<\/td><\/tr><tr><td>15.00-15.30<\/td><td>Andrew McStay<\/td><td>Automating Empathy and the Public Good: The Unique Case of Health<\/td><\/tr><tr><td>15.30-16.00<\/td><td>Koji Tachibana<\/td><td>Virtue and humanity in an age of symbiosis with AI<\/td><\/tr><tr><td>16.00-16.30<\/td><td>Jamie Webb<\/td><td>Trust, machine learning, and healthcare resource allocation<\/td><\/tr><tr><td>16.30-17.00<\/td><td>Jan Deckers<\/td><td>closure<\/td><\/tr><tr><td>18.00 onwards<\/td><td>Social dinner for presenters<\/td><td>&nbsp;<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<h1 class=\"has-large-font-size wp-block-heading\"><a><\/a>Abstracts<\/h1>\n\n\n\n<h1 class=\"has-medium-font-size wp-block-heading\"><strong>Stephen McGough and Jan Deckers: Introduction<\/strong><\/h1>\n\n\n\n<h1 class=\"wp-block-heading\"><strong>As a scholar researching the areas of machine learning, big data, and energy efficient computing, Stephen will provide a sketch of what AI actually is. 
Jan will introduce the day, looking back at the presentations at the first event at Chiba University, and looking ahead at the presentations that will follow today.

### Adam Brandt & Spencer Hazel: Ghosting the shell: exorcising the AI persona from voice user interfaces

Voice-based Conversational User Interfaces (CUIs or VUIs) are becoming increasingly ubiquitous in our everyday interactions with service providers and other organisations. To provide a more naturalistic user experience, conversation designers often seek to develop in their conversational agents what has been glossed as humanlikeness, namely design features that serve to anthropomorphise the machine. Guidelines suggest, for example, that designers invest time in developing a persona for the conversational agent, selecting a persona appropriate to the target user, and adapting conversational style to the user (for discussion, see Murad et al., 2019). Such anthropomorphic work may go some way towards mitigating the challenge of requiring users to behave as if they are speaking with a human rather than at a machine. However, attempting to trigger a suspension of disbelief on the part of the user, allowing them to entertain the idea that what they are speaking at has some kind of sentience, raises a number of ethical concerns, especially where users may end up divulging more than they would like.

This presentation reports on work carried out by the authors in partnership with the healthcare start-up Ufonia. The company has developed a voice assistant, Dora, for carrying out routine clinical conversations across a number of hospital trusts (see Brandt et al., 2023). Using insights and methods from Conversation Analysis, the collaboration has worked to ensure that these conversation-framed user activities achieve the institutional aims of the phone call, while at the same time providing an unchallenging experience for the user. Conscious of the ethical dimension to which this gives rise, we discuss here how we help to pare down the importance of agent persona (managing the risk of users treating the agent as a person), while curating a more natural user experience by modelling the speech synthesis on the normative patterns of everyday (institutional) talk.

References:

Brandt, A., Hazel, S., Mckinnon, R., Sideridou, K., Tindale, J. & Ventoura, N. (2023) 'From Writing Dialogue to Designing Conversation: Considering the potential of Conversation Analysis for Voice User Interfaces' [Online].

Murad, C., Munteanu, C., Cowan, B.R. & Clark, L. (2019) 'Revolution or Evolution? Speech Interaction and HCI Design Guidelines'. IEEE Pervasive Computing,
18 (2), 33–45.

### Emily Postan: Good categories & uncanny kinds: ethical implications of health categories generated by machine learning

One area of particular interest in health applications of machine learning (ML) is its use not only in detecting disease or disease risk factors, but also in generating new or refined diagnostic, prognostic, risk, and treatment-response categories. Deep learning (DL), a powerful subset of ML, potentially offers marked opportunities in this area. This paper interrogates the ways in which such uses of DL may result not only in the classification of images, risks, or diagnoses, but also in reconfigured and novel ways of categorising people. It asks why categorisation by AI, and by deep learning algorithms in particular, might have ethically significant consequences for the people thus categorised. The paper approaches these questions through the lens of philosophical treatments of 'human kinds'. It asks to what extent AI-generated categorisations function, or could come to function, as human kinds. More specifically, it seeks to characterise how human kinds predicated on machine learning algorithms might differ, and differ in ethically significant ways, from existing human kinds that come about through social and historical processes and practices. It explores the potential impacts of AI-generated kinds on members' experiences of inhabiting and exercising agency over their own categorisations. As such, this paper pursues a line of ethical inquiry that is distinct from, though complementary to, more familiar concerns about the risks of error and discrimination arising from AI-enabled decision-making in healthcare. The paper concludes that while the impacts of machine learning-generated person-categorisations may not be unequivocally negative, their potential to alter our identity practices and group memberships needs to be weighed against the assumed health dividends and accounted for in the development and regulation of trustworthy AI applications in healthcare.
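For readers unfamiliar with how a learning system can "generate" categories of people at all, the minimal sketch below shows the basic mechanism in miniature: an unsupervised algorithm groups patients by clinical features into clusters that no clinician named in advance. The features, data, and cluster count are all invented for illustration; this is not a system discussed in the talk.

```python
# Illustrative only: one mechanism by which machine learning can
# generate novel patient categories that no clinician named in advance.
# The features, data, and cluster count are invented for this sketch.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Hypothetical patient features: [age, biomarker level, symptom score].
patients = rng.normal(
    loc=[[55, 1.2, 3.0]] * 50 + [[40, 2.8, 7.5]] * 50,
    scale=0.5,
)

# Standardise the features, then let the algorithm propose groupings.
features = StandardScaler().fit_transform(patients)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)

# Each patient now belongs to "cluster 0" or "cluster 1": machine-made
# categories with no prior clinical name, the raw material for the
# novel 'kinds' the paper interrogates.
print(np.bincount(labels))
```

The ethical questions the paper raises begin where this sketch ends: once "cluster 1" acquires clinical currency, it may start to function as a kind that people inhabit.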
### Emma Gordon: Moral expertise and Socratic AI

A central research question in social epistemology concerns the nature of expertise and the related question of how expertise in various domains (epistemic, moral, etc.) is to be identified (e.g., Goldman 2001; Quast 2018; Stichter 2015; Goldberg 2009). Entirely apart from this debate, recent research in bioethics considers whether and to what extent cognitive scaffolding via the use of artificial intelligence might be a viable non-pharmaceutical form of moral enhancement (e.g., Lara and Deckers 2020; Lara 2021; Gordon 2022; Rodríguez-López and Rueda 2023). A particularly promising version of this strategy takes the form of 'Socratic AI': an 'AI assistant' that engages in Socratic dialogue with users to assist in ethical reasoning non-prescriptively. My aim will be to connect these disparate strands of work in order to investigate whether Socratic-AI-assisted moral enhancement is compatible with manifesting genuine moral expertise, and how the capacity of Socratic AI to improve moral reasoning might influence our criteria for identifying moral experts.
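As a purely illustrative aside, the defining constraint of a "Socratic" assistant can be made concrete in a few lines. In the sketch below, `query_model`, `SOCRATIC_INSTRUCTION`, and the instruction wording are all invented stand-ins, assumed for the example rather than drawn from Gordon's talk or the cited literature; a real implementation would plug in an actual language-model backend.

```python
# A minimal sketch of a 'Socratic AI' dialogue loop. `query_model` is a
# hypothetical stand-in for a language-model call; the instruction text
# below is an assumed paraphrase of non-prescriptive Socratic
# questioning, not a specification from the talk or the literature cited.
from typing import Callable

SOCRATIC_INSTRUCTION = (
    "You are a Socratic interlocutor for ethical reasoning. Never state "
    "which option is morally right. Respond only with questions that "
    "probe the user's stated reasons, hidden assumptions, and the "
    "consistency of their view with their other commitments."
)

def socratic_session(query_model: Callable[[str, str], str]) -> None:
    """Run a dialogue in which the assistant only ever asks questions."""
    history = ""
    while True:
        user_turn = input("You (blank line to stop): ").strip()
        if not user_turn:
            break
        history += f"\nUser: {user_turn}"
        reply = query_model(SOCRATIC_INSTRUCTION, history)
        history += f"\nAssistant: {reply}"
        print("Socratic AI:", reply)

if __name__ == "__main__":
    # Trivial stand-in so the sketch runs without any model backend; a
    # real query_model would send SOCRATIC_INSTRUCTION as the system
    # prompt together with the dialogue history.
    socratic_session(lambda instruction, history:
                     "What assumptions underlie what you just said?")
```

The point of the sketch is the constraint, not the plumbing: enhancement is sought by sharpening the user's own reasoning rather than by issuing verdicts, which is what the abstract marks out as non-prescriptive.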
### Shauna Concannon: Living well with machines: critical perspectives on communicative AI

Artificial Intelligence (AI) is having an ever-greater impact on how we communicate and interact. Over the last few years, smart speakers, virtual personal assistants and other forms of 'communicative AI' have become increasingly popular, and chatbots designed to support wellbeing and perform therapeutic functions are already available and widely used. In the context of health and social care, attention has begun to focus on whether an AI system can perform caring duties or offer companionship. As machines are positioned in increasingly relational roles, what are the societal and ethical implications, and should these interactions be modelled on human-human interaction?

In this talk, I will review recent developments in communicative AI, ranging from empathetic chatbots to storytelling applications tailored for children. Through this exploration, I will examine key risks and the potential for harm that these innovations may entail, and consider the implications that arise from the ontological differences between human-machine and human-human communication. Finally, I will consider what is required to guide more intentional design and evaluation of these systems, with a more central focus on interaction, moral care and social conduct.

### Angus Robson: Moral communities in healthcare and the disruptive potential of advanced automation

The importance of healthcare organizations as moral communities is increasingly supported in recent research. Advanced automation has the potential to affect such communities both positively and negatively. This study looks at one specific aspect of these potential impacts, here termed 'second-person displacement'. Drawing on Stephen Darwall's idea of the second-person standpoint, interpersonal events are proposed as a basic condition of the moral community, a condition that is threatened by automation under certain conditions of second-person displacement. This threat arises in two particular respects: pervasiveness and humanness. If these two kinds of threat are understood and acknowledged, they can be mitigated by good design and planning. Some principles are suggested to assist strategies for the responsible management of increasingly advanced automation, including protecting critical contact, building the resilience of the moral community, and resisting deception.

### Andrew McStay: Automating Empathy and the Public Good: The Unique Case of Health

Once off-limits, the boundaries of personal space and the body are being probed by emotional AI and technologies that simulate properties of empathy. This is occurring in worn, domestic, quasi-private and public capacities. The significance is environmental, in that overt and ambient awareness of intimate dimensions of human life raises questions about human-system interaction, privacy, security, civic life, influence, regulation, and the moral limits of such datafication. This talk will discuss the historical context of these technologies, current technological developments, existing and emergent use cases, and whether remote profiling of emotion and datafied subjectivity is acceptable and, if so, on what terms. Automated empathy is problematic in method and application, especially in commercial uses, but its use in healthcare raises unique questions. Tabling some of the legal and moral concerns, the talk will also present citizen opinion on the use of automated empathy and emotional AI in healthcare, gathered by the Emotional AI Lab (which McStay directs), and will argue that this is instructive for debate on the moral limits of automating empathy.

### Koji Tachibana: Virtue and humanity in an age of symbiosis with AI

The social implementation of AI will accelerate in the near future. In such a society, we humans will communicate, work and live together with AI. What is human virtue, or humanity, in such a symbiotic way of living with AI? Examining several discussions of possible social implementations, I consider this question and argue that symbiotic life with AI is an excellent opportunity to understand humanity, because it allows us to perceive an absolute difference between humans and AI.

### Jamie Webb: Trust, machine learning, and healthcare resource allocation

One use of machine learning in healthcare is the generation of clinical predictions that may be used in healthcare resource allocation decisions. For example, a machine learning algorithm used to predict post-transplant benefit could be involved in decisions regarding who is prioritised for transplant. When conducting interviews with patients with lived experience of high-stakes algorithmic resource allocation, patients have occasionally told me that they trusted how prioritisation was determined because they trusted the clinical staff involved in their care. This trust may be grounded in experience of compassionate and competent care. However, these demonstrations of trustworthiness may have nothing to do with the way algorithmic resource allocation decisions are made. Is this a problem?
This presentation will explore this question using philosophical theories of trust and trustworthiness, considering the particular challenges machine learning might bring to patient trust and the clinical staff-patient relationship.
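As a deliberately simplified, invented illustration of the allocation pattern the abstract describes (not any deployed scheme): a model's predicted post-transplant benefit scores are converted directly into a priority ordering. The patient identifiers and scores below are made up, and real allocation systems weigh many further criteria with human oversight.

```python
# Deliberately simplified sketch: predicted post-transplant benefit
# (e.g. expected life-years gained) is turned directly into a priority
# ordering. All identifiers and scores are invented; real allocation
# schemes weigh many further criteria with human oversight.
from dataclasses import dataclass

@dataclass
class Candidate:
    patient_id: str
    predicted_benefit_years: float  # model output, taken at face value

waiting_list = [
    Candidate("P-101", 6.2),
    Candidate("P-102", 9.8),
    Candidate("P-103", 4.1),
]

# The ethically loaded step: the model's score *is* the priority.
priority_order = sorted(
    waiting_list, key=lambda c: c.predicted_benefit_years, reverse=True
)

for rank, candidate in enumerate(priority_order, start=1):
    print(rank, candidate.patient_id, candidate.predicted_benefit_years)
```

The ethically salient point is the sorting step: the patient trust described in the abstract attaches to clinicians, while the ranking is fixed by a model output the clinicians did not produce.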