New publication: Deckers, J. Fundamentals of Critical Thinking in Health Care Ethics and Law, Ghent: Owl Press, 2023.

Available from various booksellers.

General information

As cutting-edge technologies continue to reshape the landscape of health care, we are faced with profound ethical and legal dilemmas on our journey towards a brighter future. This book invites you to develop your critical thinking skills in relation to a number of themes in bioethics and law, including our duties to care for each other, for nonhuman animals, and for the nonhuman world. While the book engages with the law as a source of guidance and food for thought, the emphasis, unlike in most publications in health care ethics and law, is on the development of critical thinking skills in ethics. Each chapter ends with a list of questions that act as prompts in your own critical thinking journey.

The book is printed on climate-neutral paper. Emissions are offset by supporting a clean drinking water scheme in Zoba Maekel, Eritrea, which helps communities renovate their boreholes so that people have access to clean water.

I provide the table of contents below, as well as a brief summary of each chapter.

Table of contents

Chapter 1: A short introduction to health care ethics and law

Chapter 2: Autonomy and its limits

Chapter 3: Duties of care, confidentiality, candour, and cost minimisation

Chapter 4: The creation and use of human embryos for human reproduction

Chapter 5: When is it acceptable to use non-human animals to promote human health?

Chapter 6: Research ethics

Chapter 7: Ethics in relation to pregnancy termination

Chapter 8: Is genetic engineering justified?

Chapter 9: Human embryo research in embryonic stem cell and cloning debates

Chapter 10: Ethical and legal issues related to the end of life

Concise summary (chapter-by-chapter)

Chapter 1: A short introduction to health care ethics and law

I argue that there is an urgent need to develop critical thinking skills in health care ethics and law, as the health care needs of a large number of organisms are in jeopardy, in spite of the fact that we have the capacities to address many of them. In order to do so, it is good to reflect upon one’s meta-ethical theory to determine what ethics is about. It is also important to reflect on how one’s values shape one’s principles and theories, and what ethical theory might be best to adopt. While much health care ethics theorising focuses on abstract/formal ethical theories that are applied insufficiently to reality, I argue that it is much more important to reflect upon different axiologies (theories of which concrete things/entities should be valued, and what value each has).

I argue for a theory that includes a deontological (duty-based) and a consequentialist element: the duty to promote positive consequences for one’s own health. This is not accompanied by an individualistic axiology. Rather, this theory is compatible with an axiology that ascribes intrinsic value to all entities. A crucial question here is what the intrinsic values of different things are, and how much value one should give to one entity relative to the value of another entity. Our axiologies are influenced by our reflections on what different entities are, which is the subject of ontology (theory of reality).

I outline two dominant ontologies, mechanistic materialism and dualism. I identify problems with both and sketch an alternative ontology, ‘panexperientialism’, that might both inspire and be inspired by a different outlook on what matters.

The practice of health care ethics is not only shaped by ethics, but also by different health care professions and by the law. This is why health care professionals and patients must take heed of relevant professional guidance and law, while avoiding legalistic approaches to health care.

The chapter concludes by providing some practical tools that can be used in ethical reasoning, including the use of logic, analogies, and thought experiments. These tools are applied to different areas of health care ethics in the ensuing chapters.

Questions raised by this chapter:

1. What are the different meta-ethical theories that have been described in this chapter and why might meta-ethical reflection be important?

2. What is your theory of health care ethics?

3. What does it mean to ascribe intrinsic value, which entities should be valued intrinsically, and how would you weigh up different entities’ values?

4. What ontology do you adopt and how might this inform your ethical theory?

5. What is the relevance of professional guidance and law for health care ethics?

6. Do you agree with the view that logic is important in health care ethics? Justify your answer.

7. Could you provide an example of how an analogy or a thought experiment might be helpful in health care ethics?

8. Why might legalism be a problem?

9. What is your view on the (ir)relevance of slippery slope arguments?

10. ‘Plants are sentient beings. Therefore, plants should be valued intrinsically.’ Do you think that this argument is logically valid?

Chapter 2: Autonomy and its limits

I argue that the concept of autonomy is relevant in health care and that health care professionals should reflect critically on what the law demands from them when human patients are unable to consent due to a lack of autonomy. I also argue that the need to balance the values of autonomy and beneficence can present great difficulties when health care professionals consider the health care interests of children, including their interests in safeguarding. The chapter ends with a discussion of the value of liberty and how it may need to be limited for health reasons in some situations.

Questions raised by this chapter:

1. What should health care professionals do in order to make sure that patients consent?

2. What should health care professionals do in situations where patients lack capacity?

3. Why might it be appropriate for health care professionals to consider advance refusals from patients who lack capacity?

4. In what circumstances would you condone restricting someone’s liberty for health reasons?

5. Do you agree with the view that there are some aspects of care that patients should not be allowed to refuse?

6. How should health care professionals decide whether or not to provide health care treatment to a child?

7. What counts as child abuse?

8. What should health care professionals do when they think that continued treatment of an infant is not in the infant’s best interests and when the parents insist on its continuation?

9. Do you agree with the view that a competent child’s views on medical treatment should be allowed to be overridden?

10. How should a health care professional handle a situation where they discover that a child has been subjected to female genital mutilation?

Chapter 3: Duties of care, confidentiality, candour, and cost minimisation

I discuss the duties of care, confidentiality, candour, and cost minimisation. As health care professionals can fail in these duties intentionally or through being reckless, careful attention must be paid to how these duties can be fulfilled and to how some of these might need to be balanced with other moral considerations.

Questions raised by this chapter:

1. How can health care professionals ensure that they act in accordance with their duties of care?

2. What should be demonstrated to determine whether a health care professional has breached their duty of care?

3. What should health care professionals do to safeguard patients’ right to confidentiality?

4. In what situations might it be appropriate for health care professionals to divulge confidential patient information to third parties?

5. What should a health care professional do if the police ask for information about a patient to investigate a potential offence that took place on a road?

6. How can health care professionals ensure that they act in accordance with their duty of candour?

7. When might it be appropriate to mislead patients?

8. What might be the benefits and disadvantages of using the notion of QALY in decisions about how to allocate funding for different treatments?

9. How would you decide between offering a lung transplant to a 75-year-old person who recently stopped smoking and a 25-year-old person who has never smoked when both are clinically equally suitable for transplantation?

10. Which criteria would you use to discriminate between patients who may need intensive care due to infection with a coronavirus when not all patients can receive treatment on the intensive care unit?

Chapter 4: The creation and use of human embryos for human reproduction

I provide an overview of the views adopted in the Warnock Report and in UK law on the use of embryos for reproductive purposes. I show that the arguments underpinning this framework do not provide a firm foundation for legislation. I recognise that, while it is one thing to undermine a range of arguments that have been used to deny high moral status to the young embryo, it is another matter to make a convincing case for why the young embryo should be granted such status. It is important to recognise that people who debate human embryo research often portray the young embryo as if he or she were an abstract, alien entity, the product of those who experiment with substances in test tubes in laboratories. The moral position that young embryos lack high status might be favoured by this mode of representation. At the same time, however, some modern technologies, for example, ultrasound sonography, allow us to represent embryos and foetuses in more concrete ways than has been possible until recently. This might perhaps make it more likely for some to be able to empathise with them, and prompt them to assign a higher status to them than they might have done otherwise. My view is that we should grant equal moral significance to all human beings. I am uncomfortable with the idea that we should value some human beings more than others. I also argue that health care professionals and patients should consider a number of other issues related to fertility treatments, including the use of PGD, sex selection, the creation of ‘saviour siblings’, mitochondrial donation, and issues related to who should be able to access (information about) such treatments.

Questions raised by this chapter:

1. What are the main issues associated with the creation and use of human embryos for human reproduction?

2. What is the UK legal framework on embryo research, what are its ethical underpinnings, and how has it influenced other jurisdictions?

3. What is the position on embryo research developed by the Committee of Inquiry into Human Fertilisation and Embryology?

4. How has the Committee of Inquiry into Human Fertilisation and Embryology influenced different laws on embryo research?

5. What is the argument from sentience? Is it valid?

6. What is the argument from individuality? Is it valid?

7. What is the argument from twinning? Is it valid?

8. What are the key issues associated with pre-implantation genetic diagnosis?

9. When, if ever, should pre-implantation genetic diagnosis be acceptable to diagnose disability?

10. When, if ever, should pre-implantation genetic diagnosis be acceptable to diagnose the sex of an embryo?

11. When, if ever, should pre-implantation genetic diagnosis be acceptable to diagnose whether an embryo is a suitable tissue match?

12. When, if ever, should mitochondrial donation be allowed?

13. What should be the conditions for someone to be allowed to receive fertility treatment?

14. What should be the conditions for someone to be allowed to donate gametes?

15. When, if ever, should those who are conceived with donated gametes have access to genetic information about their donors, and what information should they be allowed to access?

Chapter 5: When is it acceptable to use non-human animals to promote human health?

In this chapter I grapple with the question of when it might be acceptable to use non-human animals to promote human health. I start with the observation that people use non-human animals in various ways to promote human health, and explore two common ways in which they are used: their use in research and their use for human nutrition.

With regard to research, I sketch some laws that regulate the use of non-human animals, highlighting in particular that widespread support for the principle of necessity and the 3Rs calls into question many projects that use non-human animals, given that such animals are poor models for human beings. In addition, I engage with the question of whether non-human animals should be used to model human health and illness even if they might be good models, where I argue that an account of the moral standing of different non-human animals must be based on evolutionism. In this light, it would be particularly problematic to use non-human animals for research that does not benefit them where the animals are closely related to us.

With regard to the human use of non-human animals for food, I argue that, if the underlying reasoning is applied consistently across different domains, EU legislation on the use of non-human animals for research would lend significant support to changing the laws on the use of non-human animals for food, resulting in a drastic curtailment of the human consumption of animal products. I sketch the moral arguments underpinning qualified moral veganism, which is defended against some challenges. The chapter also considers the ethical issues related to radically novel ways in which animal products could be produced, including the development of lab-grown meat.

Questions raised by this chapter:

1. What might be the reasons behind the fact that most books on health care ethics and law do not consider the use of non-human animals to promote human health? Why do you (dis)agree with them?

2. What moral theory do you advocate in relation to the human use of non-human animals?

3. What are the key issues to consider when human beings use non-human animals for research?

4. What is the relevant law on the use of non-human animals for research, and what legal change do you advocate, if any?

5. What do the positions of Singer, Regan, and Midgley entail for the use of non-human animals for research?

6. What would the EU law on the use of non-human animals for research imply for the human use of non-human animals for food, if the law in relation to the latter was made consistent with the law in relation to the former?

7. What moral reasons might someone adopt in support of carnism and in support of qualified veganism?

8. What arguments could be used to support or undermine the use of non-human animals for human nutrition?

9. How would you evaluate the morality of technologies that aim to produce lab-grown meat?

10. What useful functions, if any, might be fulfilled by committees that evaluate particular projects to use non-human animals? Justify your answer.

Chapter 6: Research ethics

In this chapter I engage with generic issues that apply to research projects, as well as with more specific issues that pertain to research that is carried out in clinical health care contexts. I identify the benefits and disadvantages of different types of clinical studies and discuss whether clinical trials should only take place when there is clinical equipoise. A failure to conduct RCTs in particular may be unethical and may result in a stagnation of ideas, a misplaced trust in unsystematised clinical experience, little development in available treatments, and a waste of resources. I also discuss the relevance of complementary therapies, question the use of alternative treatments, set out why research ethics committees play a valuable role in health care research, and how those who sit on such committees might go about evaluating research projects.

Questions raised by this chapter:

1. Why is consent important in relation to research?

2. Do you think research should ever be allowed without consent from participants? Justify your answer.

3. What do you think about the view that any research should be allowed, as long as participants consent?

4. What are the key ethical features of the relevant laws in relation to health care research?

5. Why should many RCTs never take place? Justify your answer.

6. What safeguards should there be to make sure that RCTs do not expose participants to disproportionate risks?

7. Should people ever be incentivised to participate in research studies?

8. What do you explain to potential participants when you want to recruit them to your study?

9. Should children be allowed to participate in research? Justify your answer.

10. What is your view about the opinion that health care trials should only be allowed if there is clinical equipoise?

Chapter 7: Ethics in relation to pregnancy termination

I propose how abortion legislation in the United Kingdom should be modified if it were informed by the view that all unborn human beings should be granted a right to life that should be allowed to be trumped in a limited number of situations. I argue that the current distinctions in the legal provisions for ‘able’ and ‘disabled’ foetuses as well as for ‘implanted’ and ‘unimplanted’ embryos cannot be maintained, and that greater protection of all human life must be enshrined into law. I also argue that there should only be a limited right to conscientious objection to participating in the provision of abortion services. There should be no right to object conscientiously to providing abortion services when there is a great risk that a pregnant woman’s life might be lost should the pregnancy be continued, and no right to refuse pregnancy counselling and referral of those who satisfy any of the revised legal grounds.

I recognise that whether abortion law is altered in line with this proposal both depends, and should depend, on whether a valid democratic process is instigated towards legal reform. It is my hope that, if abortion legislation were amended in accordance with this proposal, health care professionals would provide those services that women should be entitled to, give serious consideration to facilitating or providing abortions that should be allowed, and reject those that should be prohibited.

Questions raised by this chapter:

1. What are the salient points of the law on abortion in the different jurisdictions of the United Kingdom?

2. What should a health care professional consider when a patient requests an abortion?

3. Why might abortion pose a moral problem for health care professionals?

4. What do you think should be the legal boundaries regarding the right to conscientious objection related to abortion?

5. Should abortion be allowed without any restrictions? Justify your answer.

6. What shape should the law on abortion have? Justify your answer.

7. What is your position on the legality of using medicines that might be abortifacient?

8. If one adopts human egalitarianism, would it imply that abortion should never be allowed? Justify your answer.

9. Do you think men should have any say in relation to whether or not an abortion should be allowed? Justify your answer.

10. Should everyone who wants it have free access to IVF treatments? Justify your answer.

Chapter 8: Is genetic engineering justified?

In this chapter I discuss ethical issues related to genetic engineering. While there is no doubt that genetics has advanced our understanding of health and illness a great deal, technologies that use the science of genetics can both promote and undermine health. Physical health can be improved and undermined, both directly and indirectly, through genetic engineering. The same applies to mental health. With regard to the mental health impacts of genetic engineering, a significant concern that has received relatively little attention in the literature is the concern that we ought to avoid creating unnatural things, and that genetic engineering is unnatural.

Although nothing is unnatural in the sense that everything is part of nature, I argue that the widely used distinction between the natural and the unnatural is nevertheless not meaningless. A semantic distinction between the natural and the unnatural can be drawn whereby the latter pertains to that which is affected by human culture and the former to everything else. More importantly, I argue that the fact that human culture pervades many natural events does not eliminate the distinction, but that it is appropriate to situate the natural and the unnatural at opposite ends of a spectrum. Where an entity is situated along this spectrum depends on the likelihood with which its specific essence might have come about counterfactually, which in this case means naturally. I distinguish between three gradations of unnaturalness, in spite of this continuity.

This distinction between the natural and the unnatural has moral relevance. While we must adopt a prima facie duty to safeguard the integrity of nature, the integrity of nature should not be protected at all costs. Doing so would stifle all human activity. In order to flourish, Homo faber must alter nature. However, an action that alters a natural entity’s teleology more significantly is, ceteris paribus, more problematic than one that does so less significantly.

This discussion is highly relevant to evaluate genetic engineering. As genetic engineering projects normally involve type 1 instances of the unnatural, they are morally suspect. In spite of this, the example of Huntington’s disease shows that this does not imply that genetic engineering is necessarily wrong. However, if a type 2 or type 3 intervention existed that could enhance the quality of life of the person in question equally effectively, we ought to prefer it.

Questions raised by this chapter:

1. Why might the question of what is natural be relevant for a discussion of genetic engineering?

2. Do you agree with the view that there are gradations of artificiality? Justify your answer.

3. How might differences in degrees of naturalness be morally relevant?

4. How might genetic engineering be used to benefit human health?

5. How might genetic engineering undermine human health?

6. Do you approve of the creation of Herman the bull?

7. Would you approve of using genetic engineering on a human embryo to correct the gene that predisposes for Huntington’s disease, if such were possible?

8. What do you think of the view that there is nothing new in genetic engineering as nature has engineered itself for a very long time?

9. What do you think of genetic engineering projects that aim at making some non-human animals better models to study human disease?

10. Would you eat genetically engineered plants or animals? Justify your answer.

Chapter 9: Human embryo research in embryonic stem cell and cloning debates

In this chapter, I provide an overview of the views that have been expressed by advisory bodies and members of Westminster Parliament in support of legal developments to allow research on young human embryos in the United Kingdom. While UK law has inspired similar legal reform in many other countries, this chapter shows that the arguments underpinning this framework do not provide a sound basis for the current legal position. My view on the status of the young human embryo is at odds with the views underpinning this framework. Rather than denying the embryo high moral status, I adopt the view that we should consider all human beings to be equal, rather than make the question of what value should be assigned to a human being dependent on how many properties, capacities, or experiences a human being might possess.

Questions raised by this chapter:

1. How would you sum up the moral reasoning underpinning the Human Fertilisation and Embryology (Research Purposes) Regulations 2001, and what do you make of the arguments that were developed to support these?

2. What are the two arguments from potentiality in relation to the status of the young human embryo and do you think that these arguments are sound?

3. What is the argument from capacities in relation to the status of the young human embryo and why do you (dis)agree with this argument?

4. What is the argument from probability in relation to the status of the young human embryo and why do you (dis)agree with this argument?

5. What is the argument from mourning in relation to the status of the young human embryo and why do you (dis)agree with this argument?

6. What is the argument from ensoulment in relation to the status of the young human embryo and why do you (dis)agree with this argument?

7. What policy would you like to adopt in relation to human embryo research? Justify your answer.

8. Would you favour altering the law on human embryo research so that human embryos can be used for research when they are older than 14 days? Justify your answer.

9. What is the relevance of the scientific advances that have been developed on the basis of human embryo research for the ethics of embryo research?

10. Do you agree with laws that allow the creation of human admixed or hybrid embryos? Justify your answer.

Chapter 10: Ethical and legal issues related to the end of life

In this chapter I consider when treatment might be futile, whether it may ever be appropriate to withhold or to withdraw treatment from a patient, whether pain relief that might hasten one’s death should be taken or provided, whether physician-assisted suicide and euthanasia should be legal options, and how health care professionals might cater for the spiritual needs of patients. These issues are difficult and emotionally challenging. In a culture where ageism is challenged and where speaking about death and the dying process might be more widely accepted, there is a good chance that people may feel better able to cope with the prospect of dying and with making decisions that promote well-being when it is hard to do so.

Questions raised by this chapter:

1. How might health care professionals go about determining whether or not a treatment is futile?

2. How might a health care professional justify withdrawing treatment from a patient?

3. Do you agree with the withdrawal of treatments for patients who are in a persistent vegetative state? How might you try to justify your answer?

4. Do you think that there are aspects of care that should never be withheld or withdrawn from a patient, and if so, which aspects? How would you justify this?

5. What do you think of the view that English law on assisting suicide discriminates against disabled people?

6. Do you think assisting suicide should be allowed? Justify your answer.

7. Do you think euthanasia should be allowed? Justify your answer.

8. If assisting suicide were allowed, what do you think should be the conditions?

9. If euthanasia were allowed, what do you think should be the conditions?

10. Do you think there may be situations where those who aid in the suicide of a patient should (not) be prosecuted?

11. How might the doctrine of double effect be applied to the provision of pain relief to a dying patient?

12. Do you agree with the view that withdrawing artificial hydration and nutrition from a terminally ill patient should always be accompanied by terminal sedation?

13. What should health care professionals do when the parents of competent children demand life-saving treatment that the child refuses?

14. What do you think of the view that good palliative care is always preferable to treating the patient in order to end their life?

15. How might health care professionals optimally look after the spiritual needs of patients who adopt Christianity, Islam, Hinduism, Sikhism, Judaism, or Buddhism?

What do others think?

Julia Hynes, Kent and Medway Medical School: “In my view this is a book which will prove to be of benefit to healthcare students nationally, and even internationally, as it teaches the student how to think in a critical fashion. It may be used as a core text or one of two core texts. With prescribed reading set and with tutorial facilitation, it will enable the student to create and analyse philosophical arguments through the question prompts at the end of each chapter.”

Artificial intelligence, health care, and ethics. A conference at Newcastle University

Location: Old Library Building, room 2.22

Time: 8 September 2023

11.00-11.30 | Stephen McGough and Jan Deckers | Introduction
11.30-12.00 | Adam Brandt & Spencer Hazel | Ghosting the shell: exorcising the AI persona from voice user interfaces
12.00-12.30 | Emily Postan | Good categories & uncanny kinds: ethical implications of health categories generated by machine learning
12.30-13.30 | Lunch break
13.30-14.00 | Emma Gordon | Moral expertise and Socratic AI
14.00-14.30 | Shauna Concannon | Living well with machines: critical perspectives on communicative AI
14.30-15.00 | Angus Robson | Moral communities in healthcare and the disruptive potential of advanced automation
15.00-15.30 | Andrew McStay | Automating Empathy and the Public Good: The Unique Case of Health
15.30-16.00 | Koji Tachibana | Virtue and humanity in an age of symbiosis with AI
16.00-16.30 | Jamie Webb | Trust, machine learning, and healthcare resource allocation
16.30-17.00 | Jan Deckers | Closure
18.00 onwards | Social dinner for presenters


Stephen McGough and Jan Deckers: Introduction

As a scholar researching the areas of machine learning, big data, and energy efficient computing, Stephen will provide a sketch of what AI actually is. Jan will introduce the day, looking back at the presentations at the first event at Chiba University, and looking ahead at the presentations that will follow today.


Adam Brandt & Spencer Hazel: Ghosting the shell: exorcising the AI persona from voice user interfaces

Voice-based Conversational User Interfaces (CUIs or VUIs) are becoming increasingly ubiquitous in our everyday interactions with service providers and other organisations. To provide a more naturalistic user experience, Conversation Designers often seek to develop in their conversational agents what has been glossed as humanlikeness, namely design features that serve to anthropomorphise the machine. Guidelines suggest, for example, that designers invest time in developing a persona for the conversational agent, selecting an appropriate persona for the target user, and adapting conversational style to the user (for discussion, see Murad et al., 2019). Such anthropomorphic work may somewhat mitigate the challenge of requiring users to behave as if they are speaking with a human rather than at a machine. However, attempting to trigger a suspension of disbelief on the part of the user, allowing them to entertain the idea that what they are speaking at has some kind of sentience, raises a number of ethical concerns, especially where users may end up divulging more than they would like.

This presentation reports on work carried out by the authors in partnership with healthcare start-up Ufonia. The company has developed a voice assistant, Dora, for carrying out routine clinical conversations across a number of hospital trusts (see Brandt et al. 2023). Using insights and methods from Conversation Analysis, the collaboration has worked to ensure that these conversation-framed user activities achieve the institutional aims of the phone call, while at the same time providing an unchallenging experience for the user. Conscious of the ethical dimension to which this gives rise, we discuss here how we help to pare down the importance of agent persona (managing the risk of users treating the agent as a person), while curating a more natural user experience by modelling the speech synthesis on the normative patterns of everyday (institutional) talk.


Brandt, A., Hazel, S., Mckinnon, R., Sideridou, K., Tindale, J. & Ventoura, N. (2023) ‘From Writing Dialogue to Designing Conversation: Considering the potential of Conversation Analysis for Voice User Interfaces’, [Online], 2023.

Murad, C., Munteanu, C., Cowan, B.R. & Clark, L. (2019) Revolution or Evolution? Speech Interaction and HCI Design Guidelines. IEEE Pervasive Computing. 18 (2), 33–45.


Emily Postan, Good categories & uncanny kinds: ethical implications of health categories generated by machine learning

One area of particular interest in health applications of machine learning (ML) is its use not only in detecting disease or disease risk factors, but also in generating new or refined diagnostic, prognostic, risk, and treatment response categories. Deep learning (DL), a powerful subset of ML, potentially offers marked opportunities in this area. This paper interrogates the ways in which uses of DL in this way may result not only in the classification of images, risks, or diagnoses, but also in reconfigured and novel ways of categorising people. It asks why categorisation by AI – and deep learning algorithms in particular – might have ethically significant consequences for the people thus categorised. The paper approaches these questions through the lens of philosophical treatments of ‘human kinds’. It asks to what extent AI-generated categorisations function, or could come to function, as human kinds. More specifically, it seeks to characterise how human kinds predicated on machine learning algorithms might differ, and differ in ethically significant ways, from existing human kinds that come about through social and historical processes and practices. It explores the potential impacts of AI-generated kinds on members’ experiences of inhabiting and exercising agency over their own categorisations. As such, this paper pursues a line of ethical inquiry that is distinct from, though complementary to, more familiar concerns about the risks of error and discrimination arising from AI-enabled decision-making in healthcare. The paper concludes that while the impacts of machine learning-generated person-categorisations may not be unequivocally negative, their potential to alter our identity practices and group memberships needs to be weighed against the assumed health dividends and accounted for in the development and regulation of trustworthy AI applications in healthcare.


Emma Gordon: Moral Expertise and Socratic AI

A central research question in social epistemology concerns the nature of expertise and the related question of how expertise in various domains (epistemic, moral, etc.) is to be identified (e.g., Goldman 2001; Quast 2018; Stichter 2015; Goldberg 2009). Entirely apart from this debate, recent research in bioethics considers whether and to what extent cognitive scaffolding via the use of artificial intelligence might be a viable non-pharmaceutical form of moral enhancement (e.g., Lara and Deckers 2020; Lara 2021; Gordon 2022; Rodríguez-López and Rueda 2023). A particularly promising version of this strategy takes the form of ‘Socratic AI’ — viz., an ‘AI assistant’ that engages in Socratic dialogue with users to assist in ethical reasoning non-prescriptively. My aim will be to connect these disparate strands of work in order to investigate whether Socratic-AI assisted moral enhancement is compatible with manifesting genuine moral expertise, and how the capacity of Socratic AI to improve moral reasoning might influence our criteria for identifying moral experts.


Shauna Concannon: Living well with machines: critical perspectives on communicative AI

Artificial Intelligence (AI) is having an ever-greater impact on how we communicate and interact. Over the last few years, smart speakers, virtual personal assistants and other forms of ‘communicative AI’ have become increasingly popular, and chatbots designed to support wellbeing and perform therapeutic functions are already available and widely used. In the context of health and social care, attention has begun to focus on whether an AI system can perform caring duties or offer companionship. As machines are positioned in increasingly relational roles, what are the societal and ethical implications, and should these interactions be modelled on human-human interaction?


In this talk, I will review recent developments in communicative AI, ranging from empathetic chatbots to storytelling applications tailored for children. Through this exploration, I will examine key risks and the potential for harm that these innovations may entail, and consider the implications that arise due to the ontological differences between human-machine and human-human communication.  Finally, I will consider what is required to guide more intentional design and evaluation of these systems, with a more central focus on interaction, moral care and social conduct.


Angus Robson: Moral communities in healthcare and the disruptive potential of advanced automation

The importance of healthcare organizations as moral communities is increasingly supported in recent research. Advanced automation has the potential to impact such communities both positively and negatively. This study looks at one specific aspect of such potential impacts, here termed second-person displacement. Drawing on Stephen Darwall’s idea of the second-person standpoint, interpersonal events are proposed as a basic condition of the moral community, one that is threatened by automation under certain conditions of second-person displacement. This threat occurs in two particular respects: pervasiveness and humanness. If these two kinds of threat are understood and acknowledged, they can be mitigated by good design and planning. Some principles are suggested to assist strategies for responsible management of increasingly advanced automation, including protection of critical contact, building the resilience of the moral community, and resisting deception.


Andrew McStay: Automating Empathy and the Public Good: The Unique Case of Health

Once off-limits, the boundaries of personal space and the body are being probed by emotional AI and technologies that simulate properties of empathy. This is occurring in worn, domestic, quasi-private and public capacities. The significance is environmental, in that overt and ambient awareness of intimate dimensions of human life raises questions about human-system interaction, privacy, security, civic life, influence, regulation, and moral limits regarding the datafication of those dimensions. This talk will discuss the historical context of these technologies, current technological development, existing and emergent use cases, and whether remote profiling of emotion and datafied subjectivity is acceptable and, if so, on what terms. Problematic in method and application, especially in commercial uses, automated empathy is observed to raise unique questions in healthcare. Tabling some of the legal and moral concerns, the talk will also flag citizen opinion obtained by the Emotional AI Lab (which McStay directs) on the use of automated empathy and emotional AI in healthcare. This will be argued to be instructive for debate on the moral limits of automating empathy.


Koji Tachibana: Virtue and humanity in an age of symbiosis with AI

The social implementation of AI will accelerate in the near future. In such a society, we humans will communicate, work and live together with AI. What is human virtue or humanity in such a symbiotic way of living with AI? Examining several discussions of possible social implementations, I consider this question and argue that symbiotic life with AI is an excellent opportunity to understand humanity because we can perceive an absolute difference between humans and AI.


Jamie Webb: Trust, machine learning, and healthcare resource allocation

One use of machine learning in healthcare is the generation of clinical predictions which may be used in healthcare resource allocation decisions. For example, a machine learning algorithm used to predict post-transplant benefit could be involved in decisions regarding who is prioritised for transplant. When conducting interviews with patients with lived experience of high stakes algorithmic resource allocation, patients have occasionally expressed to me that they trusted how prioritisation was determined because they trusted the clinical staff involved in their care. This trust may be grounded in experience of compassionate and competent care. However, these demonstrations of trustworthiness may have nothing to do with the way algorithmic resource allocation decisions are made. Is this a problem? This presentation will explore this question with the use of philosophical theories of trust and trustworthiness, considering the particular challenges machine learning might bring to patient trust and the clinical staff-patient relationship.

Conference: Artificial Intelligence and Ethics: Medicine, Education, and Human Virtues

A conference on 26 May 2023 at Nishi-Chiba Campus, Chiba University, Japan

13:00-13:10 Opening Remarks
Koji Tachibana, PhD (Faculty of Humanities, Chiba University, Japan)

13:10-13:50 “Ethical challenges and opportunities related to the use of AI in health care”
Jan Deckers, PhD (Faculty of Medical Sciences, the University of Newcastle, UK)

Abstract: Health care decision-making is flawed as health care professionals and patients do not always know all the facts that are clinically relevant, may not be able to interpret facts, and may not be able to evaluate their moral significance. Whilst AI systems may help with health care decision-making by gathering more relevant facts and by helping with interpreting and evaluating data, health (care) may also be undermined by AI. This presentation sketches some significant hurdles that must be overcome to ensure that AI systems promote rather than undermine health (care). These hurdles include rational and emotional ontological confusion about the nature of AI, technological deficiencies, and problems related to how AI systems are being used.

13:50-14:30 “Ethical issues regarding the application of AI to healthcare settings”
Eisuke Nakazawa, PhD (Faculty of Medicine, the University of Tokyo, Japan)

Abstract: Implementation of artificial intelligence in psychiatric care will bring innovations that contribute to patient well-being by reducing the burden on physicians and other healthcare professionals and by improving the accuracy of diagnoses and risk predictions. On the other hand, from an ethical standpoint, AI development research needs to include efforts to review opt-out consent from the perspective of the right to control one’s own information, with dynamic consent in scope to ensure the autonomy of research participants. Medical-technical communication in which consensus is formed in advance between research developers, health care providers, and the public, including patients, is necessary, and this is prominently required for the issue of secondary and incidental findings. Issues such as the burden on research participants due to false positives, respect for the right of research participants not to be informed of their results, and the social risk of false positives converge on the issue of how to communicate secondary and incidental findings to research participants. It is not unreasonable to be cautious about returning secondary and incidental findings, especially when adequate communication cannot be ensured.

14:30-15:10 “Exploring the ethics of smart glasses: Navigating the future of wearable tech”
Semen Trygubenko, DPhil (Dodrotu Limited, UK)

Abstract: The purpose of this study is to provide an overview of the ethical issues related to the use of smart glasses in order to facilitate decision-making and the formation of knowledge and norms. We identify a wide range of ethical issues, including privacy, safety, justice, change in human agency, accountability, responsibility, social interaction, power, and ideology. The use of smart glasses is expected to impact individual human identity and behavior as well as social interaction, which must be taken into account when developing, deliberating, deciding on, implementing, and using smart glasses. We consider the issues that are applicable generally as well as those that arise in the context of the remote-calling functionality available in the Ziru AV smart glasses prototype.

15:10-15:30 Tea/Coffee Break

15:30-16:10 “Can ChatGPT serve as a clinical ethics consultant?”
Yasuhiro Kadooka, MD, PhD (Faculty of Life-Sciences, Kumamoto University, Japan)

Abstract: Generally, healthcare professionals should make a well-balanced value judgment by consulting with colleagues or specialists when confronted by ethically uncertain situations. Currently, some professionals may instead simply ask a conversational large language model AI. This descriptive research aimed to explore the performance of ChatGPT in clinical ethics consultation, an advisory service to support healthcare professionals and patients in identifying, analyzing and resolving ethical dilemmas/issues of daily care. Human clinical ethics consultants participated and asked ChatGPT for advice on an ethically appropriate action in a hypothetical vignette. All conversations between the consultants and ChatGPT were recorded and analyzed qualitatively. Tentatively, this study emphasizes that a conversational large language model AI can instruct on general principles/norms of clinical ethics, but may fail to make a holistic assessment of individual patients. The analysis is still ongoing. Detailed findings will be presented at the session.

16:10-16:50 “Upgrading feelings”
Jasmin Della Guardia, MS (Graduate School of Humanities, Chiba University, Japan)

Abstract: Fiction tells us stories about how AI can improve humans by taking them to the next level, e.g. with Human Brain Interfaces as in Neon Genesis Evangelion or Iron Man, or the Borg in Star Trek. Such fictions portray an exaggerated duality of AI, making us either superhuman or evil juggernauts. However, fiction meets reality, because every day AI merges more and more with our lives, as a tool (AI filters and art) or as an (autonomous) operator (in cars, space travel, and robots, e.g. the space robot “CIMON”, which is meant to cheer up astronauts). The fears and dangers are also real and force a debate, because this technology is changing the way we think about ourselves and also contains human errors. But we are already cyborgs and AI is human too, so we need to discuss the social, psychological, and ethical consequences. To avoid dystopian developments, we need to discuss how enhancing physical ability, attractiveness, creativity, and psychological well-being with AI can make us better people. As an example, we want to examine the influence of AI filters and of interactions with AI as a social other on psychological well-being, the philosophical image of man, and self-perception.

16:50-17:30 “What can humans learn from AI about creativity as an intellectual virtue?”
Ryo Uehara, PhD (Faculty of Informatics, Kansai University, Japan)

Abstract: Creativity has long been an object of consideration in philosophy, especially intellectual creativity as one of the intellectual virtues in virtue epistemology. On the other hand, recent artificial intelligence has shown remarkable creative abilities. Artificial intelligence, being an artifact, cannot be considered to have virtue. Nevertheless, it is expected that humans can learn something about the exercise of creativity as an intellectual virtue from artificial intelligence. This presentation will organize the debate on the creativity of artificial intelligence and clarify the differences from the creativity that can be demonstrated by humans. It will then examine how artificial intelligence can be used to help humans cultivate creativity as an intellectual virtue.

17:30-17:40 Closing Remarks & Announcements

18:20- Social Dinner (T.B.A)

Sponsors: The Great Britain Sasakawa Foundation & JSPS KAKENHI (20H01178)

Organisers: Koji Tachibana and Jan Deckers

What ought to be done to curtail the negative impacts of animal product consumption on the climate crisis, even if COP26 failed to do so?

In the 2015 Paris Agreement, 196 countries pledged to limit global warming to below 2 degrees Celsius, and preferably to 1.5 degrees Celsius, compared with pre-industrial levels. To achieve the latter goal, countries’ total emissions would need to be reduced by some 45% by 2030 compared with 2010 levels. The COP 26 meeting in Glasgow provided an opportunity to develop a strategy to achieve this, but it failed to do so. Consequently, we are not on track to meet the Paris Agreement goals.

Continue reading

What did participants at COP 26 think about the consumption of animal products?

The food system, holistically considered, accounts for around a quarter of all anthropogenic emissions and can contribute greatly to attempts at mitigation due to the great potential of better land management in storing carbon. Recognising the role of agriculture in tackling climate change, the COP 23 meeting, presided over by Fiji in 2017, decided on the Koronivia Joint Work on Agriculture. Despite this initiative, in Glasgow relatively few discussions took place on the role of the food system in relation to climate change, and even fewer considered the important contribution played by the consumption of animal products. In my previous post I pointed out that many non-vegan diets compare poorly with vegan diets in terms of their climate change impacts, as they contribute disproportionately to the release of carbon dioxide, nitrous oxide, and methane, and squander opportunities for carbon sequestration. In this post I report on some discussions that took place at COP 26 in relation to the consumption of animal products.

Continue reading

COP 26 and the moral imperative of a dietary transition

In early November 2021, thousands of people came together in Glasgow, at the 2021 United Nations Climate Change Conference, more commonly known as COP 26, to develop work on the 2015 Paris Agreement. The central goal of this agreement was to avoid driving up temperatures by more than 1.5° C relative to the pre-industrial level. This means that average emissions, measured in carbon dioxide equivalents per person annually, should be no more than about 2 tonnes. As average emissions are currently more than twice that, we are a long way from that goal.

Continue reading