In 2024, I travelled to Joetsu University of Education, which is associated with an elementary school. In this school, I witnessed a class on moral education, delivered by the teacher Ippei Takahashi. It was a lively class that discussed the fate of a nest of great tit chicks who had been raised in a nestbox that the school had made and put up on the campus.
One night a predator had entered the campus and taken all the chicks out of the nestbox. This caused a great deal of sadness amongst the school children. Ippei Takahashi engaged with the children in a lively discussion about the moral issues raised by the fate of the chicks.
The following reflections (‘Part 2’) do not aim to replace Ippei’s class, but to complement what he taught. I hope that this may be of some use to those who help children to reflect on these as well as on other issues related to relationships with nonhuman animals. The text was translated from English into Japanese by 吉場温加 (Yoshiba Nodoka) from 同志社大学 (Doshisha University).
Moral education: the value of the lives of great tit chicks. Part 2.
道徳教育:シジュウカラのヒナの価値。パート2
Surely, it was a sad fact that the chicks in the nestbox built by the children had been eaten by a predator. For the predator, however, it may have been a joyous experience to eat the chicks. The question that, in my view, should form the core of a class on moral education is what the children ought or ought not to do in the future. The children seemed to agree with this, as their sadness at the loss of the chicks may have been caused, at least in part, by the question of whether they might have been able to prevent it.
Sadly, these chicks can no longer be saved. Future chicks might be saved, and the question is whether the children ought to try to do so. I can see several possible options here:
1. The nestbox should be destroyed, as great tits might be better off making nests in nature.
2. The nestbox should be kept in place as it is.
3. The nestbox should be redesigned so that a predator is less likely to be able to eat the chicks.
4. The nestbox should be redesigned so that a predator is more likely to be able to eat the chicks.
The children could explore all these options through class discussion. They might develop arguments for and against each of these four options, before coming to a conclusion. They might then explore the perspective of the teacher in relation to these options. The perspective that follows is the one developed by Jan Deckers, who observed the children’s first moral lesson about the value of life, prompted by the loss of these chicks.
In relation to the first option, it must be recognised that great tits have been harmed by human activities as human beings have altered the landscape greatly, for example by building houses and roads. Some species may find it hard to adapt to living in a world that has been radically transformed by human beings. Some species may be able to cope better than others.
Surely, it is our duty to help those species who struggle because of human encroachment on natural places: for the sake of the individuals who belong to such species, for the benefits that other animals may gain from the presence of great tits, and for the value that human beings attach to species conservation. If great tits struggle because of the human transformation of landscapes, destroying the nestbox would not seem to be a good option, as it might lead to great tits struggling even more. This amounts to a ‘prima facie’ (provisional) case for the second option.
However, the third option must also be considered: whilst great tits may be harmed by the human construction of roads, houses, and so on, the construction of nestboxes might harm them even more if such boxes are not built well. It is conceivable that predators, for example snakes or large birds, may be more likely to prey upon chicks who are born in nestboxes than upon those who are born inside natural nests made entirely by the birds themselves. This must be carefully considered. Children may thus decide to consider various things that they might do to construct better nestboxes, for example making them deeper and reducing the size of the entrance hole. They might also hang them in less conspicuous places.
However, the final option must also be considered. Perhaps we should help the chicks’ predators even more than the chicks? We might do so by redesigning the nestbox so that predators are even more likely to eat the chicks. This might seem like a very cruel option, but it is worthwhile for children to consider that people support the lives of predators in many other ways. For example, people who keep cats may feed them meat from animals who have been bred in order to be killed, or from animals who have been killed for the purpose of feeding cats, for example fish who swim in the sea and who are caught to be turned into cat food. Similarly, people may breed mealworms in order to be killed and put on bird tables in their gardens.
I disapprove of these actions. Whilst it may be appropriate to give ‘obligate carnivores’ (animals who, by natural necessity, must eat other animals in order to survive) the chance to eat other animals, I do not think that, in most circumstances, it is appropriate to supply them with meat from other animals or to make it easy for other animals to be preyed upon by them. Exceptions to this rule may be situations where the obligate carnivore belongs to a species that is threatened with extinction, or where the obligate carnivore could be fed with meat from species whose experiences might be quite basic, for example mealworms or maggots. Arguably, their lives may be less valuable than the lives of animals who might be thought to have more complex experiences, for example animals who have (larger) brains.
Rather than making it easy for predators to find vulnerable chicks, it may also be better, in situations where the killing of an animal to feed a predator can be justified, to kill the prey animal quickly in order to avoid unnecessary suffering.
The cultivation of lab-grown meat might also be an appropriate way to try to help such species. Lab-grown meat consists of animal cells that are extracted from animals whilst they carry on living and that are grown inside petri dishes. The benefit of this relatively new technology, which is still being developed, is that no animals would need to be killed to produce meat in this way.
To conclude, I do not think that it was wrong to build the nestbox as great tits might benefit from our help, given that we have encroached on their natural habitats. If we had not encroached on their natural habitats, it might have been better not to build these boxes, in order to avoid wild animals becoming dependent on our help for their survival. Not building the nestbox would also avoid the risk of the nestbox not being designed well to protect the birds.
If the children can be reasonably confident that they might be able to rebuild the nestbox so that any future chicks may be less likely to be preyed upon, there would be good reason to redesign it. If such confidence is lacking, it might be better to leave the nestbox as it is, but to consider placing it in an area where any future chicks might be less likely to be preyed upon.
The children should also consider how their actions impact on the lives of other animals in other ways. Some children may eat other animals. Could the children be eating other things, for example vegetables and fruits, so that they may not need to consume animals? If people stopped eating animals as well as products derived from their bodies, such as milk, cheese, and eggs, we would no longer need to harm animals, for example by killing them and by taking their milk and eggs, in order to feed ourselves.
Whilst vegan diets are not yet common in Japan, many people in many other countries have adopted them. People are able to thrive on vegan diets, so the consumption of products derived from the bodies of animals is not necessary. Perhaps the time has come for veganism to become more mainstream in Japan as well? The children should also question other ways in which animals are treated in the school, for example the ways in which the alpacas are being handled and the ways in which the turtle is being kept (who may be living in a very unnatural environment).
This learning resource was developed by Jan Deckers with advice from Eisuke Nakazawa, who wrote the Japanese translation. The Great Britain Sasakawa Foundation provided the funding. This learning material was developed primarily to help students, particularly those in various health care disciplines, to think through some of the moral dilemmas associated with the use of artificial intelligence (AI) in health care. It may also appeal to many other students and scholars. You can work through this resource at your own pace; it may take around three hours. The text is interspersed with moments of reflection where you might like to pause and think for a while, before moving on.
1. Introduction / はじめに
This learning resource has been developed to stimulate ethical discussion of how health care might be improved and undermined by the development of artificial intelligence (AI) and how AI systems ought to be designed and used to promote health care. While recognising that all biological organisms have interests in health, the focus here is primarily on human health care.
2. How to ensure that AI for health care is evidence-based? / ヘルスケア向けAIがエビデンスに基づいていることをどのように保証するか?
Those who design AI systems for health care should use evidence to develop such systems, but they may not always be willing to do so. While disease prevention and health promotion should, arguably, guide the development of AI in health care, AI designers are not always motivated primarily by these goals, due to conflicts of interest.
Image 1: CC BY-SA 3.0; Conflict Of Interest by Nick Youngson; Alpha Stock Images; https://www.picpedia.org/keyboard/c/conflict-of-interest.html
Companies that design AI systems for health care know that they should take some interest in health, as their systems would be highly unlikely to be used if they undermined health. However, systems might be designed to promote the health of some at the cost of the health of others. For example, by being designed in such a way that users are likely to buy particular products, an AI system might significantly advance the financial interests of (sponsors of) AI companies at the expense of many others (Strickland 2019).
Even where conflicts of interests are eliminated, what we class as ‘evidence’ will inevitably be biased by the particular conceptions of truth that we all have. An AI designer might, for example, be biased either consciously or unconsciously by the assumption that our understanding of medicine should be based largely on scientific data derived from the study of male bodies, thereby ignoring the possibility that this science may not apply to the bodies of others. The upshot might be an AI system that performs well for males, but less well for others. Such gender bias is common both in medicine and in AI systems.
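To make this concrete, the following sketch shows how a model trained mostly on data from one group can end up performing worse for another. This is a minimal, hypothetical illustration, assuming Python with scikit-learn; the feature, thresholds, and group proportions are invented and do not describe any real medical system.

```python
# Minimal sketch of how skewed training data can yield a model that
# performs worse for an under-represented group. Synthetic data only;
# all variable names and effect sizes are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_patients(n, male_fraction):
    """Simulate patients whose disease marker behaves differently by sex."""
    male = rng.random(n) < male_fraction
    marker = rng.normal(0.0, 1.0, n)
    # Assumption: the marker level that signals disease differs by sex.
    disease = np.where(male, marker > 0.5, marker > -0.5).astype(int)
    return marker.reshape(-1, 1), disease, male  # sex itself is NOT a feature

# Training set dominated by male patients (90%).
X_train, y_train, _ = make_patients(5000, male_fraction=0.9)
model = LogisticRegression().fit(X_train, y_train)

# Balanced test set, evaluated separately per group: the learned decision
# threshold sits near the male one, so female patients are misclassified
# far more often.
X_test, y_test, male_test = make_patients(5000, male_fraction=0.5)
pred = model.predict(X_test)
print("accuracy, male patients:  ", (pred[male_test] == y_test[male_test]).mean())
print("accuracy, female patients:", (pred[~male_test] == y_test[~male_test]).mean())
```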
The important thing is to separate unjustifiable from justifiable bias, and to avoid discarding, without serious critical scrutiny, sources that are biased by cultural traditions different from one’s own. Sometimes, unjustified bias cannot be eliminated without causing a greater injustice. In such situations, one must try to recognise (the possibility of) bias, and try to ensure that the injustice one tolerates is the lesser of the two evils (Loughlin 2008).
Image 2: CC BY-SA 3.0; Evidence based by Nick Youngson; Pix4free; https://pix4free.org/photo/31575/evidence-based.html
Reflection:
How would you separate evidence from falsehood?
考察の時間
エビデンスと誤情報をどのように区別しますか?
An AI system may recommend that a London bus should be red because it was ‘trained’ on images of London buses, which are red. Similarly, it might recommend that Wenxin Keli should be used to treat patients with atrial fibrillation on the basis of data showing that many patients with atrial fibrillation take Wenxin Keli. However, it is important to separate correlation from causation. The fact that two things are frequently correlated does not imply that one causes the other. For something to count as evidence in health care, one must establish that taking a particular substance or receiving a particular treatment causes a particular outcome (Deckers 2023: chapter 6, particularly 6.4-6.6).
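The following sketch illustrates the correlation/causation trap with invented numbers, assuming Python with NumPy. A hidden confounder (disease severity) drives both the taking of a drug and poor outcomes, so the drug looks strongly associated with poor outcomes even though it does nothing at all.

```python
# Minimal sketch of why correlation is not causation. Synthetic data only:
# disease severity (a confounder) drives both drug use and poor outcomes,
# so drug use correlates with poor outcomes although the drug has no effect.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

severity = rng.normal(0, 1, n)                       # hidden confounder
takes_drug = (severity + rng.normal(0, 1, n)) > 0    # sicker patients take the drug
bad_outcome = (severity + rng.normal(0, 1, n)) > 0   # sicker patients fare worse

# Naive comparison: outcome rate among drug-takers vs non-takers.
print("bad outcomes | takes drug:", bad_outcome[takes_drug].mean())
print("bad outcomes | no drug:   ", bad_outcome[~takes_drug].mean())
# The gap is a real correlation, but a randomised trial (assigning the drug
# at random, independently of severity) would show no causal effect.
```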
3. How to navigate the black box of AI? / AIのブラックボックス問題にどう対処するか?
Even where AI systems may be reliable, they may not be transparent. Some talk about the black box of AI to refer to this lack of transparency (Von Eschenbach 2021). Some AI systems are deliberately designed in that way, as the companies that develop them may be keen to protect their intellectual property: they may not like to give away their trade secrets. Although this is understandable, health care workers face a tricky dilemma here: should they trust AI systems when it is not clear how their outputs (e.g. treatment recommendations) are produced?
Sometimes, this lack of explainability is not due to the company wanting to hide how its systems work, but due to an inherent feature of some systems, for example those that rely on ‘machine learning’. Here, AI systems are fed with huge amounts of data, and the machine ‘learns’ to find patterns in the data that may help health care workers to identify diseases or to make treatment decisions. In the UK, a TBS (‘transplant benefit score’) model is currently being used to help health care workers to decide which patients to allocate organs to (Lee et al. 2023). A new AI system has been shown to be able to do a better job, at least if we define ‘better’ in terms of organs lasting longer. However, the problem is that those who designed it do not understand how it manages to do this (Wingfield et al. 2020). A patient might ask why they were not given priority over another patient in transplant decisions.
一部のAIシステムにおける説明不可能性は、企業が仕組みを隠そうとしているのではなく、「機械学習」などのシステム固有の特徴によるものです。この場合、AIシステムには膨大なデータが投入され、機械がそのデータのパターンを「学習」することで、疾病の特定や治療方針の決定に役立てます。たとえば、イギリスでは現在、TBS(臓器移植の利益スコア)モデルが、臓器をどの患者に割り当てるかを決定するために使用されています(Lee et al., 2023)。新しいAIシステムは、少なくとも臓器の生着持続期間を基準とする場合、これより優れた結果を出せることが示されています。しかし、問題は、設計者自身がそのAIシステムがどのようにその結果を出しているのか理解していない点です(Wingfield et al., 2020)。このような状況で、移植の優先順位がなぜ他の患者よりも低いのかを尋ねられた場合、患者にどのように説明するのでしょうか?
Reflection:
Reflect for a moment on what a health care worker might tell the patient in this situation.
考察の時間
このような状況で医療従事者が患者に何を伝えるべきか考えてみてください。
The health care worker would need to reply that they do not understand the output either. Rather than health care workers relying blindly on AI systems, research (e.g. randomised controlled trials) might be carried out to check whether a particular AI system makes better decisions than other ways in which decisions could be made. Where such trials have been carried out, a health care worker might say to a patient that they do not understand the reason behind the decision, but that a study has shown that the machine performed better than a team of people deciding without the help of AI. They would also need to explain to the patient what criterion or criteria had been used to define ‘better’. It might mean, for example, either that organs last longer or that those who receive the transplanted organs suffer fewer side-effects.
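To see why even the designers may be unable to explain an output, consider the minimal sketch below, assuming Python with scikit-learn. It is not the TBS model or any real clinical system; the features and labels are invented. The fitted model can make an accurate recommendation, yet no single human-readable rule accounts for it.

```python
# Minimal sketch of the 'black box' problem. This is NOT the UK transplant
# benefit score model; the data are invented purely for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 2000

# Invented patient features, e.g. age, a lab value, a donor-match score.
X = rng.normal(size=(n, 3))
# Invented ground truth containing interactions the model must discover itself.
y = ((X[:, 0] * X[:, 2] + np.sin(X[:, 1])) > 0).astype(int)

model = RandomForestClassifier(n_estimators=200).fit(X, y)

patient = rng.normal(size=(1, 3))
print("recommendation:", model.predict(patient)[0])
print("probability:   ", model.predict_proba(patient)[0])
# The forest aggregates votes from 200 trees, each splitting on learned
# thresholds. No single human-readable rule explains this one prediction,
# which is why a clinician may be unable to answer 'why not me?'.
```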
4. How to use AI data responsibly? / AIデータをどのように責任を持って使用するか?
By using AI systems, health care workers are also able to gather far more patient data than before, to analyse it better, and to store it more easily for a long period of time. The handling of AI data comes with significant moral risks, including inadvertent data loss, the deliberate sharing of data, and the inappropriate use of data for research. Even if confidentiality is an important value, it may need to be traded off against other values, for example beneficence (well-being). The classical model of consent is that patients should provide consent for specific therapeutic goals or research projects. This might be questioned, as many ascribe value to the fact that AI systems can pool data gathered from multiple sources and store it with greater ease, and for a much longer time. Data that may not be useful today may become useful in the future. The usage of data for as yet unspecified research projects, possibly a long time in the future, would be problematic if we hold on to the idea that patients should be allowed to consent only to specified research projects.
Imagine that an AI system helps you to find out that a particular patient is at greater risk of developing cancer compared to the average patient, for example through the AI system analysing particular images of the patient’s tissues and finding that some are pre-cancerous. The patient does not know about this.
Image 4: CC0 1.0; cancer cells as viewed under a microscope; rawpixel.com
Reflection:
Reflect for a moment on what you might do with this newly acquired information.
考察の時間
この新たに得られた情報をどのように扱うべきか考えてみてください。
This finding might be highly relevant for the patient if the patient could be screened for cancer more regularly because of it, and if early detection of cancer might help in the treatment. This is not always guaranteed. For example, thanks to recent technological breakthroughs in AI, more patients can be diagnosed with thyroid cancer. In spite of this, prognoses for patients suffering from thyroid cancer have not improved significantly (Ho et al. 2015; van Deen et al. 2023). Moreover, some cancers can resolve without any treatment, a phenomenon known as spontaneous remission (Radha and Lopus 2021). While scientists think that this is rare, if technology to diagnose cancer improves, more cancers that might resolve without any treatment may be identified. Before health care workers decide to share information with patients, it is therefore important to assess whether it might be better for the patient not to receive the information, particularly when it might make them anxious.
この発見は、患者にとって非常に重要である可能性があります。たとえば、この情報をもとに患者が定期的にがんスクリーニングを受けることで、早期発見が治療に役立つ場合があります。ただし、それが必ずしも保証されるわけではありません。たとえば、AIの最近の技術的進歩により、より多くの患者が甲状腺がんと診断されるようになりました。しかし、それにもかかわらず、甲状腺がん患者の予後は劇的には改善していません(Ho et al., 2015; van Deen et al., 2023)。さらに、一部のがんは治療なしに自然と消失する場合もあります。この現象は「自然寛解」として知られています(Radha and Lopus, 2021)。科学者たちはこれが稀であると考えていますが、がん診断技術が向上することで、治療を必要としないがんがより多く発見されるかもしれません。そのため、医療従事者が患者と情報を共有する前に、その情報が患者にとって本当に有益かどうかを慎重に評価することが重要です。特に、それが患者を不安にさせる可能性がある場合にはなおさらです。
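The thyroid cancer example can be made concrete with some invented arithmetic. The sketch below (Python, purely hypothetical numbers) shows how detecting indolent cancers that would never have caused harm inflates the survival rate even though exactly the same number of patients die, the ‘spurious improvement’ that Ho et al. (2015) describe.

```python
# Minimal arithmetic sketch of how better detection can inflate survival
# statistics without helping anyone. All numbers are invented assumptions.
aggressive = 100           # detected cancers that can cause death
indolent = 300             # cancers that would never cause harm
survivors_aggressive = 20  # patients with aggressive cancer who survive

# Before: only aggressive cancers are detected.
survival_before = survivors_aggressive / aggressive

# After: improved diagnostics also detect the indolent cancers,
# all of whose patients survive by definition.
survival_after = (survivors_aggressive + indolent) / (aggressive + indolent)

print(f"survival rate before: {survival_before:.0%}")  # 20%
print(f"survival rate after:  {survival_after:.0%}")   # 80%
# Survival 'improved' from 20% to 80%, yet exactly the same number of
# patients died: the extra diagnoses alone changed the statistic.
```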
5. What are the ethical issues related to the use of carebots? / ケアボット(介護ロボット)の使用に関する倫理的問題は何か?
In a rapidly ageing society, for example in Japan, relatively few young people must care for an increasing number of older people. Care robots (or ‘carebots’) have been designed to take over some of this care work (Robson 2019). This raises some dilemmas.
Image 5: CC-BY 2.0; Marco Verch; Älterer Mann sitzt neben humanoidem Roboter in einem Café; https://ccnull.de/index.php/foto/aelterer-mann-sitzt-neben-humanoidem-roboter-in-einem-cafe/1102624
Carebots may be able to provide good care, but it does not seem appropriate to hold the view that they could replace the care that human beings can provide. To lift someone who is very heavy in and out of bed, for example, a carebot might be able to do a very good job. Indeed, it might be able to do a much better job than could be done by a human carer where the latter would lack the physical strength possessed by the AI device. However, a carebot would not seem to be able to provide human care. How could essential attributes of care, such as compassion, love, and tenderness, be shown by an AI device? An AI system might be able to simulate these traits, which may lead the human being who is being cared for to think that they receive real care. Potentially, this might lead to misconceptions about what ‘AI care’ is and about what ‘human care’ is.
6. Is the use of AI for health care sustainable? / ヘルスケア向けAIの使用は持続可能か?
The feeding of large amounts of data into AI systems also raises other issues, for example issues related to the notion of sustainability. This notion has both a social and an ecological component.
Socially, the question must be asked whether the work currently being carried out in the AI sector can and, more importantly, ought to be sustained. The development of ‘machine learning’, for example, relies on large amounts of data being fed into computer systems, which requires a lot of time and may not be the most interesting type of work. It may also not be paid well, raising the questions whether some people are being exploited to develop AI and what ought to be done about this. Heilinger et al. (2024, p. 206) comment, for example, that ‘the financial gains made from AI-based systems are distributed extremely unequally, as can be shown by comparing the annual financial gains of, say, Amazon’s CEO on the one extreme end and the underpaid labourers working under dangerous and exploitative conditions in the cobalt mines on the other extreme end’.
Ecologically, the development of AI also raises questions as vast natural resources are used to build hardware. This, as well as the development and use of software, requires energy, where much of this energy is generated by the use of dwindling natural resources, emitting toxic and dangerous gases in the process (Bolón-Canedo et al. 2024).
Image 6: CC BY-NC-SA 4.0; Photo by Etienne Girardet on Unsplash; https://unsplash.com/?utm_source=medium&utm_medium=referral
Reflection:
What do you think should be done to ensure that the development and use of AI systems is sustainable?
考察の時間
AIシステムの開発と使用が持続可能であることを保証するために、どのような対策が必要だと思いますか?
One thing that could and should be done is to try to make AI systems more energy-efficient. However, this does not guarantee that the development of AI systems will become more sustainable. In this regard, Wu et al. (2022) point out that ‘despite the significant operational power footprint reduction, we continue to see the overall electricity demand for AI to increase over time — an example of Jevon’s Paradox, where efficiency improvement stimulates additional novel AI use cases’. Similarly, Heilinger et al. (2024) make the point that AI systems in general are unlikely to be sustainable at the present time as long as the socio-economic situation is one that favours permanent economic growth.
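Jevons’ paradox can be illustrated with a little invented arithmetic, sketched below in Python. The efficiency gain and demand growth figures are assumptions for illustration, not data from Wu et al. (2022).

```python
# Minimal arithmetic sketch of Jevons' paradox for AI energy use.
# All numbers are invented assumptions for illustration only.
energy_per_query = 1.0   # arbitrary units, before efficiency gains
queries = 100

before = energy_per_query * queries

# Efficiency halves the energy cost per query...
energy_per_query *= 0.5
# ...but cheaper queries stimulate novel use cases, so demand quadruples.
queries *= 4

after = energy_per_query * queries
print(f"total energy before: {before:.0f}, after: {after:.0f}")
# Output: total energy before: 100, after: 200. Efficiency improved,
# yet overall electricity demand still doubled.
```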
7. Does AI change human self-understandings or identities, and does it do so for better and/or for worse? / AIは人間の自己認識やアイデンティティをどのように変えるのか、そしてそれは良い変化なのか悪い変化なのか?
In spite of this lack of sustainability, Heilinger et al. (2024, p. 210) also claim that ‘AI is likely to increasingly influence and shape human lives in the future’, which raises the question whether it might change human self-understandings or even human nature, and whether it might do so for better or for worse.
Reflection:
Reflect for a moment on how the self-understandings or identities/characters of health care workers and patients who use AI might differ from those of people who do not use AI systems at all.
While it is hard to say how AI might alter the self-understandings or characters of health care workers and patients, it is likely that the use of AI systems alters people in profound ways. For example, the time that a doctor spends looking at a screen competes with the time that the doctor spends communicating with patients, which may affect the self-understandings of both. A patient who uses the internet to look up health-related information may also be transformed by what they read, which may influence how they approach a doctor, for example by enabling them to ask more relevant questions. A patient with a prosthetic limb that incorporates features of an AI system may also alter their understanding of themselves, as their body is, arguably, enhanced by the prosthetic, which enables them to move better.
A more profound change might be brought about by a system that detects levels of dopamine in the body and administers medication to boost dopamine levels in Parkinson’s patients, who may go on to live with fewer symptoms of the disease as a result. An even more significant change might be brought about by an AI system that detects the presence of particular (quantities of) hormones in the body, for example oxytocin, and releases chemicals at appropriate times to restore any imbalances and facilitate pro-social behaviour. The identity of a person who may be quite anti-social might be altered quite significantly if they suddenly become more pro-social because of such a system.
Image 7: CC0; Low Section of Person with Prosthetic Leg; https://www.pexels.com/photo/low-section-of-person-with-prosthetic-leg-9623428/
AI is not the only technology that can profoundly impact human identity. For most of evolutionary history, for example, human beings developed in environments that did not provide artificial light. This raises the question whether we are adapted to living in artificially lit environments, even if such environments have also provided significant benefits. Like artificial light, AI is likely to transform human beings significantly. This raises other interesting questions.
What can be said to be ‘artificial’?
How are different artificial things connected to each other?
How is what is artificial related to what is moral?
You might explore this theme further in Deckers 2023: chapter 8.3.
考察の時間
以下の質問について考えてみてください。
何が「人工的」であると言えるのでしょうか?
さまざまな人工物同士はどのように関連しているのでしょうか?
人工的なものと道徳的なものはどのように関係しているのでしょうか?
このテーマについては、Deckers(2023)の第8章3節でさらに詳しく探求することができます。
8. Do people produce conscious entities when they create AI systems? / AIシステムを作成する際、人々は意識を持つ存在を生み出しているのか?
Another interesting question is whether advances in AI technology might alter the AI systems that are being designed in very significant ways. Some scholars suggest that, at some point in the future, AI systems might perhaps become conscious or have experiences (Llorca Albareda et al. 2024). As consciousness/experience has been associated with (greater) moral status, AI systems might then perhaps no longer be regarded as objects that one can manipulate freely (objects that should be valued for their instrumental value), but as sentient entities that must be regarded as having a higher moral status because of it (that should be valued as ‘moral patients’ for their intrinsic value) (Deckers 2023: chapter 1.4).
Image 8: CC BY-ND 4.0; commander data star trek emoji | AI Emoji Generator; https://emojis.sh/
A different question is whether they might even be subjects with moral agency, or moral agents. Jecker and Nakazawa (2022, p. 770) remark that ‘the Japanese saying, Yaoyorozu-no-Kami (8 million gods), expresses the idea that the Japanese see gods in everything’, and they suggest that Japanese people may be more open than many others to the idea of some AI systems being animated. Imagine an AI system that looks a great deal like a real doctor, for example a social robot with human-like features (for example, like Lieutenant Commander Data, an android or humanoid robot featuring in the Star Trek television series), such as a voice that simulates a human voice and a shape that resembles that of a human being. Some patients may come to regard such an AI machine as a real doctor, but would it be a real doctor (Llorca Albareda et al. 2024)?
一方で、それらが道徳的主体ではなく、むしろ道徳的責任を負う「道徳的エージェント」になり得るかどうかという別の問いも存在します。Jecker and Nakazawa(2022)は、「日本の言葉である『八百万の神(やおよろずのかみ)』は、あらゆるものに神性が宿るという考えを表しており、日本人は多くの他国民よりもAIシステムを生命的存在として捉える可能性がある」と述べています。たとえば、人間に似た特徴(たとえば人間の声に似た音声や、人間の形状を模倣した外観)を持つ社会的ロボットが登場すれば、患者の中にはこのようなAI機械を本物の医師としてみなす人もいるかもしれません。しかし、それは本当に「医師」と言えるのでしょうか?(Llorca Albaredaら、2024)
Reflection:
Reflect for a moment on whether an AI machine might ever be a moral decision-maker. If so, who would be to blame if a bad decision were made: the AI system (now regarded as an AI doctor), the company that designed it, the person who decides to use it, or some combination of these?
Some nonhuman animals have been put on trial in the past.
過去には、一部の動物(人間ではない)が裁判にかけられた事例もあります。
Image 9: in the public domain, from Chambers (1864), depicting a sow and her piglets being tried for the murder of a child. The trial is said to have taken place in 1457: the mother was found guilty and the piglets were acquitted.
Might the time now have come for some AI systems to carry some blame? A lot of philosophical work is currently being done to address whether some AI systems might be moral patients or moral agents. The important message for (trainee) health carers might be this: the burden of proof should be on those who claim that AI systems might carry (part of) the blame. There does not seem to be sufficient evidence to suggest that human beings should be exculpated from any wrong decisions that they may make with the use of AI systems.
AI systems cannot be blamed, as they are not moral agents. Takahiro Nakajima (2024) writes rightly that ‘AI does not understand the “meaning” of the character strings it presents at all’ and that ‘chatting with AI is ultimately a monologue, not a dialogue’. However, one should be mindful that some AI system designers may try to dodge responsibility for their creations by blaming the systems, rather than those who produced them.
This learning resource was designed to stimulate ethical discussion of how health care might be improved and undermined by the development of artificial intelligence (AI) and how AI systems ought to be designed and used to promote health care. Topics included how to ensure that AI for health care is evidence-based, how one might navigate the black box of AI, how one might use AI data responsibly, what the ethical issues are related to the use of carebots, whether the use of AI for health care is sustainable and what should be done to ensure social and ecological sustainability, whether AI changes us, and whether people produce conscious entities when they create AI systems. Health care workers and patients may wonder about many other topics related to AI. However, the main problems and opportunities associated with the use of AI systems in health care may be captured in this resource.
The following list of questions can be used to stimulate personal reflection, classroom discussion or essay writing:
1. What would you do to check whether an AI system that is being used in health care might provide good information?
2. Would you ever use an AI system in health care if you did not understand what the outputs of the system were based on? If so, what would be the conditions for using it?
3. What should health care workers and researchers do to ensure that they use AI data responsibly?
4. Would you ever condone the use of carebots? If so, what would be the conditions?
5. What should designers and users of AI systems in health care, as well as regulators, do to ensure that the use of such systems is sustainable?
6. How might the self-understandings or the nature of health care workers and patients be affected by the use of AI systems in health care?
7. Do you think that people produce conscious entities when they create AI systems, and what might be the moral implications of creating conscious AI systems?
Bolón-Canedo, V., Morán-Fernández, L., Cancela, B. and Alonso-Betanzos, A., 2024. A review of green artificial intelligence: Towards a more sustainable future. Neurocomputing, 599 (September), 1-10.
Chambers, R. 1864. The Book of Days: A Miscellany of Popular Antiquities in Connection with the Calendar, Including Anecdote, Biography, & History, Curiosities of Literature and Oddities of Human Life and Character. W. & R. Chambers.
Deckers, J. 2023. Fundamentals of Critical Thinking in Health Care Ethics and Law. Ghent: Owl Press.
Heilinger, J.C., Kempt, H. and Nagel, S., 2024. Beware of sustainable AI! Uses and abuses of a worthy goal. AI and Ethics, 4(2), 201-212.
Ho, A.S., Davies, L., Nixon, I.J., Palmer, F.L., Wang, L.Y., Patel, S.G., Ganly, I., Wong, R.J., Tuttle, R.M. and Morris, L.G., 2015. Increasing diagnosis of subclinical thyroid cancers leads to spurious improvements in survival rates. Cancer, 121(11), 1793-1799.
Jecker, N.S. and Nakazawa, E., 2022. Bridging east-west differences in ethics guidance for AI and robotics. AI, 3(3), 764-777.
Llorca Albareda, J., García, P. and Lara, F., 2024. The Moral Status of AI Entities. In Lara, F. and Deckers, J. (eds). 2024. Ethics of Artificial Intelligence, Cham: Springer, pp. 59-83.
Lee, E.G., Perini, M.V., Makalic, E., Oniscu, G.C. and Fink, M.A., 2023. External validation of the United Kingdom transplant benefit score in Australia and New Zealand. Progress in Transplantation, 33(1), 25-33.
Loughlin, M., 2008. Reason, reality and objectivity – shared dogmas and distortions in the way both ‘scientistic’ and ‘postmodern’ commentators frame the EBM debate. Journal of Evaluation in Clinical Practice, 14(5), 665-671.
Moyano-Fernández, C. and Rueda, J., 2024. AI, Sustainability, and Environmental Ethics. In Lara, F. and Deckers, J. (eds). 2024. Ethics of Artificial Intelligence, Cham: Springer, pp. 219-236.
Nakajima, T., 2024. Listening to the Daoing in the Morning. Paper presented at the Kyoto Institute of Philosophy, November 2, 2024.
Radha, G. and Lopus, M., 2021. The spontaneous remission of cancer: Current insights and therapeutic significance. Translational Oncology, 14(9), 101166.
Robson, A., 2019. Intelligent machines, care work and the nature of practical reasoning. Nursing Ethics, 26(7-8), 1906-1916.
Strickland, E., 2019. IBM Watson, heal thyself: How IBM overpromised and underdelivered on AI health care. IEEE Spectrum, 56(4), 24-31.
van Deen, W.K., Spiegel, B.M. and Ho, A.S., 2023. A narrative review of decision aids for low-risk thyroid cancer. Ann Thyroid, 8(3), 1-8.
Von Eschenbach, W.J., 2021. Transparency and the black box problem: Why we do not trust AI. Philosophy & Technology, 34(4), 1607-1622.
Wingfield, L.R., Ceresa, C., Thorogood, S., Fleuriot, J. and Knight, S., 2020. Using artificial intelligence for predicting survival of individual grafts in liver transplantation: a systematic review. Liver Transplantation, 26(7), 922-934.
Wu, C.J., Raghavendra, R., Gupta, U., Acun, B., Ardalani, N., Maeng, K., Chang, G., Aga, F., Huang, J., Bai, C. and Gschwind, M., 2022. Sustainable AI: Environmental implications, challenges and opportunities. Proceedings of Machine Learning and Systems, 4, 795-813.
Available from various locations.
General information
As cutting-edge technologies continue to reshape the landscape of health care, we are faced with profound ethical and legal dilemmas on our journey towards a brighter future. This book invites you to develop your critical thinking skills in relation to a number of themes in bioethics and law, including our duties to care for each other, for nonhuman animals, and for the nonhuman world. While the book engages with the law as a source of guidance and food for thought, unlike most publications in health care ethics and law, the emphasis is on the development of critical thinking skills in ethics. Each chapter ends with a list of questions that act as prompts in your own critical thinking journey.
The book is printed on climate-neutral paper: emissions are offset by supporting a clean drinking water scheme in Zoba Maekel, Eritrea, which helps communities to renovate their boreholes so that people have access to clean water.
I provide the table of contents below, as well as a brief summary of each chapter.
Table of contents
Chapter 1: A short introduction to health care ethics and law
Chapter 2: Autonomy and its limits
Chapter 3: Duties of care, confidentiality, candour, and cost minimisation
Chapter 4: The creation and use of human embryos for human reproduction
Chapter 5: When is it acceptable to use non-human animals to promote human health?
Chapter 6: Research ethics
Chapter 7: Ethics in relation to pregnancy termination
Chapter 8: Is genetic engineering justified?
Chapter 9: Human embryo research in embryonic stem cell and cloning debates
Chapter 10: Ethical and legal issues related to the end of life
Concise summary (chapter-by-chapter)
Chapter 1: A short introduction to health care ethics and law
I argue that there is an urgent need to develop critical thinking skills in health care ethics and law, as the health care needs of a large number of organisms are in jeopardy, in spite of the fact that we have the capacities to address many of them. In order to develop such skills, it is good to reflect upon one’s meta-ethical theory to determine what ethics is about. It is also important to reflect on how one’s values shape one’s principles and theories, and on what ethical theory might be best to adopt. While much health care ethics theorising focuses on abstract/formal ethical theories that are applied insufficiently to reality, I argue that it is much more important to reflect upon different axiologies (theories of which concrete things/entities should be valued, and what value each has).
I argue for a theory that includes a deontological (duty-based) and a consequentialist element: the duty to promote positive consequences for one’s own health. This is not accompanied by an individualistic axiology. Rather, this theory is compatible with an axiology that ascribes intrinsic value to all entities. A crucial question here is what the intrinsic values of different things are, and how much value one should give to one entity relative to the value of another entity. Our axiologies are influenced by our reflections on what different entities are, which is the subject of ontology (theory of reality).
I outline two dominant ontologies, mechanistic materialism and dualism. I identify problems with both and sketch an alternative ontology, ‘panexperientialism’, that might both inspire and be inspired by a different outlook on what matters.
The practice of health care ethics is not only shaped by ethics, but also by different health care professions and by the law. This is why health care professionals and patients must take heed of relevant professional guidance and law, while avoiding legalistic approaches to health care.
The chapter concludes by providing some practical tools that can be used in ethical reasoning, including the use of logic, analogies, and thought experiments. These tools are applied to different areas of health care ethics in the ensuing chapters.
Questions raised by this chapter:
1. What are the different meta-ethical theories that have been described in this chapter and why might meta-ethical reflection be important?
2. What is your theory of health care ethics?
3. What does it mean to ascribe intrinsic value, which entities should be valued intrinsically, and how would you weigh up different entities’ values?
4. What ontology do you adopt and how might this inform your ethical theory?
5. What is the relevance of professional guidance and law for health care ethics?
6. Do you agree with the view that logic is important in health care ethics? Justify your answer.
7. Could you provide an example of how an analogy or a thought experiment might be helpful in health care ethics?
8. Why might legalism be a problem?
9. What is your view on the (ir)relevance of slippery slope arguments?
10. ‘Plants are sentient beings. Therefore, plants should be valued intrinsically.’ Do you think that this argument is logically valid?
Chapter 2: Autonomy and its limits
I argue that the concept of autonomy is relevant in health care and that health care professionals should reflect critically on what the law demands from them when human patients are unable to consent due to a lack of autonomy. I also argue that the need to balance the values of autonomy and beneficence can present great difficulties when health care professionals consider the health care interests of children, including their interests in safeguarding. The chapter ends with a discussion of the value of liberty and how it may need to be limited for health reasons in some situations.
Questions raised by this chapter:
1. What should health care professionals do in order to make sure that patients consent?
2. What should health care professionals do in situations where patients lack capacity?
3. Why might it be appropriate for health care professionals to consider advance refusals from patients who lack capacity?
4. In what circumstances would you condone restricting someone’s liberty for health reasons?
5. Do you agree with the view that there are some aspects of care that patients should not be allowed to refuse?
6. How should health care professionals decide whether or not to provide health care treatment to a child?
7. What counts as child abuse?
8. What should health care professionals do when they think that continued treatment of an infant is not in the infant’s best interests and when the parents insist on its continuation?
9. Do you agree with the view that a competent child’s views on medical treatment should be allowed to be overridden?
10. How should a health care professional handle a situation where they discover that a child has been subjected to female genital mutilation?
Chapter 3: Duties of care, confidentiality, candour, and cost minimisation
I discuss the duties of care, confidentiality, candour, and cost minimisation. As health care professionals can fail in these duties intentionally or through being reckless, careful attention must be paid to how these duties can be fulfilled and to how some of these might need to be balanced with other moral considerations.
Questions raised by this chapter:
1. How can health care professionals ensure that they act in accordance with their duties of care?
2. What should be demonstrated to determine whether a health care professional has breached their duty of care?
3. What should health care professionals do to safeguard patients’ right to confidentiality?
4. In what situations might it be appropriate for health care professionals to divulge confidential patient information to third parties?
5. What should a health care professional do if the police ask for information about a patient to investigate a potential offence that took place on a road?
6. How can health care professionals ensure that they act in accordance with their duty of candour?
7. When might it be appropriate to mislead patients?
8. What might be the benefits and disadvantages of using the notion of QALY in decisions about how to allocate funding for different treatments?
9. How would you decide between offering a lung transplant to a 75-year-old person who recently stopped smoking and a 25-year-old person who has never smoked when both are clinically equally suitable for transplantation?
10. Which criteria would you use to discriminate between patients who may need intensive care due to infection with a coronavirus when not all patients can receive treatment on the intensive care unit?
Chapter 4: The creation and use of human embryos for human reproduction
I provide an overview of the views adopted in the Warnock Report and in UK law on the use of embryos for reproductive purposes. I show that the arguments underpinning this framework do not provide a firm foundation for legislation. I recognise that, while it is one thing to undermine a range of arguments that have been used to deny high moral status to the young embryo, it is another matter to make a convincing case for why the young embryo should be granted such status. It is important to recognise that people who debate human embryo research often portray the young embryo as if he or she were an abstract, alien entity, the product of those who experiment with substances in test tubes in laboratories. The moral position that young embryos lack high status might be favoured by this mode of representation. At the same time, however, some modern technologies, for example ultrasound sonography, allow us to represent embryos and foetuses in more concrete ways than has been possible until recently. This might perhaps make it more likely for some to be able to empathise with them, and prompt them to assign a higher status to them than they might have done otherwise. My view is that we should grant equal moral significance to all human beings. I am uncomfortable with the idea that we should value some human beings more than others. I also argue that health care professionals and patients should consider a number of other issues related to fertility treatments, including the use of PGD, sex selection, the creation of ‘saviour siblings’, mitochondrial donation, and issues related to who should be able to access (information about) such treatments.
Questions raised by this chapter:
1. What are the main issues associated with the creation and use of human embryos for human reproduction?
2. What is the UK legal framework on embryo research, what are its ethical underpinnings, and how has it influenced other jurisdictions?
3. What is the position on embryo research developed by the Committee of Inquiry into Human Fertilisation and Embryology?
4. How has the Committee of Inquiry into Human Fertilisation and Embryology influenced different laws on embryo research?
5. What is the argument from sentience? Is it valid?
6. What is the argument from individuality? Is it valid?
7. What is the argument from twinning? Is it valid?
8. What are the key issues associated with pre-implantation genetic diagnosis?
9. When, if ever, should pre-implantation genetic diagnosis be acceptable to diagnose disability?
10. When, if ever, should pre-implantation genetic diagnosis be acceptable to diagnose the sex of an embryo?
11. When, if ever, should pre-implantation genetic diagnosis be acceptable to diagnose whether an embryo is a suitable tissue match?
12. When, if ever, should mitochondrial donation be allowed?
13. What should be the conditions for someone to be allowed to receive fertility treatment?
14. What should be the conditions for someone to be allowed to donate gametes?
15. When, if ever, should those who are conceived with donated gametes have access to genetic information about their donors, and what information should they be allowed to access?
Chapter 5: When is it acceptable to use non-human animals to promote human health?
In this chapter I grapple with the question of when it might be acceptable to use non-human animals to promote human health. I start with the observation that people use non-human animals in various ways to promote human health, and explore two common ways in which they are used: their use in research and their use for human nutrition.
With regard to research usage, I sketch some laws that regulate the use of non-human animals, highlighting in particular that widespread support for the principle of necessity and the 3Rs calls into question many projects that use non-human animals, given that such animals are poor models for human beings. In addition, I engage with the question whether non-human animals should be used to model human health and illness even if they might be good models, where I argue that an account of the moral standing of different non-human animals must be based on evolutionism. In this light, it would be particularly problematic to use non-human animals for research that does not benefit them where the animals are closely related to us.
With regard to the human use of non-human animals for food, I argue that, if the underlying reasoning is applied consistently across different domains, EU legislation on the use of non-human animals for research would lend significant support to changing laws on the use of non-human animals for food, resulting in a drastic curtailment of the human consumption of animal products. I sketch the moral arguments underpinning qualified moral veganism, which is defended against some challenges. The chapter also considers the ethical issues related to radically novel ways in which animal products could be produced, including the development of lab-grown meat.
Questions raised by this chapter:
1. What might be the reasons behind the fact that most books on health care ethics and law do not consider the use of non-human animals to promote human health? Why do you (dis)agree with them?
2. What moral theory do you advocate in relation to the human use of non-human animals?
3. What are the key issues to consider when human beings use non-human animals for research?
4. What is the relevant law on the use of non-human animals for research, and what legal change do you advocate, if any?
5. What do the positions of Singer, Regan, and Midgley entail for the use of non-human animals for research?
6. What would the EU law on the use of non-human animals for research imply for the human use of non-human animals for food, if the law in relation to the latter was made consistent with the law in relation to the former?
7. What moral reasons might someone adopt in support of carnism and in support of qualified veganism?
8. What arguments could be used to support or undermine the use of non-human animals for human nutrition?
9. How would you evaluate the morality of technologies that aim to produce lab-grown meat?
10. What useful functions, if any, might be fulfilled by committees that evaluate particular projects to use non-human animals? Justify your answer.
Chapter 6: Research ethics
In this chapter I engage with generic issues that apply to research projects, as well as with more specific issues that pertain to research that is carried out in clinical health care contexts. I identify the benefits and disadvantages of different types of clinical studies and discuss whether clinical trials should only take place when there is clinical equipoise. A failure to conduct RCTs in particular may be unethical and may result in a stagnation of ideas, a misplaced trust in unsystematised clinical experience, little development in available treatments, and a waste of resources. I also discuss the relevance of complementary therapies, question the use of alternative treatments, set out why research ethics committees play a valuable role in health care research, and explain how those who sit on such committees might go about evaluating research projects.
Questions raised by this chapter:
1. Why is consent important in relation to research?
2. Do you think research should ever be allowed without consent from participants? Justify your answer.
3. What do you think about the view that any research should be allowed, as long as participants consent?
4. What are the key ethical features of the relevant laws in relation to health care research?
5. Why should many RCTs never take place? Justify your answer.
6. What safeguards should there be to make sure that RCTs do not expose participants to disproportionate risks?
7. Should people ever be incentivised to participate in research studies?
8. What do you explain to potential participants when you want to recruit them to your study?
9. Should children be allowed to participate in research? Justify your answer.
10. What is your view about the opinion that health care trials should only be allowed if there is clinical equipoise?
Chapter 7: Ethics in relation to pregnancy termination
I propose how abortion legislation in the United Kingdom should be modified if it was informed by the view that all unborn human beings should be granted a right to life that should be allowed to be trumped in a limited number of situations. I argue that the current distinctions in the legal provisions for ‘able’ and ‘disabled’ foetuses as well as for ‘implanted’ and ‘unimplanted’ embryos cannot be maintained, and that greater protection of all human life must be enshrined into law. I also argue that there should only be a limited right to conscientious objection to participate in the provision of abortion services. There should be no right to object conscientiously to providing abortion services when there is a great risk that a pregnant woman’s life might be lost should the pregnancy be continued, and no right to refuse pregnancy counselling and referral of those who satisfy any of the revised legal grounds.
I recognise that whether abortion law is altered in line with this proposal both depends, and should depend, on whether a valid democratic process is instigated towards legal reform. It is my hope that, if abortion legislation were amended in accordance with this proposal, health care professionals would provide those services that women should be entitled to, give serious consideration to facilitating or providing abortions that should be allowed, and reject those that should be prohibited.
Questions raised by this chapter:
1. What are the salient points of the law on abortion in the different jurisdictions of the United Kingdom?
2. What should a health care professional consider when a patient requests an abortion?
3. Why might abortion pose a moral problem for health care professionals?
4. What do you think should be the legal boundaries regarding the right to conscientious objection related to abortion?
5. Should abortion be allowed without any restrictions? Justify your answer.
6. What shape should the law on abortion have? Justify your answer.
7. What is your position on the legality of using medicines that might be abortifacient?
8. If one adopts human egalitarianism, would it imply that abortion should never be allowed? Justify your answer.
9. Do you think men should have any say in relation to whether or not an abortion should be allowed? Justify your answer.
10. Should everyone who wants it have free access to IVF treatments? Justify your answer.
Chapter 8: Is genetic engineering justified?
In this chapter I discuss ethical issues related to genetic engineering. While there is no doubt that genetics has advanced our understanding about health and illness a great deal, technologies that use the science of genetics can both promote as well as undermine health. Physical health can be improved and undermined, both directly and indirectly, through genetic engineering. The same applies to mental health. With regard to the mental health impacts of genetic engineering, a significant concern that has received relatively little attention in the literature is the concern that we ought to avoid creating unnatural things, and that genetic engineering is unnatural.
Although nothing is unnatural in the sense that everything is part of nature, I argue that the widely used distinction between the natural and the unnatural is nevertheless not meaningless. A semantic distinction between the natural and the unnatural can be drawn whereby the latter pertains to that which is affected by human culture and the former to everything else. More importantly, I argue that the fact that human culture pervades many natural events does not eliminate the distinction, but that it is appropriate to situate the natural and the unnatural at opposite ends of a spectrum. Where an entity is situated along this spectrum depends on the likelihood with which its specific essence might have come about counterfactually, which in this case means naturally. I distinguish between three gradations of unnaturalness, in spite of this continuity.
This distinction between the natural and the unnatural has moral relevance. While we must adopt a prima facie duty to safeguard the integrity of nature, the integrity of nature should not be protected at all costs: doing so would stifle all human activity. In order to flourish, Homo faber must alter nature. However, an action that alters a natural entity’s teleology more significantly is, ceteris paribus, more problematic than one that alters it less.
This discussion is highly relevant to evaluate genetic engineering. As genetic engineering projects normally involve type 1 instances of the unnatural, they are morally suspect. In spite of this, the example of Huntington’s disease shows that this does not imply that genetic engineering is necessarily wrong. However, if a type 2 or type 3 intervention existed that could enhance the quality of life of the person in question equally effectively, we ought to prefer it.
Questions raised by this chapter:
1. Why might the question of what is natural be relevant for a discussion of genetic engineering?
2. Do you agree with the view that there are gradations of artificiality? Justify your answer.
3. How might differences in degrees of naturalness be morally relevant?
4. How might genetic engineering be used to benefit human health?
5. How might genetic engineering undermine human health?
6. Do you approve of the creation of Herman the bull?
7. Would you approve of using genetic engineering on a human embryo to correct the gene that predisposes for Huntington’s disease, if such were possible?
8. What do you think of the view that there is nothing new in genetic engineering as nature has engineered itself for a very long time?
9. What do you think of genetic engineering projects that aim at making some non-human animals better models to study human disease?
10. Would you eat genetically engineered plants or animals? Justify your answer.
Chapter 9: Human embryo research in embryonic stem cell and cloning debates
In this chapter, I provide an overview of the views that have been expressed by advisory bodies and members of the Westminster Parliament in support of legal developments to allow research on young human embryos in the United Kingdom. While UK law has inspired similar legal reform in many other countries, this chapter shows that the arguments underpinning this framework do not provide a sound basis for the current legal position. My view on the status of the young human embryo is at odds with the views underpinning this framework. Rather than denying the embryo high moral status, I adopt the view that we should consider all human beings to be equal, instead of making the value assigned to a human being dependent on how many properties, capacities, or experiences that human being might possess.
Questions raised by this chapter:
1. How would you sum up the moral reasoning underpinning the Human Fertilisation and Embryology (Research Purposes) Regulations 2001, and what do you make of the arguments that were developed to support these?
2. What are the two arguments from potentiality in relation to the status of the young human embryo and do you think that these arguments are sound?
3. What is the argument from capacities in relation to the status of the young human embryo and why do you (dis)agree with this argument?
4. What is the argument from probability in relation to the status of the young human embryo and why do you (dis)agree with this argument?
5. What is the argument from mourning in relation to the status of the young human embryo and why do you (dis)agree with this argument?
6. What is the argument from ensoulment in relation to the status of the young human embryo and why do you (dis)agree with this argument?
7. What policy would you like to adopt in relation to human embryo research? Justify your answer.
8. Would you favour altering the law on human embryo research so that human embryos can be used for research when they are older than 14 days? Justify your answer.
9. What is the relevance of the scientific advances that have been developed on the basis of human embryo research for the ethics of embryo research?
10. Do you agree with laws that allow the creation of human admixed or hybrid embryos? Justify your answer.
Chapter 10: Ethical and legal issues related to the end of life
In this chapter I consider when treatment might be futile, whether it may ever be appropriate to withhold or to withdraw treatment from a patient, whether pain relief that might hasten one’s death should be taken or provided, whether physician-assisted suicide and euthanasia should be legal options, and how health care professionals might cater for the spiritual needs of patients. These issues are difficult and emotionally challenging. In a culture where ageism is challenged and where speaking about death and the dying process might be more widely accepted, there is a good chance that people may feel better able to cope with the prospect of dying and with making decisions that promote well-being when it is hard to do so.
Questions raised by this chapter:
1. How might health care professionals go about determining whether or not a treatment is futile?
2. How might a health care professional justify withdrawing treatment from a patient?
3. Do you agree with the withdrawal of treatments for patients who are in a persistent vegetative state? How might you try to justify your answer?
4. Do you think that there are aspects of care that should never be withheld or withdrawn from a patient, and if so, which aspects? How would you justify this?
5. What do you think of the view that English law on assisting suicide discriminates against disabled people?
6. Do you think assisting suicide should be allowed? Justify your answer.
7. Do you think euthanasia should be allowed? Justify your answer.
8. If assisting suicide were allowed, what do you think should be the conditions?
9. If euthanasia were allowed, what do you think should be the conditions?
10. Do you think there may be situations where those who aid in the suicide of a patient should (not) be prosecuted?
11. How might the doctrine of double effect be applied to the provision of pain relief to a dying patient?
12. Do you agree with the view that withdrawing artificial hydration and nutrition from a terminally ill patient should always be accompanied by terminal sedation?
13. What should health care professionals do when the parents of competent children demand life-saving treatment that the child refuses?
14. What do you think of the view that good palliative care is always preferable to treating the patient in order to end their life?
15. How might health care professionals optimally look after the spiritual needs of patients who adopt Christianity, Islam, Hinduism, Sikhism, Judaism, or Buddhism?
What do others think?
Monica Consolandi, Fondazione Bruno Kessler: See Monica’s review in the journal Theoretical Medicine and Bioethics here.
Julia Hynes, Kent and Medway Medical School: “In my view this is a book which will prove to be of benefit to healthcare students nationally, and even internationally, as it teaches the student how to think in a critical fashion. It may be used as a core text or one of two core texts. With prescribed reading set and with tutorial facilitation, it will enable the student to create and analyse philosophical arguments through the question prompts at the end of each chapter.”
Ghaiath Hussein, Trinity College Dublin: “What a fantastic achievement, … this book will undoubtedly be a valuable resource for us and our students.”
Ghosting the shell: exorcising the AI persona from voice user interfaces
12.00-12.30 Emily Postan: Good categories & uncanny kinds: ethical implications of health categories generated by machine learning
12.30-13.30 Lunch break
13.30-14.00 Emma Gordon: Moral expertise and Socratic AI
14.00-14.30 Shauna Concannon: Living well with machines: critical perspectives on communicative AI
14.30-15.00 Angus Robson: Moral communities in healthcare and the disruptive potential of advanced automation
15.00-15.30 Andrew McStay: Automating Empathy and the Public Good: The Unique Case of Health
15.30-16.00 Koji Tachibana: Virtue and humanity in an age of symbiosis with AI
16.00-16.30 Jamie Webb: Trust, machine learning, and healthcare resource allocation
16.30-17.00 Jan Deckers: Closure
18.00 onwards Social dinner for presenters
Abstracts
Stephen McGough and Jan Deckers: Introduction
As a scholar researching the areas of machine learning, big data, and energy efficient computing, Stephen will provide a sketch of what AI actually is. Jan will introduce the day, looking back at the presentations at the first event at Chiba University, and looking ahead at the presentations that will follow today.
Adam Brandt & Spencer Hazel: Ghosting the shell: exorcising the AI persona from voice user interfaces
Voice-based Conversational User Interfaces (CUIs or VUIs) are becoming increasingly ubiquitous in our everyday interactions with service providers and other organisations. To provide a more naturalistic user experience, Conversation Designers often seek to develop in their conversational agents what has been glossed as humanlikeness, namely design features that serve to anthropomorphise the machine. Guidelines suggest, for example, that designers invest time in developing a persona for the conversational agent, selecting an appropriate persona for the target user, and adapting conversational style to the user (for discussion, see Murad et al., 2019). Such anthropomorphic work may somewhat mitigate the challenge of requiring users to behave as if they are speaking with a human rather than at a machine. However, attempting to trigger a suspension of disbelief on the part of the user, allowing them to entertain the idea that what they are speaking at has some kind of sentience, raises a number of ethical concerns, especially where users may end up divulging more than they would like.
This presentation reports on work carried out by the authors in partnership with the healthcare start-up Ufonia. The company has developed a voice assistant, Dora, for carrying out routine clinical conversations across a number of hospital trusts (see Brandt et al. 2023). Using insights and methods from Conversation Analysis, the collaboration has worked to ensure that these conversation-framed user activities achieve the institutional aims of the phone call, while at the same time providing an unchallenging experience for the user. Conscious of the ethical dimension to which this gives rise, we discuss here how we help to pare down the importance of the agent persona (managing the risk of users treating the agent as a person), while curating a more natural user experience by modelling the speech synthesis on the normative patterns of everyday (institutional) talk.
References:
Brandt, A., Hazel, S., Mckinnon, R., Sideridou, K., Tindale, J. & Ventoura, N. (2023) ‘From Writing Dialogue to Designing Conversation: Considering the potential of Conversation Analysis for Voice User Interfaces’ [Online].
Murad, C., Munteanu, C., Cowan, B.R. & Clark, L. (2019) Revolution or Evolution? Speech Interaction and HCI Design Guidelines. IEEE Pervasive Computing. 18 (2), 33–45.
Emily Postan, Good categories & uncanny kinds: ethical implications of health categories generated by machine learning
One area of particular interest in health applications of machine learning (ML) is its capacity to assist not only in the detection of disease or disease risk factors, but also in the generation of new or refined diagnostic, prognostic, risk, and treatment response categories. Deep learning (DL), a powerful subset of ML, potentially offers marked opportunities in this area. This paper interrogates the ways in which such uses of DL may result not only in the classification of images, risks, or diagnoses, but also in reconfigured and novel ways of categorising people. It asks why categorisation by AI – and deep learning algorithms in particular – might have ethically significant consequences for the people thus categorised. The paper approaches these questions through the lens of philosophical treatments of ‘human kinds’. It asks to what extent AI-generated categorisations function, or could come to function, as human kinds. More specifically, it seeks to characterise how human kinds predicated on machine learning algorithms might differ, and differ in ethically significant ways, from existing human kinds that come about through social and historical processes and practices. It explores the potential impacts of AI-generated kinds on members’ experiences of inhabiting and exercising agency over their own categorisations. As such this paper pursues a line of ethical inquiry that is distinct from, though complementary to, more familiar concerns about the risks of error and discrimination arising from AI-enabled decision-making in healthcare. The paper concludes that while the impacts of machine learning-generated person-categorisations may not be unequivocally negative, their potential to alter our identity practices and group memberships needs to be weighed against the assumed health dividends and accounted for in the development and regulation of trustworthy AI applications in healthcare.
Emma Gordon: Moral Expertise and Socratic AI
A central research question in social epistemology concerns the nature of expertise and the related question of how expertise in various domains (epistemic, moral, etc.) is to be identified (e.g., Goldman 2001; Quast 2018; Stichter 2015; Goldberg 2009). Entirely apart from this debate, recent research in bioethics considers whether and to what extent cognitive scaffolding via the use of artificial intelligence might be a viable non-pharmaceutical form of moral enhancement (e.g., Lara and Deckers 2020; Lara 2021; Gordon 2022; Rodríguez-López and Rueda 2023). A particularly promising version of this strategy takes the form of ‘Socratic AI’ — viz., an ‘AI assistant’ that engages in Socratic dialogue with users to assist in ethical reasoning non-prescriptively. My aim will be to connect these disparate strands of work in order to investigate whether Socratic-AI assisted moral enhancement is compatible with manifesting genuine moral expertise, and how the capacity of Socratic AI to improve moral reasoning might influence our criteria for identifying moral experts.
Shauna Concannon: Living well with machines: critical perspectives on communicative AI
Artificial Intelligence (AI) is having an ever-greater impact on how we communicate and interact. Over the last few years, smart speakers, virtual personal assistants and other forms of ‘communicative AI’ have become increasingly popular, and chatbots designed to support wellbeing and perform therapeutic functions are already available and widely used. In the context of health and social care, attention has begun to focus on whether an AI system can perform caring duties or offer companionship. As machines are positioned in increasingly relational roles, what are the societal and ethical implications, and should these interactions be modelled on human-human interaction?
In this talk, I will review recent developments in communicative AI, ranging from empathetic chatbots to storytelling applications tailored for children. Through this exploration, I will examine key risks and the potential for harm that these innovations may entail, and consider the implications that arise due to the ontological differences between human-machine and human-human communication. Finally, I will consider what is required to guide more intentional design and evaluation of these systems, with a more central focus on interaction, moral care and social conduct.
Angus Robson: Moral communities in healthcare and the disruptive potential of advanced automation
The importance of healthcare organizations as moral communities is increasingly supported in recent research. Advanced automation has the potential to impact such communities both positively and negatively. This study looks at one specific aspect of such potential impacts, here termed second-person displacement. Drawing on Stephen Darwall’s idea of the second-person standpoint, interpersonal events are proposed as a basic condition of the moral community, one that is threatened by automation under certain conditions of second-person displacement. This threat occurs in two particular respects, pervasiveness and humanness. If these two kinds of threat are understood and acknowledged, they can be mitigated by good design and planning. Some principles are suggested to assist strategies for responsible management of increasingly advanced automation, including protection of critical contact, building the resilience of the moral community, and resisting deception.
Andrew McStay: Automating Empathy and the Public Good: The Unique Case of Health
Once off-limits, the boundaries of personal space and the body are being probed by emotional AI and technologies that simulate properties of empathy. This is occurring in worn, domestic, quasi-private and public capacities. The significance is environmental, in that overt and ambient awareness of intimate dimensions of human life raises questions about human-system interaction, privacy, security, civic life, influence, regulation, and the moral limits of datafying those dimensions. This talk will discuss the historical context of these technologies, current technological development, existing and emergent use cases, and whether remote profiling of emotion and datafied subjectivity is acceptable and, if so, on what terms. Problematic in method and application, especially in commercial uses, automated empathy raises unique questions in healthcare. Tabling some of the legal and moral concerns, the talk will also flag citizen opinion obtained by the Emotional AI Lab (which McStay directs) on the use of automated empathy and emotional AI in healthcare. This, it will be argued, is instructive for debate on the moral limits of automating empathy.
Koji Tachibana: Virtue and humanity in an age of symbiosis with AI
The social implementation of AI will accelerate in the near future. In such a society, we humans will communicate, work and live together with AI. What is human virtue or humanity in such a symbiotic way of living with AI? Examining several discussions of possible social implementations, I consider this question and argue that symbiotic life with AI is an excellent opportunity to understand humanity because we can perceive an absolute difference between humans and AI.
Jamie Webb: Trust, machine learning, and healthcare resource allocation
One use of machine learning in healthcare is the generation of clinical predictions which may be used in healthcare resource allocation decisions. For example, a machine learning algorithm used to predict post-transplant benefit could be involved in decisions regarding who is prioritised for transplant. When conducting interviews with patients with lived experience of high stakes algorithmic resource allocation, patients have occasionally expressed to me that they trusted how prioritisation was determined because they trusted the clinical staff involved in their care. This trust may be grounded in experience of compassionate and competent care. However, these demonstrations of trustworthiness may have nothing to do with the way algorithmic resource allocation decisions are made. Is this a problem? This presentation will explore this question with the use of philosophical theories of trust and trustworthiness, considering the particular challenges machine learning might bring to patient trust and the clinical staff-patient relationship.
A conference on 26 May 2023 at Nishi-Chiba Campus, Chiba University, Japan
Program: 13:00-13:10 Opening Remarks Koji Tachibana, PhD (Faculty of Humanities, Chiba University, Japan)
13:10-13:50 “Ethical challenges and opportunities related to the use of AI in health care” Jan Deckers, PhD (Faculty of Medical Sciences, Newcastle University, UK)
Abstract: Health care decision-making is flawed as health care professionals and patients do not always know all the facts that are clinically relevant, may not be able to interpret facts, and may not be able to evaluate their moral significance. Whilst AI systems may support health care decision-making by gathering more relevant facts and by assisting with the interpretation and evaluation of data, health (care) may also be undermined by AI. This presentation sketches some significant hurdles that must be overcome to ensure that AI systems promote rather than undermine health (care). These hurdles include rational and emotional ontological confusion about the nature of AI, technological deficiencies, and problems related to how AI systems are being used.
13:50-14:30 “Ethical issues regarding the application of AI to healthcare settings” Eisuke Nakazawa, PhD (Faculty of Medicine, the University of Tokyo, Japan)
Abstract: Implementation of artificial intelligence in psychiatric care will bring innovations that contribute to patient well-being by reducing the burden on physicians and other healthcare professionals and by improving the accuracy of diagnoses and risk predictions. On the other hand, from an ethical standpoint, AI development research needs to include efforts to review opt-out consent from the perspective of the right to control one’s own information, with dynamic consent in scope, so as to ensure the autonomy of research participants. Medical-technical communication, in which consensus is formed in advance between research developers, health care providers, and the public, including patients, is necessary, and this is especially required for the issue of secondary and incidental findings. Issues such as the burden on research participants due to false positives, respect for the right of research participants not to be informed of their results, and the social risk of false positives converge on the question of how to communicate secondary and incidental findings to research participants. It is not unreasonable to be cautious about returning secondary and incidental findings, especially when adequate communication cannot be ensured.
14:30-15:10 “Exploring the ethics of smart glasses: Navigating the future of wearable tech” Semen Trygubenko, DPhil (Dodrotu Limited, UK)
Abstract: The purpose of this study is to provide an overview of the ethical issues related to the use of smart glasses in order to facilitate decision-making and the formation of knowledge and norms. We identify a wide range of ethical issues, including privacy, safety, justice, change in human agency, accountability, responsibility, social interaction, power, and ideology. The use of smart glasses is expected to impact individual human identity and behavior as well as social interaction, which must be taken into account when developing, deliberating, deciding on, implementing, and using smart glasses. We consider the issues that apply generally as well as those that arise in the context of the remote-calling functionality available in the Ziru AV smart glasses prototype.
15:10-15:30 Tea/Coffee Break
15:30-16:10 “Can ChatGPT serve as a clinical ethics consultant?” Yasuhiro Kadooka, MD, PhD (Faculty of Life-Sciences, Kumamoto University, Japan)
Abstract: Generally, healthcare professionals should make a well-balanced value judgment by consulting with colleagues or specialists when confronted by ethically uncertain situations. Currently, some professionals may instead simply turn to a conversational large language model. This descriptive research aimed to explore the performance of ChatGPT in clinical ethics consultation, an advisory service that supports healthcare professionals and patients in identifying, analyzing and resolving the ethical dilemmas and issues of daily care. Human clinical ethics consultants participated and asked ChatGPT for advice on an ethically appropriate action in a hypothetical vignette. All conversations between the consultants and ChatGPT were recorded and analyzed qualitatively. Tentatively, this study emphasizes that a conversational large language model can set out general principles and norms of clinical ethics, but may fail to make a holistic assessment of individual patients. The analysis is still ongoing. Detailed findings will be presented at the session.
16:10-16:50 “Upgrading feelings” Jasmin Della Guardia, MS (Graduate School of Humanities, Chiba University, Japan)
Abstract: Fiction tells us stories about how AI can improve humans by taking them to the next level, e.g. with human-brain interfaces as in Neon Genesis Evangelion, Iron Man, or the Borg in Star Trek. Such fictions portray an exaggerated duality of AI, making us either superhuman or evil juggernauts. However, fiction meets reality, because every day AI merges more and more with our lives, as tools (AI filters and art) or as (autonomous) operators (in cars, space travel, and robots; e.g. the space robot “CIMON”, which is supposed to cheer up astronauts). The fears and dangers are also real and force a debate, because this technology is changing the way we think about ourselves and also contains human errors. But we are already cyborgs and AI is human too, so we need to discuss the social, psychological, and ethical consequences. To avoid dystopian developments, we need to discuss how enhancing physical ability, attractiveness, creativity, and psychological well-being with AI can make us better people. As an example, we want to examine the influence of AI filters and of interactions with AI as a social other on psychological well-being, the philosophical image of man, and self-perception.
16:50-17:30 “What can humans learn from AI about creativity as an intellectual virtue?” Ryo Uehara, PhD (Faculty of Informatics, Kansai University, Japan)
Abstract: Creativity has long been an object of consideration in philosophy, especially intellectual creativity as one of the intellectual virtues in virtue epistemology. On the other hand, recent artificial intelligence has shown remarkable creative abilities. Artificial intelligence, being an artifact, cannot be considered to have virtue. Nevertheless, it is expected that humans can learn something about the exercise of creativity as an intellectual virtue from artificial intelligence. This presentation will organize the debate on the creativity of artificial intelligence and clarify the differences from the creativity that can be demonstrated by humans. It will then examine how artificial intelligence can be used to help humans cultivate creativity as an intellectual virtue.
17:30-17:40 Closing Remarks & Announcements
18:20- Social Dinner (T.B.A)
Sponsors: The Great Britain Sasakawa Foundation & JSPS KAKENHI (20H01178)
In the 2015 Paris Agreement, 196 countries pledged to limit global warming to below 2 degrees Celsius, and preferably to 1.5 degrees Celsius, compared to pre-industrial levels. To achieve the latter goal, countries’ total emissions would need to fall by some 45% by 2030 compared with 2010 levels. The COP 26 meeting in Glasgow provided an opportunity to develop a strategy to achieve this, but it failed to do so. Consequently, we are not on track to meet the Paris Agreement goals.
The food system, holistically considered, accounts for around a quarter of all anthropogenic emissions and can contribute greatly to mitigation efforts, given the substantial potential of better land management to store carbon. Recognising the role of agriculture in tackling climate change, the COP 23 meeting, held in Fiji in 2017, decided on a Koronivia Joint Work on Agriculture. Despite this initiative, relatively few discussions in Glasgow addressed the role of the food system in relation to climate change, and even fewer considered the important role played by the consumption of animal products. In my previous post I pointed out that many non-vegan diets compare poorly with vegan diets when we consider the climate change impacts of human dietary choices, as they contribute disproportionately to the release of carbon dioxide, nitrous oxide, and methane, and squander opportunities for carbon sequestration. In this post I report on some discussions that took place at COP 26 in relation to the consumption of animal products.
In early November 2021, thousands of people came together in Glasgow, at the 2021 United Nations Climate Change Conference, more commonly known as COP 26, to develop work on the 2015 Paris Agreement. The most ambitious goal of this agreement is to avoid driving up temperatures by more than 1.5° C relative to pre-industrial levels. This means that average emissions, measured in carbon dioxide equivalents per person annually, should be no more than about 2 tonnes. As average emissions are currently more than twice that, we are a long way from that goal.
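The arithmetic behind this claim can be made explicit with round figures; the numbers below are my own illustrative assumptions, not figures reported at COP 26. Taking total anthropogenic emissions of roughly 50 Gt CO2e per year and a world population of roughly 7.9 billion in 2021:

\[
\frac{\approx 50 \times 10^{9}\ \text{t CO}_2\text{e/yr}}{\approx 7.9 \times 10^{9}\ \text{people}} \;\approx\; 6.3\ \text{t CO}_2\text{e per person per year},
\]

which is indeed more than twice the roughly 2 tonnes per person per year that is compatible with the 1.5° C goal.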