Learning resource on ethics, health care, and artificial intelligence

倫理、ヘルスケア、および人工知能に関する学習教材

© 2024, Jan Deckers and Eisuke Nakazawa

Preliminary note / はじめに

This learning resource was developed by Jan Deckers with advice from Eisuke Nakazawa, who wrote the Japanese translation. The Great Britain Sasakawa Foundation provided the funding. This learning material was developed primarily to help students, particularly those in various health care disciplines, to think through some of the moral dilemmas associated with the use of artificial intelligence (AI) in health care. It may also appeal to many other students and scholars. You can work through this resource at your own pace; it may take around three hours. The text is interspersed with moments of reflection where you might like to pause and think for a while before moving on.

この学習リソースは、日本語訳を書いたEisuke Nakazawaのアドバイスを受けてJan Deckersによって開発されました。グレイトブリテン・ササカワ財団が資金を提供しました。この学習教材は、特にヘルスケア分野の学生が、人工知能(AI)の使用に関連する倫理的なジレンマを考察できるよう支援するために開発されました。また、多くの他分野の学生や研究者にとっても興味深い内容となる可能性があります。この教材は、自分のペースで学ぶことができますが、すべての内容を通読するには約3時間かかると見積もられます。教材中には、考える時間を設けるための「考察の時間」が挟まれています。進む前にひと息ついて考えてみてください。

1. Objectives / 目的

This learning resource has been developed to stimulate ethical discussion of how health care might be improved and undermined by the development of artificial intelligence (AI) and how AI systems ought to be designed and used to promote health care. While it is recognised that all biological organisms have interests in health, the focus here is primarily on human health care.

この学習教材は、AIの開発がどのようにヘルスケアを改善し、あるいは損なう可能性があるか、またAIシステムをどのように設計・使用するべきかについて倫理的議論を喚起することを目的としています。本教材は、およそ生物であれば自分の健康に関心を持っていることを認識しつつも、主に人間のヘルスケアに焦点を当てています。

The following questions will be discussed:

  • How to ensure that AI for health care is evidence-based?
  • How to navigate the black box of AI?
  • How to use AI data responsibly?
  • What are the ethical issues related to the use of carebots?
  • Is the use of AI for health care sustainable?
  • Does AI change human self-understandings or identities, and does it do so for better and/or for worse?
  • Do people produce conscious entities when they create AI systems?

この教材では、以下の問を考えていきます。

  • ヘルスケア向けAIがエビデンスに基づいていることをどのように保証するか?
  • AIのブラックボックス問題にどう対処するか?
  • AIデータをどのように責任を持って使用するか?
  • ケアボット(介護ロボット)の使用に関する倫理的問題は何か?
  • ヘルスケア向けAIの使用は持続可能か?
  • AIは人間の自己認識やアイデンティティをどのように変えるのか、そしてそれは良い変化なのか悪い変化なのか?
  • AIシステムを作成する際、人々は意識を持つ存在を生み出しているのか?

2. How to ensure that AI for health care is evidence-based? / ヘルスケア向けAIがエビデンスに基づいていることをどのように保証するか?

Those who design AI systems for health care should use evidence to develop such systems, but they may not be willing to do so. While disease prevention and health promotion should, arguably, be guiding the development of AI in health care, AI designers are not always motivated primarily by these goals due to conflicts of interest.

ヘルスケア向けAIシステムを設計する者は、そうしたシステムを開発する際にエビデンスを使用するべきです。しかし、彼らが必ずしもそのようにするとは限りません。本来であれば、疾病予防や健康促進がヘルスケアAIの開発を導くべきですが、利益相反が存在するため、AI設計者たちがこれらの目標だけを動機にしているとは限りません。

Reflection:

Reflect for a moment on what sorts of conflicts of interest might drive the development of AI systems for health care.

考察の時間

ヘルスケア向けAIシステムの開発を動機づける可能性のある利益相反には、どのようなものがあると思いますか?

Image 1: CC BY-SA 3.0; Conflict Of Interest by Nick Youngson; Alpha Stock Images; https://www.picpedia.org/keyboard/c/conflict-of-interest.html

Companies that design AI systems for health care know that they should take some interest in health, as their systems would be highly unlikely to be used if they undermined health. However, systems might be designed to promote the health of some at the cost of the health of others. For example, by being designed in ways that make users likely to buy particular products, a system might significantly advance the financial interests of (sponsors of) AI companies. These financial interests might be promoted at the expense of the health interests of many others (Strickland 2019).

企業がヘルスケア向けAIシステムを設計する際、システムが健康を損なうものであれば使用されない可能性が高いため、ある程度健康に関心を持つべきであることは理解しています。しかし、それでも、特定の人々の健康を促進する一方で他の人々の健康を犠牲にするような形で設計されることがあります。たとえば、ユーザーが特定の商品を購入する可能性を高めるように設計された場合、AI企業の(スポンサーの)経済的利益が大幅に促進される可能性がありますが、それが他の多くの人々の利益を損なう結果を招くこともあります (Strickland 2019)。

Even where conflicts of interest are eliminated, what we class as ‘evidence’ will inevitably be biased by the particular conceptions of truth that we all have. An AI designer might, for example, be biased either consciously or unconsciously by the assumption that our understanding of medicine should be based largely on scientific data derived from the study of male bodies, thereby ignoring the possibility that this science may not apply to the bodies of others. The upshot might be an AI system that performs well for males, but less well for others. Such gender bias is common both in medicine and in AI systems.

さらに、利害の衝突が排除された場合でさえ、私たちが「エビデンス」とみなすものは、私たち各々が持つ真実に関する特定の概念に偏っていることは避けられません。たとえば、AI設計者は意識的または無意識的に、「医学の理解は主に男性の身体を対象とした科学データに基づくべきだ」との前提に偏っている可能性があります。その結果、男性には優れた性能を発揮する一方、他の人々にはそうではないAIシステムが生まれるかもしれません。このようなジェンダーに基づく偏りは、医療やAIシステムの両方でよく見られる問題です。
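To make this concrete, here is a minimal sketch in Python (illustrative only: the data, the ‘marker’ feature, and the threshold model are invented for this example and are not from the sources cited here) of how one might check a diagnostic model’s accuracy separately per gender group. A model whose overall accuracy looks acceptable can still perform markedly worse for a group under-represented in its training data.

    # Illustrative per-group audit of a hypothetical diagnostic model.
    # The 'model' and the records below are invented for this sketch.

    def audit_by_group(records, predict):
        """Return the model's accuracy for each demographic group."""
        stats = {}
        for group, features, truth in records:
            correct, total = stats.get(group, (0, 0))
            stats[group] = (correct + (predict(features) == truth), total + 1)
        return {g: correct / total for g, (correct, total) in stats.items()}

    # Hypothetical model fitted mostly to male bodies: its cut-off happens
    # to separate cases well for males, but not for others.
    predict = lambda features: features["marker"] > 0.5

    records = [
        ("male",   {"marker": 0.8}, True),  ("male",   {"marker": 0.2}, False),
        ("female", {"marker": 0.4}, True),  ("female", {"marker": 0.3}, False),
    ]
    print(audit_by_group(records, predict))
    # {'male': 1.0, 'female': 0.5} -- accurate for males, much less so for others

Reporting performance per group, rather than as one aggregate figure, is one simple way to surface the kind of gender bias described above.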

The important thing is to separate unjustifiable from justifiable bias, and to avoid discarding, without serious critical scrutiny, sources of bias from cultural traditions that are different from one’s own. Sometimes, unjustified bias cannot be eliminated without causing a greater injustice. In such situations, one must try to recognise (the possibility of) bias, and try to ensure that the injustice is the lesser evil compared to another injustice (Loughlin 2008).

重要なのは、正当化できない偏りと正当化できる偏りを区別し、自分とは異なる文化的伝統からの偏りを、十分に批判的に検討することなく排除しないことです。場合によっては、正当化できない偏りを排除することが、別の大きな不正義を生む場合もあります。このような状況では、偏りの可能性を認識し、その不正義が他の不正義と比べて小さいものであることを確認する必要があります (Loughlin 2008)。

Image 2: CC BY-SA 3.0; Evidence based by Nick Youngson; Pix4free; https://pix4free.org/photo/31575/evidence-based.html

Reflection:

How would you separate evidence from falsehood?

考察の時間

エビデンスと誤情報をどのように区別しますか?

An AI system may recommend that a London bus should be red as it might have been ‘trained’ on images of red London buses. Similarly, it might recommend that Wenxin Keli should be used to treat patients with atrial fibrillation from data showing that many patients with atrial fibrillation take Wenxin Keli. However, it is important to separate correlation from causation. The fact that two things are frequently correlated does not imply that one causes the other. To count as evidence in health care, one must establish that taking a particular substance or receiving a particular treatment causes a particular outcome (Deckers 2023: chapter 6, particularly 6.4-6.6).

AIシステムが「ロンドンのバスは赤色であるべき」と推奨する場合、それはロンドンのバスが赤色である画像をもとに「学習」している可能性があります。同様に、AIシステムが心房細動患者に「Wenxin Keli(温心顆粒という中国でしばしば用いられている薬剤)」を使用すべきと推奨する場合、その根拠は多くの心房細動患者がこの薬を服用しているというデータかもしれません。しかし、相関関係が因果関係を示すわけではありません。ヘルスケアにおけるエビデンスとして評価されるには、特定の物質の摂取や治療が特定の結果を引き起こすことを立証しなければなりません(Deckers, 2023: 第6章、特に6.4-6.6)。
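The gap between correlation and causation can also be shown with a few lines of simulation (a sketch with invented numbers, not real clinical data). Below, a hidden confounder, disease severity, drives both the taking of a remedy and the outcome, so remedy use correlates with worse outcomes even though, by construction, the remedy does nothing at all.

    # Correlation without causation: a hidden confounder drives both variables.
    # All numbers are invented for illustration.
    import random

    random.seed(0)
    rows = []
    for _ in range(10_000):
        severity = random.random()                 # hidden confounder
        takes_remedy = severity > 0.6              # sicker patients take the remedy
        bad_outcome = random.random() < severity   # outcome depends on severity only
        rows.append((takes_remedy, bad_outcome))

    def bad_outcome_rate(takes):
        group = [bad for t, bad in rows if t == takes]
        return sum(group) / len(group)

    print(f"bad outcomes among remedy takers: {bad_outcome_rate(True):.2f}")
    print(f"bad outcomes among non-takers:    {bad_outcome_rate(False):.2f}")
    # The remedy correlates with worse outcomes although it has no effect;
    # only a controlled experiment could isolate its causal contribution.

The same logic explains why the frequent co-occurrence of Wenxin Keli use and atrial fibrillation, by itself, establishes nothing about whether the drug helps.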

3. How to navigate the black box of AI? / AIのブラックボックス問題にどう対処するか?

Even where AI systems may be reliable, they may not be transparent. Some talk about the black box of AI to refer to this lack of transparency (Von Eschenbach 2021). Some AI systems are deliberately designed in that way, as the companies that develop them may be keen to protect their intellectual property. They may not like to give away their trade secrets. Although this is understandable, health care workers face a tricky dilemma here: should they trust AI systems when it is not clear how their outputs (e.g. treatment recommendations) are produced?

AIシステムが信頼できる場合であっても、それが必ずしも透明性を持っているとは限りません。この透明性の欠如を指して、AIの「ブラックボックス」と表現することがあります(Von Eschenbach, 2021)。一部のAIシステムは、設計した企業が知的財産を保護し、自社の機密を守りたいと考えるために、意図的にこのように設計されています。このような理由は理解できるものの、ヘルスケア従事者にとっては厄介なジレンマを引き起こします。たとえば、治療の推奨など、AIシステムがどのように出力を生成しているのか不明確な場合、従事者はそのAIシステムを信頼すべきでしょうか?

Image 3: CC BY-SA 4.0; Krauss Vector: Pduive23; https://upload.wikimedia.org/wikipedia/commons/5/5b/Blackbox3D-withGraphs.svg

Reflection:

Would you ever use an AI system if you were unclear what its recommendations were based on?

考察の時間

もしAIシステムの推奨が何に基づいているのか不明なとき、それでもあなたはそのシステムを使用しますか?

Sometimes, this lack of explainability is not due to a company wanting to hide how its systems work, but due to an inherent feature of some systems, for example those that rely on ‘machine learning’. Here, AI systems are fed huge amounts of data, and the machine ‘learns’ to find patterns in the data that may help health care workers to identify diseases or to make treatment decisions. In the UK, a TBS (‘transplant benefit score’) model is currently being used to help health care workers make decisions about which patients to allocate organs to (Lee et al. 2023). A new AI system has been shown to be able to do a better job, at least if we define ‘better’ here in terms of organs lasting longer. However, the problem is that those who designed it do not understand how it manages to do this (Wingfield et al. 2020). A patient might ask here why they did not deserve priority over another patient in transplant decisions.

一部のAIシステムにおける説明不可能性は、企業が仕組みを隠そうとしているのではなく、「機械学習」などのシステム固有の特徴によるものです。この場合、AIシステムには膨大なデータが投入され、機械がそのデータのパターンを「学習」することで、疾病の特定や治療方針の決定に役立てます。たとえば、イギリスでは現在、TBS(臓器移植の利益スコア)モデルが、臓器をどの患者に割り当てるかを決定するために使用されています(Lee et al., 2023)。新しいAIシステムは、少なくとも臓器の生着持続期間を基準とする場合、これより優れた結果を出せることが示されています。しかし、問題は、設計者自身がそのAIシステムがどのようにその結果を出しているのか理解していない点です(Wingfield et al., 2020)。このような状況で、移植の優先順位がなぜ他の患者よりも低いのかを尋ねられた場合、患者にどのように説明するのでしょうか?

Reflection:

Reflect for a moment on what a health care worker might tell the patient in this situation.

考察の時間

このような状況で医療従事者が患者に何を伝えるべきか考えてみてください。

The health care worker would need to reply that they do not understand the output either. Rather than health care workers relying blindly on AI systems, research (e.g. randomised controlled trials) might be carried out to check whether a particular AI system makes better decisions than other ways in which decisions could be made. Where such trials have been carried out, a health care worker might say to a patient that they do not understand the reason behind the decision, but that a study has shown that the machine performed better than a team of people deciding without the help of AI. They would also need to explain to the patient what criterion or criteria had been used to define ‘better’. It might mean, for example, either that organs last longer or that those who receive the transplanted organs suffer fewer side-effects.

医療従事者は、この出力の理由を理解していないと答える必要があるかもしれません。しかし、医療従事者がAIシステムを盲目的に頼るのではなく、ランダム化比較試験などの研究を実施することで、特定のAIシステムが他の方法と比較してより良い決定を下すかどうかを確認することができます。このような試験が実施された場合、医療従事者は患者に対し、決定の背後にある理由を理解していないが、研究によりそのAIシステムが従来の方法よりも優れていることが示されたと説明することができます。また、患者に対し、「より優れている」と判断された基準についても説明する必要があります。たとえば、臓器の生着持続期間が長いか、移植を受けた患者の副作用がより少ないかといった基準です。
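As a sketch of this point (a hypothetical simulation with invented data and policies; this is neither the UK TBS model nor the AI system in the studies cited above), the comparison below scores two allocation policies on one pre-declared criterion, mean graft survival, which is exactly the kind of yardstick a trial would have to declare in advance.

    # Hypothetical comparison of two allocation policies on one declared
    # criterion (mean graft survival, in years). Everything here is invented.
    import random

    random.seed(1)
    patients = [{"age": random.randint(20, 75), "frailty": random.random()}
                for _ in range(1000)]

    def survival_if_transplanted(p):
        # Invented outcome model: younger, less frail patients keep grafts longer.
        return max(0.0, 20 - 0.15 * p["age"] - 8 * p["frailty"] + random.gauss(0, 2))

    def committee_policy(pool):    # proxy for 'sickest first': oldest patients first
        return sorted(pool, key=lambda p: -p["age"])[:100]

    def black_box_policy(pool):    # opaque score; we only observe its selections
        return sorted(pool, key=lambda p: 0.15 * p["age"] + 8 * p["frailty"])[:100]

    for name, policy in [("committee", committee_policy), ("black box", black_box_policy)]:
        chosen = policy(patients)
        mean_years = sum(survival_if_transplanted(p) for p in chosen) / len(chosen)
        print(f"{name}: mean graft survival {mean_years:.1f} years")

Which policy counts as ‘better’ depends entirely on the declared criterion; scoring urgency or fairness instead of graft survival could reverse the ranking, which is why the criterion must be explained to patients.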

4. How to use AI data responsibly? / AIデータをどのように責任を持って使用するか?

By using AI systems, health care workers are also able to gather far more patient data than before, to analyse it better, and to store it more easily for long periods of time. The handling of AI data comes with significant moral risks. These include inadvertent data loss, deliberate sharing of data, and the inappropriate use of data for research. Even if confidentiality is an important value, it may need to be traded off against other values, for example beneficence (well-being). The classical model of consent is that patients should provide consent for specific therapeutic goals or research projects. This might be questioned, as many ascribe value to the fact that AI systems can pool data gathered from multiple sources and store it with greater ease, and for a much longer time. Data that are not useful today may become useful in the future. The use of data for as yet unspecified research projects, possibly far in the future, would be problematic if we hold on to the idea that patients should be allowed to consent only to specified research projects.

AIシステムを使用することにより、医療従事者はこれまで以上に多くの患者データを収集し、それをより詳細に分析し、長期間簡単に保存することが可能になります。しかし、AIデータの取り扱いには重大な倫理的リスクが伴います。これには、データの偶発的な損失、故意のデータ共有、研究へのデータの不適切な使用などが含まれます。機密保持は重要な価値ですが、しばしば他の価値、たとえば「善行」(患者のウェルビーイング)との間でバランスを取る必要があります。従来の同意モデルでは、患者は特定の治療目的や研究プロジェクトに同意を与えるべきとされています。しかし、AIシステムがさまざまな情報源から収集されたデータを統合し、それをより容易に、また長期間保存できる点に価値を見出す人も多くいます。現在の段階では役立たないデータが、将来においては有益になる可能性があるのです。今後の未特定の研究プロジェクトのために長期間データを使用することは、「患者は特定の研究プロジェクトにのみ同意すべき」という考え方に基づけば問題となるかもしれません。

Imagine that an AI system helps you to find out that a particular patient is at greater risk of developing cancer than the average patient, for example because the system has analysed images of the patient’s tissues and found that some are pre-cancerous. The patient does not know about this.

次のような状況を想像してください。AIシステムが患者の組織の特定の画像を分析し、一部が前がん状態であることを発見することで、ある患者が平均的な患者よりもがん発生のリスクが高まっていると判断したとします。しかし、この情報が患者には知らされないとしたらどうでしょう。

Image 4: CC0 1.0; cancer cells as viewed under a microscope; rawpixel.com

Reflection:

Reflect for a moment on what you might do with this newly acquired information.

考察の時間

この新たに得られた情報をどのように扱うべきか考えてみてください。

This finding might be highly relevant for the patient if the patient could be screened for cancer more regularly because of it, and if early detection of cancer might help in the treatment. This is not always guaranteed. For example, thanks to recent technological breakthroughs in AI, more patients can be diagnosed with thyroid cancer. In spite of this, prognoses for patients suffering from thyroid cancer have not improved significantly (Ho et al. 2015; van Deen et al. 2023). Moreover, some cancers can resolve without any treatment, a phenomenon known as spontaneous remission (Radha and Lopus 2021). While scientists think that this is rare, improved technology to diagnose cancer may identify more cancers that would have resolved without any treatment. Before health care workers decide to share information with patients, it is therefore important to assess whether it might be better for the patient not to receive the information, particularly when it might make them anxious.

この発見は、患者にとって非常に重要である可能性があります。たとえば、この情報をもとに患者が定期的にがんスクリーニングを受けることで、早期発見が治療に役立つ場合があります。ただし、それが必ずしも保証されるわけではありません。たとえば、AIの最近の技術的進歩により、より多くの患者が甲状腺がんと診断されるようになりました。しかし、それにもかかわらず、甲状腺がん患者の予後は劇的には改善していません(Ho et al., 2015; van Deen et al., 2023)。さらに、一部のがんは治療なしに自然と消失する場合もあります。この現象は「自然寛解」として知られています(Radha and Lopus, 2021)。科学者たちはこれが稀であると考えていますが、がん診断技術が向上することで、治療を必要としないがんがより多く発見されるかもしれません。そのため、医療従事者が患者と情報を共有する前に、その情報が患者にとって本当に有益かどうかを慎重に評価することが重要です。特に、それが患者を不安にさせる可能性がある場合にはなおさらです。
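The ‘spurious improvements in survival rates’ reported by Ho et al. (2015) can be illustrated with invented numbers. If screening detects a cancer two years earlier but treatment does not change the date of death, measured survival after diagnosis rises even though nobody lives a day longer, a distortion known as lead-time bias; counting indolent cases that would never have caused harm inflates the statistics further.

    # Lead-time bias in miniature (invented numbers).
    # The patient dies at age 70 regardless of when the cancer is diagnosed.
    death_age = 70

    survival_clinical = death_age - 68   # diagnosed clinically at 68 -> 2-year survival
    survival_screened = death_age - 66   # AI screening finds it at 66 -> 4-year survival

    print(survival_clinical, survival_screened)
    # 2 4: measured 'survival' doubles while the lifespan is unchanged.

    # Overdiagnosis compounds this: counting indolent cancers that never cause
    # death inflates survival rates without benefiting a single patient.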

5. What are the ethical issues related to the use of carebots? / ケアボット(介護ロボット)の使用に関する倫理的問題は何か?

In a rapidly ageing society, for example in Japan, relatively few young people must care for an increasing number of older people. Care robots (or ‘carebots’) have been designed to take over some of this care work (Robson 2019). This raises some dilemmas.

急速に高齢化が進む社会、たとえば日本では、比較的少ない若者が増加する高齢者人口を支えなければなりません。このような状況で、ケアロボット(または「ケアボット」)は、一部の介護業務を代替する目的で設計されています(Robson, 2019)。しかし、これによりいくつかのジレンマが生じます。

Reflection:

Reflect on the following questions:

  • Are carebots able to provide good care?
  • Might there be situations where they cannot do so?
  • Might the extensive use of carebots in the (health) care sector lead to problematic ideas about what human care should consist of?

考察の時間

以下の問いについて考えてみてください。

  • ケアボットは良いケアを提供することができるでしょうか?
  • どのような状況ではそれができない可能性があるでしょうか?
  • ヘルスケア分野でケアボットを広く使用することが、人間によるケアの本質に関する問題を引き起こす可能性はありますか?

Image 5: CC-BY 2.0; Marco Verch; Älterer Mann sitzt neben humanoidem Roboter in einem Café; https://ccnull.de/index.php/foto/aelterer-mann-sitzt-neben-humanoidem-roboter-in-einem-cafe/1102624

Carebots may be able to provide good care, but it does not seem appropriate to hold the view that they could replace the care that human beings can provide. A carebot might do a very good job of lifting a heavy person in and out of bed, for example. Indeed, it might do a much better job than a human carer who lacks the physical strength possessed by the AI device. However, a carebot would not seem to be able to provide human care. How could essential attributes of care, such as compassion, love, and tenderness, be shown by an AI device? An AI system might be able to simulate these traits, which may lead the human being who is being cared for to think that they receive real care. Potentially, this might lead to misconceptions about what ‘AI care’ is and about what ‘human care’ is.

ケアボットが良いケアを提供できる場合もありますが、人間が提供できるケアを完全に代替できると考えるのは適切ではないように思われます。たとえば、非常に重い人をベッドから持ち上げて移動させる場合、ケアボットは非常に良い仕事をするかもしれません。実際、肉体的な力が不足している人間の介護者よりも優れた結果を出す場合もあります。しかし、ケアボットが人間のケアを提供することは難しいでしょう。ケアの本質的な要素である「思いやり」「愛情」「優しさ」をAIデバイスが示すことはどうすれば可能でしょうか?AIシステムがこれらの特性をシミュレーションすることはできるかもしれません。それにより介護を受ける人が本物のケアを受けていると感じることさえあるでしょう。しかしその結果、「AIによるケア」と「人間によるケア」が何であるかについて誤解を生じさせる可能性が出てきます。

6. Is the use of AI for health care sustainable? / ヘルスケア向けAIの使用は持続可能か?

The feeding of large amounts of data into AI systems also raises other issues, for example related to the notion of sustainability. This notion has both a social and an ecological component.

膨大な量のデータをAIシステムに取り込むことは、社会的および環境的な観点から「持続可能性」という概念に関連する問題を引き起こします。

Socially, the question must be asked whether current work that is being carried out in the AI sector can and, more importantly, ought to be sustained. The development of ‘machine learning’, for example, relies on large amounts of data being fed into computer systems, which requires a lot of time and may not be the most interesting type of work. It may also not be paid well, raising the question whether some people are being exploited to develop AI and what ought to be done about this. Heilinger et al. (2024, p. 206) comment, for example, that ‘the financial gains made from AI-based systems are distributed extremely unequally, as can be shown by comparing the annual financial gains of, say, Amazon’s CEO on the one extreme end and the underpaid labourers working under dangerous and exploitative conditions in the cobalt mines on the other extreme end’.

社会的に生じてくる問いは、現在AI分野で行われている作業が継続可能かというもの、さらにもっと重要なものとして、そもそも継続されるべきかという問いです。たとえば、「機械学習」の開発では、大量のデータをコンピュータシステムに入力する必要があり、多くの時間が必要です。この作業は最も興味深い種類の仕事とは言えないかもしれず、十分な賃金が支払われない場合もあります。そのため、AIの開発において一部の人々が搾取されている可能性があり、この問題に対してどのような対策を講じるべきかが問われます。Heilingerら(2024)は次のようにコメントしています。「AIベースのシステムから得られる経済的利益は極めて不平等に分配されており、その例としてAmazonのCEOが得る年間収益と、危険で搾取的な条件で働くコバルト鉱山の労働者を比較することができる。」

Ecologically, the development of AI also raises questions, as vast natural resources are used to build hardware. This, as well as the development and use of software, requires energy, much of which is generated from dwindling natural resources, emitting toxic and dangerous gases in the process (Bolón-Canedo et al. 2024).

環境的には、AIの開発も大きな課題を引き起こします。たとえば、ハードウェアの製造には膨大な天然資源が必要です。さらに、ソフトウェアの開発および使用もエネルギーを必要とし、多くのエネルギーが有限の天然資源を利用して生成されており、その過程で有毒で危険なガスが放出されています(Bolón-Canedoら、2024)。

Image 6: CC BY-NC-SA 4.0; Photo by Etienne Girardet on Unsplash; https://unsplash.com/?utm_source=medium&utm_medium=referral

Reflection:

What do you think should be done to ensure that the development and use of AI systems is sustainable?

考察の時間

AIシステムの開発と使用が持続可能であることを保証するために、どのような対策が必要だと思いますか?

One thing that could and should be done is to make AI systems more energy-efficient. However, this does not guarantee that the development of AI systems will become more sustainable. In this regard, Wu et al. (2022) point out that ‘despite the significant operational power footprint reduction, we continue to see the overall electricity demand for AI to increase over time — an example of Jevon’s Paradox, where efficiency improvement stimulates additional novel AI use cases’. Similarly, Heilinger et al. (2024) make the point that AI systems in general are unlikely to be sustainable as long as the socio-economic situation favours permanent economic growth.

ひとつ可能な対策は、AIシステムをよりエネルギー効率の良いものにすることです。しかし、これによってAIシステムの開発がより持続可能になることは保証されません。この点について、Wuら(2022)は次のように指摘しています。「運用のエネルギー消費が大幅に削減されたにもかかわらず、AIの電力需要は時間とともに増加し続けている。これは効率の向上が新たなAIの利用場面を惹起するというJevonの逆説の一例である。」同様に、Heilingerら(2024)は、現代の社会経済状況が恒久的な経済成長を追求する限り、AIシステムが持続可能になる可能性は低いと指摘しています。
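The Jevons point is simple arithmetic, sketched below with invented numbers: if the energy needed per AI query halves while cheaper queries stimulate three times as many of them, total energy demand still rises by half.

    # Jevons paradox in miniature (invented numbers).
    energy_per_query = 1.0        # arbitrary units, before the efficiency gain
    queries = 1_000_000

    baseline_total = energy_per_query * queries

    energy_per_query *= 0.5       # efficiency improvement: half the energy per query
    queries *= 3                  # cheaper queries stimulate new use cases

    new_total = energy_per_query * queries
    print(new_total / baseline_total)   # 1.5 -> demand rose despite the efficiency gain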

7. Does AI change human self-understandings or identities, and does it do so for better and/or for worse? / AIは人間の自己認識やアイデンティティをどのように変えるのか、そしてそれは良い変化なのか悪い変化なのか?

In spite of this lack of sustainability, Heilinger et al. (2024, p. 210) also claim that ‘AI is likely to increasingly influence and shape human lives in the future’, which raises the question whether it might change human self-understandings or even human nature, and whether it might do so for better or for worse.

持続可能性の欠如にもかかわらず、Heilingerら(2024)は、「AIは今後ますます人間の生活に影響を与え、社会を形成するようになる可能性が高い」と主張しています。これにより、AIが人間の自己認識やさらには人間の本質そのものを変えるかもしれないという問いが生じます。そして、これが良い変化なのか悪い変化なのかについても議論が必要です。

Reflection:

Reflect for a moment on how the self-understandings or identities/characters of health care workers and patients who use AI might differ from those who do not use AI systems at all.

考察の時間

AIを使用する医療従事者や患者の自己認識やアイデンティティが、AIを全く使用しない場合とどのように異なる可能性があるか考えてみてください。

While it is hard to say how AI might alter the self-understandings or characters of health care workers and patients, it is likely that the use of AI systems alters people in profound ways. For example, the time that a doctor spends looking at a screen competes with the time that the doctor communicates with patients, which may affect the self-understandings of both. A patient who uses the internet to look up health-related information may also be transformed by what they read, which may influence how they approach a doctor, for example by enabling them to ask more relevant questions. A patient with a prosthetic limb that incorporates features of an AI system may also alter their understandings of themselves as their body is, arguably, enhanced by the prosthetic, which enables the person to move better.

AIが医療従事者や患者の自己認識や特性にどのような影響を与えるかを断言するのは難しいですが、AIシステムの使用が人々を大きく、深く変化させる可能性は高いでしょう。たとえば、医師が患者とコミュニケーションを取る時間がスクリーンを見つめる時間と競合する場合、これが医師と患者の双方の自己認識に影響を及ぼすかもしれません。他方で、患者がインターネットで健康に関する情報を調べることで、その知識によって自身の考え方や態度が変化し、医師に対してより的確な質問を行えるようになる可能性もあります。さらに、AIシステムを搭載した義肢を使用する患者が、自分の身体が補完され、動きやすくなったと認識することで、自己認識が変わるかもしれません。

A more profound change might be brought about by a system that detects levels of dopamine in the body and that administers medication to boost dopamine levels in Parkinson’s patients, who may go on to live with fewer symptoms of the disease because of it. An even more significant change might be brought about by an AI system that would detect the presence of particular (quantities of) hormones in the body, for example oxytocin, and release chemicals to restore any imbalances at appropriate times to facilitate pro-social behaviour. The identity of a person who may be quite anti-social might be altered quite significantly if they suddenly become more pro-social because of it.

さらに深い変化は、ドーパミンの量を検出し、それを補う薬剤を投与するAIシステムを使用する場合に見られるかもしれません。このようなシステムは、パーキンソン病患者が症状を軽減して生活することを可能にします。また、特定のホルモン(たとえばオキシトシン)の存在量を検出し、不均衡を修正する化学物質を適切なタイミングで放出するAIシステムがあれば、社会的行動を促進することができるかもしれません。反社会的な傾向があった人が、これによって急に社会的になる場合、その人のアイデンティティは非常に大きく変化する可能性があります。

Image 7: CC0; Low Section of Person with Prosthetic Leg; https://www.pexels.com/photo/low-section-of-person-with-prosthetic-leg-9623428/

AI is not the only technology that can profoundly impact human identity. For most of evolutionary history, human beings developed in environments without artificial light, for example. This raises the question whether we are adapted to living in artificially lit environments, even though such environments have also provided significant benefits. Like artificial light, AI is likely to transform human beings significantly. This raises other interesting questions.

AIは人間のアイデンティティに深く影響を与える唯一の技術ではありません。たとえば、人類の進化のほとんどの期間、人工的な光源のない環境で育まれてきました。このことは、人工的な光に適応しているのかどうかという疑問を投げかけますが、人工的な光が多くの恩恵をもたらしていることも事実です。同様に、AIは人類を大きく変化させる可能性が高い技術であり、他にも興味深い疑問を提起します。

Reflection:

Reflect for a moment on these questions:

  • What is artificial?
  • How are different artificial things connected to each other?
  • How is what is artificial related to what is moral?

You might explore this theme further in Deckers 2023: chapter 8.3.

考察の時間

以下の質問について考えてみてください。

  • 何が「人工的」であると言えるのでしょうか?
  • さまざまな人工物同士はどのように関連しているのでしょうか?
  • 人工的なものと道徳的なものはどのように関係しているのでしょうか?

このテーマについては、Deckers(2023)の第8章3節でさらに詳しく探求することができます。

8. Do people produce conscious entities when they create AI systems? / AIシステムを作成する際、人々は意識を持つ存在を生み出しているのか?

Another interesting question is whether advances in AI technology might alter the AI systems that are being designed in very significant ways. Some scholars suggest that, at some point in the future, AI systems might perhaps become conscious or have experiences (Llorca Albareda et al. 2024). As consciousness/experience has been associated with (greater) moral status, AI systems might then perhaps no longer be regarded as objects that one can manipulate freely (objects that should be valued for their instrumental value), but as sentient entities that must be regarded as having a higher moral status because of it (that should be valued as ‘moral patients’ for their intrinsic value) (Deckers 2023: chapter 1.4).

AI技術の進歩により、AIシステムが非常に大きな変化を遂げる可能性があります。一部の学者は、将来的にAIシステムが意識を持ったり、経験を有したりする可能性があると示唆しています(Llorca Albaredaら、2024)。意識や経験は(より高い)道徳的地位と結びついているため、その場合AIシステムは、自由に操作できる物(道具的価値ゆえに評価されるべきもの)ではなく、意識を持つがゆえにより高い道徳的地位を持つ存在(内在的価値ゆえに「道徳的被行為者(moral patient)」として評価されるべきもの)とみなされる可能性があります(Deckers, 2023: 第1章4節)。

Image 8: CC BY-ND 4.0; commander data star trek emoji | AI Emoji Generator; https://emojis.sh/

A different question is whether they might even be subjects with moral agency, or moral agents. Jecker and Nakazawa (2022, p. 770) remark that ‘the Japanese saying, Yaoyorozu-no-Kami (8 million gods), expresses the idea that the Japanese see gods in everything’, and they suggest that Japanese people may be more open than many others to the idea of some AI systems being animated. Imagine an AI system that looks a great deal like a real doctor: a social robot with human-like features, such as a voice that simulates a human voice and a shape that resembles that of a human being (like Lieutenant Commander Data, an android or humanoid robot featuring in the Star Trek television series). Some patients may come to regard such an AI machine as a real doctor, but would it be a real doctor (Llorca Albareda et al. 2024)?

一方で、それらが単なる道徳的被行為者にとどまらず、道徳的行為能力を持つ主体、すなわち「道徳的エージェント」になり得るかどうかという別の問いも存在します。Jecker and Nakazawa(2022)は、「日本の言葉である『八百万の神(やおよろずのかみ)』は、あらゆるものに神性が宿るという考えを表しており、日本人は多くの他国民よりもAIシステムを生命的存在として捉える可能性がある」と述べています。たとえば、人間に似た特徴(たとえば人間の声に似た音声や、人間の形状を模倣した外観)を持つ社会的ロボットが登場すれば、患者の中にはこのようなAI機械を本物の医師としてみなす人もいるかもしれません。しかし、それは本当に「医師」と言えるのでしょうか?(Llorca Albaredaら、2024)

Reflection:

Reflect for a moment on whether an AI machine might ever be a moral decision-maker. If so, who would be to blame if a bad decision were made: the AI system (now regarded as an AI doctor), the company that designed it, the person who decides to use it, or some combination of these?

考察の時間

AI機械が道徳的な意思決定を下すことが可能になるとしたら、その場合、誤った決定が下されたときに誰が責任を負うべきでしょうか?AIシステム(AI医師とみなされる存在)、それを設計した企業、それを使用した人、またはこれらの要素の組み合わせでしょうか?

Some nonhuman animals have been put on trial in the past.

過去には、一部の動物(人間ではない)が裁判にかけられた事例もあります。

Image 9: in the public domain, from Chambers (1864), depicting a sow and her piglets being tried for the murder of a child. The trial is said to have taken place in 1457: the mother was found guilty and the piglets were acquitted.

Might the time now have come for some AI systems to carry some blame? A lot of philosophical work is currently being done to address whether some AI systems might be moral patients or moral agents. The important message for (trainee) health carers might be this: the burden of proof should be on those who claim that AI systems might carry (part of) the blame. There does not seem to be sufficient evidence to suggest that human beings should be exculpated from any wrong decisions that they may make with the use of AI systems.

もしかすると、現在ではAIシステムに一部の責任を負わせる時代が来たのでしょうか?現在、多くの哲学的研究が「AIシステムが道徳的被行為者や道徳的エージェントとなる可能性」について検討しています。医療従事者(および今後医療従事者になる皆さん)への重要なメッセージは以下の通りです。責任がAIシステムにあると主張する者に立証責任があるべきです。AIシステムの使用による誤った決定から人間が責任を免れるべきだと示唆する十分な証拠は現時点ではありません。

AI systems cannot be blamed as they are not moral agents. Takahiro Nakajima (2024) writes rightly that ‘AI does not understand the “meaning” of the character strings it presents at all’ and that ‘chatting with AI is ultimately a monologue, not a dialogue’. However, one should be mindful that some AI system designers may try to dodge responsibility for their creations by blaming the systems, rather than those who produced them.

AIシステムは道徳的主体ではないため責任を負うことはできません。中島隆博(2024)は、「AIは提示する文字列の『意味』を全く理解していない」「AIとの対話は最終的には独り言であり、対話ではない」と的確に指摘しています。しかし、一部のAI設計者が、自らの責任を回避するためにAIシステムに責任を押し付けることがないよう注意する必要があります。

9. Conclusion / まとめ

This learning resource was designed to stimulate ethical discussion of how health care might be improved and undermined by the development of artificial intelligence (AI) and how AI systems ought to be designed and used to promote health care. Topics included how to ensure that AI for health care is evidence-based, how one might navigate the black box of AI, how one might use AI data responsibly, what the ethical issues are related to the use of carebots, whether the use of AI for health care is sustainable and what should be done to ensure social and ecological sustainability, as well as whether AI changes us and whether we produce conscious entities when we create AI systems. Health care workers and patients may wonder about many other topics related to AI. However, the main problems and opportunities associated with the use of AI systems in health care may be captured in this resource.

この学習教材は、ヘルスケアを向上させると同時に損なう可能性のある人工知能(AI)の発展について、またAIシステムがどのように設計され、ヘルスケアの促進のために使用されるべきかについての倫理的議論を喚起することを目的としていました。具体的には、以下のようなトピックを扱いました。

  • ヘルスケア向けAIがエビデンスに基づいていることをどのように保証するか?
  • AIのブラックボックス問題にどのように対処するか?
  • AIデータをどのように責任を持って使用するか?
  • ケアボット(介護ロボット)の使用に関する倫理的問題は何か?
  • ヘルスケア向けAIの使用は持続可能か?その社会的および環境的持続可能性をどのように保証するか?
  • AIは私たちをどのように変えるのか、そして私たちはAIシステムを作成する際に意識を持つ存在を生み出しているのか?

医療従事者や患者は、AIに関連する他の多くのトピックについても疑問を抱くかもしれません。しかし、ヘルスケアにおけるAIシステムの使用に伴う主要な問題や機会は、この教材でおおむね網羅されています。

Exercises/Group Discussion Questions / 演習・グループディスカッション

The following list of questions can be used to stimulate personal reflection, classroom discussion or essay writing:

1. What would you do to check whether an AI system that is being used in health care might provide good information?

2. Would you ever use an AI system in health care if you did not understand what the outputs of the system were based on? If so, what would be the conditions for using it?

3. What should health care workers and researchers do to ensure that they use AI data responsibly?

4. Would you ever condone the use of carebots? If so, what would be the conditions?

5. What should designers and users of AI systems in health care, as well as regulators, do to ensure that the use of such systems is sustainable?

6. How might the self-understandings or the nature of health care workers and patients be affected by the use of AI systems in health care?

7. Do you think that people produce conscious entities when they create AI systems, and what might be the moral implications of creating conscious AI systems?

このリストは、個人ワーク、授業でのグループディスカッション、レポート作成に応用することができます。

  1. ヘルスケアに使用されるAIシステムが正確な情報を提供することを確認するために、どのような手段を講じますか?
  2. AIシステムの出力の根拠が不明な場合でも、それをヘルスケアで使用することに賛成しますか?その場合、どのような条件が必要ですか?
  3. 医療従事者や研究者がAIデータを責任を持って使用するためには、何をすべきですか?
  4. ケアボットの使用を容認しますか?容認する場合、その条件は何ですか?
  5. ヘルスケア向けAIシステムの設計者、ユーザー、そして規制当局は、その使用が持続可能であることをどのように保証すべきですか?
  6. ヘルスケア従事者や患者の自己認識や性質は、AIシステムの使用によってどのように影響を受ける可能性がありますか?
  7. AIシステムを作成する際、人々は意識を持つ存在を生み出していると思いますか?その場合、意識を持つAIシステムを作成することの道徳的な意味は何でしょうか?

References / 参考文献

Bolón-Canedo, V., Morán-Fernández, L., Cancela, B. and Alonso-Betanzos, A., 2024. A review of green artificial intelligence: Towards a more sustainable future. Neurocomputing, 599 (September), 1-10.

Chambers, R. 1864. The Book of Days: A Miscellany of Popular Antiquities in Connection with the Calendar, Including Anecdote, Biography, & History, Curiosities of Literature and Oddities of Human Life and Character. W. & R. Chambers.

Deckers, J. 2023. Fundamentals of Critical Thinking in Health Care Ethics and Law. Ghent: Owl Press.

Heilinger, J.C., Kempt, H. and Nagel, S., 2024. Beware of sustainable AI! Uses and abuses of a worthy goal. AI and Ethics, 4(2), 201-212.

Ho, A.S., Davies, L., Nixon, I.J., Palmer, F.L., Wang, L.Y., Patel, S.G., Ganly, I., Wong, R.J., Tuttle, R.M. and Morris, L.G., 2015. Increasing diagnosis of subclinical thyroid cancers leads to spurious improvements in survival rates. Cancer, 121(11), 1793-1799.

Jecker, N.S. and Nakazawa, E., 2022. Bridging east-west differences in ethics guidance for AI and robotics. AI, 3(3), 764-777.

Lee, E.G., Perini, M.V., Makalic, E., Oniscu, G.C. and Fink, M.A., 2023. External validation of the United Kingdom transplant benefit score in Australia and New Zealand. Progress in Transplantation, 33(1), 25-33.

Llorca Albareda, J., García, P. and Lara, F., 2024. The moral status of AI entities. In Lara, F. and Deckers, J. (eds). Ethics of Artificial Intelligence, Cham: Springer, pp. 59-83.

Loughlin, M., 2008. Reason, reality and objectivity – shared dogmas and distortions in the way both ‘scientistic’ and ‘postmodern’ commentators frame the EBM debate. Journal of Evaluation in Clinical Practice, 14(5), 665-671.

Moyano-Fernández, C. and Rueda, J., 2024. AI, sustainability, and environmental ethics. In Lara, F. and Deckers, J. (eds). Ethics of Artificial Intelligence, Cham: Springer, pp. 219-236.

Nakajima, T., 2024. Listening to the Daoing in the Morning. Paper presented at the Kyoto Institute of Philosophy, November 2, 2024.

Radha, G. and Lopus, M., 2021. The spontaneous remission of cancer: Current insights and therapeutic significance. Translational Oncology, 14(9), 101166.

Robson, A., 2019. Intelligent machines, care work and the nature of practical reasoning. Nursing Ethics, 26(7-8), 1906-1916.

Strickland, E., 2019. IBM Watson, heal thyself: How IBM overpromised and underdelivered on AI health care. IEEE Spectrum, 56(4), 24-31.

van Deen, W.K., Spiegel, B.M. and Ho, A.S., 2023. A narrative review of decision aids for low-risk thyroid cancer. Ann Thyroid, 8(3), 1-8.

Von Eschenbach, W.J., 2021. Transparency and the black box problem: Why we do not trust AI. Philosophy & Technology, 34(4), 1607-1622.

Wingfield, L.R., Ceresa, C., Thorogood, S., Fleuriot, J. and Knight, S., 2020. Using artificial intelligence for predicting survival of individual grafts in liver transplantation: a systematic review. Liver Transplantation, 26(7), 922-934.

Wu, C.J., Raghavendra, R., Gupta, U., Acun, B., Ardalani, N., Maeng, K., Chang, G., Aga, F., Huang, J., Bai, C. and Gschwind, M., 2022. Sustainable AI: Environmental implications, challenges and opportunities. Proceedings of Machine Learning and Systems, 4, 795-813.

New publication: Deckers, J. Fundamentals of Critical Thinking in Health Care Ethics and Law, Ghent: Owl Press, 2023.

Available from various locations.

General information

As cutting-edge technologies continue to reshape the landscape of health care, we are faced with profound ethical and legal dilemmas on our journey towards a brighter future. This book invites you to develop your critical thinking skills in relation to a number of themes in bioethics and law, including our duties to care for each other, for nonhuman animals, and for the nonhuman world. While the book engages with the law as a source of guidance and food for thought, unlike most publications in health care ethics and law, the emphasis is on the development of critical thinking skills in ethics. Each chapter ends with a list of questions that act as prompts in your own critical thinking journey.

The book is printed on climate-neutral paper. Emissions are offset by supporting a clean drinking water scheme in Zoba Maekel, Eritrea. It supports communities in renovating their boreholes so that people have access to clean water.

I provide the table of contents below, as well as a brief summary of each chapter.

Table of contents

Chapter 1: A short introduction to health care ethics and law

Chapter 2: Autonomy and its limits

Chapter 3: Duties of care, confidentiality, candour, and cost minimisation

Chapter 4: The creation and use of human embryos for human reproduction

Chapter 5: When is it acceptable to use non-human animals to promote human health?

Chapter 6: Research ethics

Chapter 7: Ethics in relation to pregnancy termination

Chapter 8: Is genetic engineering justified?

Chapter 9: Human embryo research in embryonic stem cell and cloning debates

Chapter 10: Ethical and legal issues related to the end of life

Concise summary (chapter-by-chapter)

Chapter 1: A short introduction to health care ethics and law

I argue that there is an urgent need to develop critical thinking skills in health care ethics and law, as the health care needs of a large number of organisms are in jeopardy, in spite of the fact that we have the capacities to address many of them. In order to do so, it is good to reflect upon one’s meta-ethical theory to determine what ethics is about. It is also important to reflect on how one’s values shape one’s principles and theories, and what ethical theory might be best to adopt. While much health care ethics theorising focuses on abstract/formal ethical theories that are applied insufficiently to reality, I argue that it is much more important to reflect upon different axiologies (theories of which concrete things/entities should be valued, and what value each has).

I argue for a theory that includes a deontological (duty-based) and a consequentialist element: the duty to promote positive consequences for one’s own health. This is not accompanied by an individualistic axiology. Rather, this theory is compatible with an axiology that ascribes intrinsic value to all entities. A crucial question here is what the intrinsic values of different things are, and how much value one should give to one entity relative to the value of another entity. Our axiologies are influenced by our reflections on what different entities are, which is the subject of ontology (theory of reality).

I outline two dominant ontologies, mechanistic materialism and dualism. I identify problems with both and sketch an alternative ontology, ‘panexperientialism’, that might both inspire and be inspired by a different outlook on what matters.

The practice of health care ethics is not only shaped by ethics, but also by different health care professions and by the law. This is why health care professionals and patients must take heed of relevant professional guidance and law, while avoiding legalistic approaches to health care.

The chapter concludes by providing some practical tools that can be used in ethical reasoning, including the use of logic, analogies, and thought experiments. These tools are applied to different areas of health care ethics in the ensuing chapters.

Questions raised by this chapter:

1. What are the different meta-ethical theories that have been described in this chapter and why might meta-ethical reflection be important?

2. What is your theory of health care ethics?

3. What does it mean to ascribe intrinsic value, which entities should be valued intrinsically, and how would you weigh up different entities’ values?

4. What ontology do you adopt and how might this inform your ethical theory?

5. What is the relevance of professional guidance and law for health care ethics?

6. Do you agree with the view that logic is important in health care ethics? Justify your answer.

7. Could you provide an example of how an analogy or a thought experiment might be helpful in health care ethics?

8. Why might legalism be a problem?

9. What is your view on the (ir)relevance of slippery slope arguments?

10. ‘Plants are sentient beings. Therefore, plants should be valued intrinsically.’ Do you think that this argument is logically valid?

Chapter 2: Autonomy and its limits

I argue that the concept of autonomy is relevant in health care and that health care professionals should reflect critically on what the law demands from them when human patients are unable to consent due to a lack of autonomy. I also argue that the need to balance the values of autonomy and beneficence can present great difficulties when health care professionals consider the health care interests of children, including their interests in safeguarding. The chapter ends with a discussion of the value of liberty and how it may need to be limited for health reasons in some situations.

Questions raised by this chapter:

1. What should health care professionals do in order to make sure that patients consent?

2. What should health care professionals do in situations where patients lack capacity?

3. Why might it be appropriate for health care professionals to consider advance refusals from patients who lack capacity?

4. In what circumstances would you condone restricting someone’s liberty for health reasons?

5. Do you agree with the view that there are some aspects of care that patients should not be allowed to refuse?

6. How should health care professionals decide whether or not to provide health care treatment to a child?

7. What counts as child abuse?

8. What should health care professionals do when they think that continued treatment of an infant is not in the infant’s best interests and when the parents insist on its continuation?

9. Do you agree with the view that a competent child’s views on medical treatment should be allowed to be overridden?

10. How should a health care professional handle a situation where they discover that a child has been subjected to female genital mutilation?

Chapter 3: Duties of care, confidentiality, candour, and cost minimisation

I discuss the duties of care, confidentiality, candour, and cost minimisation. As health care professionals can fail in these duties intentionally or through being reckless, careful attention must be paid to how these duties can be fulfilled and to how some of these might need to be balanced with other moral considerations.

Questions raised by this chapter:

1. How can health care professionals ensure that they act in accordance with their duties of care?

2. What should be demonstrated to determine whether a health care professional has breached their duty of care?

3. What should health care professionals do to safeguard patients’ right to confidentiality?

4. In what situations might it be appropriate for health care professionals to divulge confidential patient information to third parties?

5. What should a health care professional do if the police ask for information about a patient to investigate a potential offence that took place on a road?

6. How can health care professionals ensure that they act in accordance with their duty of candour?

7. When might it be appropriate to mislead patients?

8. What might be the benefits and disadvantages of using the notion of QALY in decisions about how to allocate funding for different treatments?

9. How would you decide between offering a lung transplant to a 75-year-old person who recently stopped smoking and a 25-year-old person who has never smoked when both are clinically equally suitable for transplantation?

10. Which criteria would you use to discriminate between patients who may need intensive care due to infection with a coronavirus when not all patients can receive treatment on the intensive care unit?

Chapter 4: The creation and use of human embryos for human reproduction

I provide an overview of the views adopted in the Warnock Report and in UK law on the use of embryos for reproductive purposes. I show that the arguments underpinning this framework do not provide a firm foundation for legislation. I recognise that, while it is one thing to undermine a range of arguments that have been used to deny high moral status to the young embryo, it is another matter to make a convincing case for why the young embryo should be granted such status. It is important to recognise that people who debate human embryo research often portray the young embryo as if he or she were an abstract, alien entity, the product of those who experiment with substances in test tubes in laboratories. The moral position that young embryos lack high status might be favoured by this mode of representation. At the same time, however, some modern technologies, for example, ultrasound sonography, allow us to represent embryos and foetuses in more concrete ways than has been possible until recently. This might perhaps make it more likely for some to be able to empathise with them, and prompt them to assign a higher status to them than they might have done otherwise. My view is that we should grant equal moral significance to all human beings. I am uncomfortable with the idea that we should value some human beings more than others. I also argue that health care professionals and patients should consider a number of other issues related to fertility treatments, including the use of PGD, sex selection, the creation of ‘saviour siblings’, mitochondrial donation, and issues related to who should be able to access (information about) such treatments.

Questions raised by this chapter:

1. What are the main issues associated with the creation and use of human embryos for human reproduction?

2. What is the UK legal framework on embryo research, what are its ethical underpinnings, and how has it influenced other jurisdictions?

3. What is the position on embryo research developed by the Committee of Inquiry into Human Fertilisation and Embryology?

4. How has the Committee of Inquiry into Human Fertilisation and Embryology influenced different laws on embryo research?

5. What is the argument from sentience? Is it valid?

6. What is the argument from individuality? Is it valid?

7. What is the argument from twinning? Is it valid?

8. What are the key issues associated with pre-implantation genetic diagnosis?

9. When, if ever, should pre-implantation genetic diagnosis be acceptable to diagnose disability?

10. When, if ever, should pre-implantation genetic diagnosis be acceptable to diagnose the sex of an embryo?

11. When, if ever, should pre-implantation genetic diagnosis be acceptable to diagnose whether an embryo is a suitable tissue match?

12. When, if ever, should mitochondrial donation be allowed?

13. What should be the conditions for someone to be allowed to receive fertility treatment?

14. What should be the conditions for someone to be allowed to donate gametes?

15. When, if ever, should those who are conceived with donated gametes have access to genetic information about their donors, and what information should they be allowed to access?

Chapter 5: When is it acceptable to use non-human animals to promote human health?

In this chapter I grapple with the question of when it might be acceptable to use non-human animals to promote human health. I start with the observation that people use non-human animals in various ways to promote human health, and explore two common ways in which they are used: their use in research and their use for human nutrition.

With regard to the research usage, I sketch some laws that legislate the use of non-human animals, highlighting in particular that widespread support for the principle of necessity and the 3Rs calls into question many projects that use non-human animals, given that such animals are poor models for human beings. In addition, I engage with the question whether non-human animals should be used to model human health and illness, even if they might be good models, where I argue that an account of the moral standing of different non-human animals must be based on evolutionism. In this light, it would be particularly problematic to use non-human animals for research that does not benefit them where the animals are closely related to us.

With regard to the human use of non-human animals for food, I argue that, if the underlying reasoning is applied consistently across different domains, EU legislation on the use of non-human animals for research would lend significant support for significant change in laws on the use of non-human animals for food, resulting in a drastic curtailment in the human consumption of animal products. I sketch the moral arguments underpinning qualified moral veganism, which is defended against some challenges. The chapter also considers the ethical issues related to radically novel ways in which animal products could be produced, including the development of lab-grown meat.

Questions raised by this chapter:

1. What might be the reasons behind the fact that most books on health care ethics and law do not consider the use of non-human animals to promote human health? Why do you (dis)agree with them?

2. What moral theory do you advocate in relation to the human use of non-human animals?

3. What are the key issues to consider when human beings use non-human animals for research?

4. What is the relevant law on the use of non-human animals for research, and what legal change do you advocate, if any?

5. What do the positions of Singer, Regan, and Midgley entail for the use of non-human animals for research?

6. What would the EU law on the use of non-human animals for research imply for the human use of non-human animals for food, if the law in relation to the latter was made consistent with the law in relation to the former?

7. What moral reasons might someone adopt in support of carnism and in support of qualified veganism?

8. What arguments could be used to support or undermine the use of non-human animals for human nutrition?

9. How would you evaluate the morality of technologies that aim to produce lab-grown meat?

10. What useful functions, if any, might be fulfilled by committees that evaluate particular projects to use non-human animals? Justify your answer.

Chapter 6: Research ethics

In this chapter I engage with generic issues that apply to research projects, as well as with more specific issues that pertain to research that is carried out in clinical health care contexts. I identify the benefits and disadvantages of different types of clinical studies and discuss whether clinical trials should only take place when there is clinical equipoise. A failure to conduct RCTs in particular may be unethical and may result in a stagnation of ideas, a misplaced trust in unsystematised clinical experience, little development in available treatments, and a waste of resources. I also discuss the relevance of complementary therapies, question the use of alternative treatments, set out why research ethics committees play a valuable role in health care research, and how those who sit on such committees might go about evaluating research projects.

Questions raised by this chapter:

1. Why is consent important in relation to research?

2. Do you think research should ever be allowed without consent from participants? Justify your answer.

3. What do you think about the view that any research should be allowed, as long as participants consent?

4. What are the key ethical features of the relevant laws in relation to health care research?

5. Why should many RCTs never take place? Justify your answer.

6. What safeguards should there be to make sure that RCTs do not expose participants to disproportionate risks?

7. Should people ever be incentivised to participate in research studies?

8. What do you explain to potential participants when you want to recruit them to your study?

9. Should children be allowed to participate in research? Justify your answer.

10. What is your view about the opinion that health care trials should only be allowed if there is clinical equipoise?

Chapter 7: Ethics in relation to pregnancy termination

I propose how abortion legislation in the United Kingdom should be modified if it was informed by the view that all unborn human beings should be granted a right to life that should be allowed to be trumped in a limited number of situations. I argue that the current distinctions in the legal provisions for ‘able’ and ‘disabled’ foetuses as well as for ‘implanted’ and ‘unimplanted’ embryos cannot be maintained, and that greater protection of all human life must be enshrined into law. I also argue that there should only be a limited right to conscientious objection to participate in the provision of abortion services. There should be no right to object conscientiously to providing abortion services when there is a great risk that a pregnant woman’s life might be lost should the pregnancy be continued, and no right to refuse pregnancy counselling and referral of those who satisfy any of the revised legal grounds.

I recognise that, whether abortion law is altered in line with this proposal both depends and should depend on whether a valid democratic process is instigated towards legal reform. It is my hope that, if abortion legislation were amended in accordance with this proposal, health care professionals would provide those services that women should be entitled to, give serious consideration to facilitating or providing abortions that should be allowed, and reject those that should be prohibited.

Questions raised by this chapter:

1. What are the salient points of the law on abortion in the different jurisdictions of the United Kingdom?

2. What should a health care professional consider when a patient requests an abortion?

3. Why might abortion pose a moral problem for health care professionals?

4. What do you think should be the legal boundaries regarding the right to conscientious objection related to abortion?

5. Should abortion be allowed without any restrictions? Justify your answer.

6. What shape should the law on abortion have? Justify your answer.

7. What is your position on the legality of using medicines that might be abortifacient?

8. If one adopts human egalitarianism, would it imply that abortion should never be allowed? Justify your answer.

9. Do you think men should have any say in relation to whether or not an abortion should be allowed? Justify your answer.

10. Should everyone who wants it have free access to IVF treatments? Justify your answer.

Chapter 8: Is genetic engineering justified?

In this chapter I discuss ethical issues related to genetic engineering. While there is no doubt that genetics has advanced our understanding about health and illness a great deal, technologies that use the science of genetics can both promote as well as undermine health. Physical health can be improved and undermined, both directly and indirectly, through genetic engineering. The same applies to mental health. With regard to the mental health impacts of genetic engineering, a significant concern that has received relatively little attention in the literature is the concern that we ought to avoid creating unnatural things, and that genetic engineering is unnatural.

Although nothing is unnatural in the sense that everything is part of nature, I argue that the widely used distinction between the natural and the unnatural is nevertheless not meaningless. A semantic distinction between the natural and the unnatural can be drawn whereby the latter pertains to that which is affected by human culture and the former to everything else. More importantly, I argue that the fact that human culture pervades many natural events does not eliminate the distinction, but that it is appropriate to situate the natural and the unnatural at opposite ends of a spectrum. Where an entity is situated along this spectrum depends on the likelihood with which its specific essence might have come about counterfactually, which in this case means naturally. I distinguish between three gradations of unnaturalness, in spite of this continuity.

This distinction between the natural and the unnatural has moral relevance. While we must adopt a prima facie duty to safeguard the integrity of nature, the integrity of nature should not be protected at all costs. Doing so would stifle all human activity. In order to flourish, Homo faber must alter nature. However, an action that alters a natural entity’s teleology more significantly is, ceteris paribus, more problematic compared to another action.

This discussion is highly relevant to evaluate genetic engineering. As genetic engineering projects normally involve type 1 instances of the unnatural, they are morally suspect. In spite of this, the example of Huntington’s disease shows that this does not imply that genetic engineering is necessarily wrong. However, if a type 2 or type 3 intervention existed that could enhance the quality of life of the person in question equally effectively, we ought to prefer it.

Questions raised by this chapter:

1. Why might the question of what is natural be relevant for a discussion of genetic engineering?

2. Do you agree with the view that there are gradations of artificiality? Justify your answer.

3. How might differences in degrees of naturalness be morally relevant?

4. How might genetic engineering be used to benefit human health?

5. How might genetic engineering undermine human health?

6. Do you approve of the creation of Herman the bull?

7. Would you approve of using genetic engineering on a human embryo to correct the gene that predisposes for Huntington’s disease, if such were possible?

8. What do you think of the view that there is nothing new in genetic engineering as nature has engineered itself for a very long time?

9. What do you think of genetic engineering projects that aim at making some non-human animals better models to study human disease?

10. Would you eat genetically engineered plants or animals? Justify your answer.

Chapter 9: Human embryo research in embryonic stem cell and cloning debates

In this chapter, I provide an overview of the views that have been expressed by advisory bodies and members of the Westminster Parliament in support of legal developments to allow research on young human embryos in the United Kingdom. While UK law has inspired similar legal reform in many other countries, this chapter shows that the arguments underpinning this framework do not provide a sound basis for the current legal position. My view on the status of the young human embryo is at odds with the views underpinning this framework. Rather than denying the embryo high moral status, I adopt the view that we should consider all human beings to be equal, instead of making the value that should be assigned to a human being dependent on how many properties, capacities, or experiences that human being might possess.

Questions raised by this chapter:

1. How would you sum up the moral reasoning underpinning the Human Fertilisation and Embryology (Research Purposes) Regulations 2001, and what do you make of the arguments that were developed to support these?

2. What are the two arguments from potentiality in relation to the status of the young human embryo and do you think that these arguments are sound?

3. What is the argument from capacities in relation to the status of the young human embryo and why do you (dis)agree with this argument?

4. What is the argument from probability in relation to the status of the young human embryo and why do you (dis)agree with this argument?

5. What is the argument from mourning in relation to the status of the young human embryo and why do you (dis)agree with this argument?

6. What is the argument from ensoulment in relation to the status of the young human embryo and why do you (dis)agree with this argument?

7. What policy would you like to adopt in relation to human embryo research? Justify your answer.

8. Would you favour altering the law on human embryo research so that human embryos can be used for research when they are older than 14 days? Justify your answer.

9. What is the relevance of the scientific advances that have been developed on the basis of human embryo research for the ethics of embryo research?

10. Do you agree with laws that allow the creation of human admixed or hybrid embryos? Justify your answer.

Chapter 10: Ethical and legal issues related to the end of life

In this chapter I consider when treatment might be futile, whether it may ever be appropriate to withhold or to withdraw treatment from a patient, whether pain relief that might hasten one’s death should be taken or provided, whether physician-assisted suicide and euthanasia should be legal options, and how health care professionals might cater for the spiritual needs of patients. These issues are difficult and emotionally challenging. In a culture where ageism is challenged and where speaking about death and the dying process is more widely accepted, people may well feel better able to cope with the prospect of dying and to make decisions that promote well-being when doing so is hard.

Questions raised by this chapter:

1. How might health care professionals go about determining whether or not a treatment is futile?

2. How might a health care professional justify withdrawing treatment from a patient?

3. Do you agree with the withdrawal of treatments for patients who are in a persistent vegetative state? How might you try to justify your answer?

4. Do you think that there are aspects of care that should never be withheld or withdrawn from a patient, and if so, which aspects? How would you justify this?

5. What do you think of the view that English law on assisting suicide discriminates against disabled people?

6. Do you think assisting suicide should be allowed? Justify your answer.

7. Do you think euthanasia should be allowed? Justify your answer.

8. If assisting suicide were allowed, what do you think should be the conditions?

9. If euthanasia were allowed, what do you think should be the conditions?

10. Do you think there may be situations where those who aid in the suicide of a patient should (not) be prosecuted?

11. How might the doctrine of double effect be applied to the provision of pain relief to a dying patient?

12. Do you agree with the view that withdrawing artificial hydration and nutrition from a terminally ill patient should always be accompanied by terminal sedation?

13. What should health care professionals do when the parents of competent children demand life-saving treatment that the child refuses?

14. What do you think of the view that good palliative care is always preferable to treating the patient in order to end their life?

15. How might health care professionals optimally look after the spiritual needs of patients who adopt Christianity, Islam, Hinduism, Sikhism, Judaism, or Buddhism?

Monica Consolandi, Fondazione Bruno Kessler: See Monica’s review in the journal Theoretical Medicine and Bioethics.

Julia Hynes, Kent and Medway Medical School: “In my view this is a book which will prove to be of benefit to healthcare students nationally, and even internationally, as it teaches the student how to think in a critical fashion. It may be used as a core text or one of two core texts. With prescribed reading set and with tutorial facilitation, it will enable the student to create and analyse philosophical arguments through the question prompts at the end of each chapter.”

Ghaiath Hussein, Trinity College Dublin: “What a fantastic achievement, … this book will undoubtedly be a valuable resource for us and our students.”

Artificial intelligence, health care, and ethics. A conference at Newcastle University

Location: Old Library Building, room 2.22

Date: 8 September 2023

Funded by:

Programme


11.00-11.30  Stephen McGough and Jan Deckers: Introduction
11.30-12.00  Adam Brandt & Spencer Hazel: Ghosting the shell: exorcising the AI persona from voice user interfaces
12.00-12.30  Emily Postan: Good categories & uncanny kinds: ethical implications of health categories generated by machine learning
12.30-13.30  Lunch break
13.30-14.00  Emma Gordon: Moral expertise and Socratic AI
14.00-14.30  Shauna Concannon: Living well with machines: critical perspectives on communicative AI
14.30-15.00  Angus Robson: Moral communities in healthcare and the disruptive potential of advanced automation
15.00-15.30  Andrew McStay: Automating Empathy and the Public Good: The Unique Case of Health
15.30-16.00  Koji Tachibana: Virtue and humanity in an age of symbiosis with AI
16.00-16.30  Jamie Webb: Trust, machine learning, and healthcare resource allocation
16.30-17.00  Jan Deckers: Closure
18.00 onwards  Social dinner for presenters

Abstracts

Stephen McGough and Jan Deckers: Introduction

As a scholar researching the areas of machine learning, big data, and energy-efficient computing, Stephen will provide a sketch of what AI actually is. Jan will introduce the day, looking back at the presentations at the first event at Chiba University, and looking ahead at the presentations that will follow today.


Adam Brandt & Spencer Hazel: Ghosting the shell: exorcising the AI persona from voice user interfaces

Voice-based Conversational User Interfaces (CUIs or VUIs) are becoming increasingly ubiquitous in our everyday interactions with service providers and other organisations. To provide a more naturalistic user experience, Conversation Designers often seek to develop in their conversational agents what has been glossed as humanlikeness, namely design features that serve to anthropomorphise the machine. Guidelines suggest, for example, that designers invest time in developing a persona for the conversational agent, selecting an appropriate persona for the target user, and adapting conversational style to the user (for discussion, see Murad et al., 2019). Such anthropomorphic work may somewhat mitigate the challenges of requiring users to behave as if they are speaking with a human rather than at a machine. However, attempting to trigger a suspension of disbelief on the part of the user, allowing them to entertain the idea that what they are speaking at has some kind of sentience, raises a number of ethical concerns, especially where users may end up divulging more than they would like.

This presentation reports on work carried out by the authors in partnership with healthcare start-up Ufonia. The company has developed a voice assistant, Dora, for carrying out routine clinical conversations across a number of hospital trusts (see Brandt et al. 2023). Using insights and methods from Conversation Analysis, the collaboration has worked to ensure that these conversation-framed user activities achieve the institutional aims of the phone call, while at the same time providing an unchallenging experience for the user. Conscious of the ethical dimension to which this gives rise, we discuss here how we help to pare down the importance of agent persona (managing the risk of users treating the agent as a person), while curating a more natural user experience by modelling the speech synthesis on the normative patterns of everyday (institutional) talk.

References:

Brandt, A., Hazel, S., Mckinnon, R., Sideridou, K., Tindale, J. & Ventoura, N. (2023) ‘From Writing Dialogue to Designing Conversation: Considering the potential of Conversation Analysis for Voice User Interfaces’ [Online].

Murad, C., Munteanu, C., Cowan, B.R. & Clark, L. (2019) Revolution or Evolution? Speech Interaction and HCI Design Guidelines. IEEE Pervasive Computing. 18 (2), 33–45.


Emily Postan: Good categories & uncanny kinds: ethical implications of health categories generated by machine learning

One area of particular interest in health applications of machine learning (ML) is assisting not only in the detection of disease or disease risk factors, but also in generating new or refined diagnostic, prognostic, risk, and treatment response categories. Deep learning (DL), a powerful subset of ML, potentially offers marked opportunities in this area. This paper interrogates the ways in which using DL in this way may result not only in the classification of images, risks, or diagnoses, but also in reconfigured and novel ways of categorising people. It asks why categorisation by AI – and deep learning algorithms in particular – might have ethically significant consequences for the people thus categorised. The paper approaches these questions through the lens of philosophical treatments of ‘human kinds’. It asks to what extent AI-generated categorisations function, or could come to function, as human kinds. More specifically, it seeks to characterise how human kinds predicated on machine learning algorithms might differ, and differ in ethically significant ways, from existing human kinds that come about through social and historical processes and practices. It explores the potential impacts of AI-generated kinds on members’ experiences of inhabiting and exercising agency over their own categorisations. As such, this paper pursues a line of ethical inquiry that is distinct from, though complementary to, more familiar concerns about the risks of error and discrimination arising from AI-enabled decision-making in healthcare. The paper concludes that while the impacts of machine learning-generated person-categorisations may not be unequivocally negative, their potential to alter our identity practices and group memberships needs to be weighed against the assumed health dividends and accounted for in the development and regulation of trustworthy AI applications in healthcare.


Emma Gordon: Moral Expertise and Socratic AI

A central research question in social epistemology concerns the nature of expertise and the related question of how expertise in various domains (epistemic, moral, etc.) is to be identified (e.g., Goldman 2001; Quast 2018; Stichter 2015; Goldberg 2009). Entirely apart from this debate, recent research in bioethics considers whether and to what extent cognitive scaffolding via the use of artificial intelligence might be a viable non-pharmaceutical form of moral enhancement (e.g., Lara and Deckers 2020; Lara 2021; Gordon 2022; Rodríguez-López and Rueda 2023). A particularly promising version of this strategy takes the form of ‘Socratic AI’ — viz., an ‘AI assistant’ that engages in Socratic dialogue with users to assist in ethical reasoning non-prescriptively. My aim will be to connect these disparate strands of work in order to investigate whether Socratic-AI assisted moral enhancement is compatible with manifesting genuine moral expertise, and how the capacity of Socratic AI to improve moral reasoning might influence our criteria for identifying moral experts.


Shauna Concannon: Living well with machines: critical perspectives on communicative AI

Artificial Intelligence (AI) is having an ever-greater impact on how we communicate and interact. Over the last few years, smart speakers, virtual personal assistants and other forms of ‘communicative AI’ have become increasingly popular, and chatbots designed to support wellbeing and perform therapeutic functions are already available and widely used. In the context of health and social care, attention has begun to focus on whether an AI system can perform caring duties or offer companionship. As machines are positioned in increasingly relational roles, what are the societal and ethical implications, and should these interactions be modelled on human-human interaction?


In this talk, I will review recent developments in communicative AI, ranging from empathetic chatbots to storytelling applications tailored for children. Through this exploration, I will examine key risks and the potential for harm that these innovations may entail, and consider the implications that arise due to the ontological differences between human-machine and human-human communication. Finally, I will consider what is required to guide more intentional design and evaluation of these systems, with a more central focus on interaction, moral care and social conduct.


Angus Robson: Moral communities in healthcare and the disruptive potential of advanced automation

The importance of healthcare organizations as moral communities is increasingly supported in recent research. Advanced automation has the potential to impact such communities both positively and negatively. This study looks at one specific aspect of such potential impacts, here termed second-person displacement. Using Stephen Darwall’s idea of the second-person standpoint, interpersonal events are proposed as a basic condition of the moral community, one that is threatened by automation in certain conditions of second-person displacement. This threat occurs in two particular respects: pervasiveness and humanness. If these two kinds of threat are understood and acknowledged, they can be mitigated by good design and planning. Some principles are suggested to assist strategies for the responsible management of increasingly advanced automation, including protecting critical contact, building the resilience of the moral community, and resisting deception.


Andrew McStay: Automating Empathy and the Public Good: The Unique Case of Health

Once off-limits, the boundaries of personal space and the body are being probed by emotional AI and technologies that simulate properties of empathy. This is occurring in worn, domestic, quasi-private and public capacities. The significance is environmental, in that overt and ambient awareness of intimate dimensions of human life raises questions about human-system interaction, privacy, security, civic life, influence, regulation, and the moral limits of datafying those intimate dimensions. This talk will discuss the historical context of these technologies, current technological development, existing and emergent use cases, and whether remote profiling of emotion and datafied subjectivity is acceptable, and (if so) on what terms. While automated empathy is problematic in method and application, especially in commercial uses, its use in healthcare is observed to raise unique questions. Tabling some of the legal and moral concerns, the talk will also flag citizen opinion obtained by the Emotional AI Lab (which McStay directs) on the use of automated empathy and emotional AI in healthcare. This will be argued to be instructive for debate on the moral limits of automating empathy.


Koji Tachibana: Virtue and humanity in an age of symbiosis with AI

The social implementation of AI will accelerate in the near future. In such a society, we humans will communicate, work and live together with AI. What is human virtue or humanity in such a symbiotic way of living with AI? Examining several discussions of possible social implementations, I consider this question and argue that symbiotic life with AI is an excellent opportunity to understand humanity because we can perceive an absolute difference between humans and AI.


Jamie Webb: Trust, machine learning, and healthcare resource allocation

One use of machine learning in healthcare is the generation of clinical predictions which may be used in healthcare resource allocation decisions. For example, a machine learning algorithm used to predict post-transplant benefit could be involved in decisions regarding who is prioritised for transplant. When conducting interviews with patients with lived experience of high stakes algorithmic resource allocation, patients have occasionally expressed to me that they trusted how prioritisation was determined because they trusted the clinical staff involved in their care. This trust may be grounded in experience of compassionate and competent care. However, these demonstrations of trustworthiness may have nothing to do with the way algorithmic resource allocation decisions are made. Is this a problem? This presentation will explore this question with the use of philosophical theories of trust and trustworthiness, considering the particular challenges machine learning might bring to patient trust and the clinical staff-patient relationship.

Conference: Artificial Intelligence and Ethics: Medicine, Education, and Human Virtues

A conference on 26 May 2023 at Nishi-Chiba Campus, Chiba University, Japan

Program:
13:00-13:10 Opening Remarks
Koji Tachibana, PhD (Faculty of Humanities, Chiba University, Japan)


13:10-13:50 “Ethical challenges and opportunities related to the use of AI in health care”
Jan Deckers, PhD (Faculty of Medical Sciences, the University of Newcastle, UK)

Abstract: Health care decision-making is flawed as health care professionals and patients do not always know all the facts that are clinically relevant, may not be able to interpret facts, and may not be able to evaluate their moral significance. Whilst AI systems may help with health care decision-making by gathering more relevant facts and by helping with interpreting and evaluating data, health (care) may also be undermined by AI. This presentation sketches some significant hurdles that must be overcome to ensure that AI systems promote rather than undermine health (care). These hurdles include rational and emotional ontological confusion about the nature of AI, technological deficiencies, and problems related to how AI systems are being used.


13:50-14:30 “Ethical issues regarding the application of AI to healthcare settings”
Eisuke Nakazawa, PhD (Faculty of Medicine, the University of Tokyo, Japan)

Abstract: Implementation of artificial intelligence in psychiatric care will bring innovations that contribute to patient well-being by reducing the burden on physicians and other healthcare professionals and by improving the accuracy of diagnoses and risk predictions. On the other hand, from an ethical standpoint, AI development research needs to include efforts to review opt-out consent from the perspective of the right to control one’s own information, with dynamic consent in scope to ensure the autonomy of research participants. Medical-technical communication, in which consensus is formed in advance between research developers, health care providers, and the public, including patients, is necessary, and this is required most prominently for the issue of secondary and incidental findings. Issues such as the burden on research participants due to false positives, respect for the right of research participants not to be informed of their results, and the social risk of false positives converge on the question of how to communicate secondary and incidental findings to research participants. It is not unreasonable to be cautious about returning secondary and incidental findings, especially when adequate communication cannot be ensured.


14:30-15:10 “Exploring the ethics of smart glasses: Navigating the future of wearable tech”
Semen Trygubenko, DPhil (Dodrotu Limited, UK)

Abstract: The purpose of this study is to provide an overview of the ethical issues related to the use of smart glasses in order to facilitate decision-making and the formation of knowledge and norms. We identify a wide range of ethical issues, including privacy, safety, justice, changes in human agency, accountability, responsibility, social interaction, power, and ideology. The use of smart glasses is expected to impact individual human identity and behavior as well as social interaction, which must be taken into account when developing, deliberating, deciding on, implementing, and using smart glasses. We consider the issues that apply generally as well as those that arise in the context of the remote-calling functionality available in the Ziru AV smart glasses prototype.


15:10-15:30 Tea/Coffee Break


15:30-16:10 “Can ChatGPT serve as a clinical ethics consultant?”
Yasuhiro Kadooka, MD, PhD (Faculty of Life-Sciences, Kumamoto University, Japan)

Abstract: Generally, healthcare professionals should make a well-balanced value judgment by consulting with colleagues or specialists when confronted by ethically uncertain situations. Currently, some professionals may instead simply ask a conversational large language model AI. This descriptive research aimed to explore the performance of ChatGPT on clinical ethics consultation, an advisory service to support healthcare professionals and patients in identifying, analyzing and resolving ethical dilemmas/issues of daily care. Human clinical ethics consultants participated and asked ChatGPT for advice on an ethically appropriate action in a hypothetical vignette. All conversations between the consultants and ChatGPT were recorded and analyzed qualitatively. Tentatively, this study suggests that a conversational large language model AI can set out general principles/norms of clinical ethics, but may fail to make a holistic assessment of individual patients. The analysis is still ongoing. Detailed findings will be presented at the session.


16:10-16:50 “Upgrading feelings”
Jasmin Della Guardia, MS (Graduate School of Humanities, Chiba University, Japan)

Abstract: Fiction tells us stories about how AI can improve humans by taking them to the next level, e.g. with human-brain interfaces as in Neon Genesis Evangelion, Iron Man, or the Borg in Star Trek. Such fictions portray an exaggerated duality of AI, making us either superhuman or evil juggernauts. However, fiction meets reality, because AI merges more and more with our everyday lives, as tools (AI filters and art) or as (autonomous) operators (in cars, space travel, and robots; e.g. the space robot “CIMON”, which is supposed to cheer up astronauts). The fears and dangers are also real and force a debate, because this technology is changing the way we think about ourselves and also contains human errors. But we are already cyborgs and AI is human too, so we need to discuss the social, psychological, and ethical consequences. To avoid dystopian developments, we need to discuss how enhancing physical ability, attractiveness, creativity, and psychological well-being with AI can make us better people. As an example, we want to examine the influence of AI filters and of interactions with AI as a social other on psychological well-being, the philosophical image of man, and self-perception.


16:50-17:30 “What can humans learn from AI about creativity as an intellectual virtue?”
Ryo Uehara, PhD (Faculty of Informatics, Kansai University, Japan)

Abstract: Creativity has long been an object of consideration in philosophy, especially intellectual creativity as one of the intellectual virtues in virtue epistemology. On the other hand, recent artificial intelligence has shown remarkable creative abilities. Artificial intelligence, being an artifact, cannot be considered to have virtue. Nevertheless, it is expected that humans can learn something about the exercise of creativity as an intellectual virtue from artificial intelligence. This presentation will organize the debate on the creativity of artificial intelligence and clarify the differences from the creativity that can be demonstrated by humans. It will then examine how artificial intelligence can be used to help humans cultivate creativity as an intellectual virtue.


17:30-17:40 Closing Remarks & Announcements


18:20- Social Dinner (T.B.A)

Sponsors: The Great Britain Sasakawa Foundation & JSPS KAKENHI (20H01178)

Organisers: Koji Tachibana and Jan Deckers

What ought to be done to curtail the negative impacts of animal product consumption on the climate crisis, even if COP 26 failed to do so?

In the 2015 Paris Agreement, 196 countries pledged to limit global warming to below 2 degrees Celsius, and preferably to 1.5 degrees Celsius, compared to pre-industrial levels. To achieve the latter goal, countries’ total emissions would need to be reduced by some 45% by 2030 compared to 2010 levels. The COP 26 meeting in Glasgow provided an opportunity to develop a strategy to achieve this, but it failed to do so. Consequently, we are not on track to meet the Paris Agreement goals.


What did participants at COP 26 think about the consumption of animal products?

The food system, holistically considered, accounts for around a quarter of all anthropogenic emissions and can contribute greatly to attempts at mitigation, given the great potential of better land management to store carbon. Recognising the role of agriculture in tackling climate change, the COP 23 meeting, held in Fiji in 2017, decided on a Koronivia Joint Work on Agriculture. Despite this initiative, relatively few discussions in Glasgow addressed the role of the food system in relation to climate change, and even fewer considered the important contribution of the consumption of animal products. In my previous post I pointed out that many non-vegan diets compare poorly with vegan diets in terms of their climate change impacts, as they contribute disproportionately to the release of carbon dioxide, nitrous oxide, and methane, and squander opportunities for carbon sequestration. In this post I report on some discussions that took place at COP 26 in relation to the consumption of animal products.


COP 26 and the moral imperative of a dietary transition

In early November 2021, thousands of people came together in Glasgow, at the 2021 United Nations Climate Change Conference, more commonly known as COP 26, to take forward the 2015 Paris Agreement. The central goal of this agreement was to avoid driving up temperatures by more than 1.5 °C relative to the pre-industrial level. This means that average emissions, measured in carbon dioxide equivalents per person annually, should be no more than about 2 tonnes. As average emissions are currently more than twice that, we are a long way from that goal.
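As a rough sanity check on these figures, the per-person arithmetic can be sketched as follows; the inputs are approximations supplied here, not numbers from the post (global emissions of roughly 50 Gt CO2e per year and a world population of roughly 8 billion):

\[
\frac{50 \times 10^{9}\ \mathrm{t\,CO_2e/yr}}{8 \times 10^{9}\ \mathrm{people}} \approx 6\ \mathrm{t\,CO_2e}\ \text{per person per year},
\]

which is indeed well over twice the roughly 2-tonne per-person level associated with the 1.5 °C goal.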
