Artificial Intelligence is becoming an increasingly common part of our lives, whether we like it or not. Whether necessary for our species’ survival or an existential threat, it is clear that this technology is forcing us to consider the questions behind it all: What is the mind? What is consciousness? Are we anthropomorphising inanimate matter, or are we neglecting a sentient being? This paper looks at contemporary discussions surrounding modern AI, such as LaMDA and DALL-E, and how deeply rooted they are in conversations within philosophy and psychology from the last two centuries, specifically those of behaviourism vs. functionalism. As well as examining aspects of the conversation that have been overlooked by AI research, such as psychoanalytical approaches, this paper uncovers rhetoric from all sides of the conversation which in some cases betrays questionable world views.
Since the late 20th century, Artificial Intelligence (AI) has been an exciting yet daunting topic of discussion for many disciplines, and within the last ten years we have seen exponential growth in algorithmic use. In the UK specifically, since 2015, police departments nationwide have begun testing and introducing algorithm-led predictive policing, which uses historical data to identify trends in order to predict crimes. Academics across many disciplines have widely acknowledged the potential for these systems to reinforce existing social bias. However, one critical issue has remained largely unexamined by such academics: the ominous implications of predictive policing algorithms for the victims of sexual violence within a rape culture.
This project offers an alternative criticism of predictive policing algorithms through a feminist lens, and delves into the power dynamics exercised in such a society along with the structures of oppression that may come from them. It further shows that to address rape culture, reform of individual beliefs and systemic power structures is needed, rather than a focus on predicting outcomes. Using Foucault’s disciplinary power, Deleuze’s Societies of Control, and Iris Young’s phenomenological and political philosophy, this project concludes that understanding the lived experience of women is the most effective way to combat rape culture and sexually violent crimes, not predictive policing. The relationship between cultural structures and physical embodiment shows that it is only at the individual level, not through institutions, that we can deconstruct the structures of power that permeate a culture.
The object of this dissertation is artificial intelligence (AI), and in particular it concerns AI risk, or AI safety. I argue for the veracity of Bostrom’s orthogonality thesis (2012) – contextualised with reference to Hume’s (2007) is-ought distinction – and the instrumental convergence thesis (developed initially by Omohundro (2008) in terms of the “Basic AI Drives”). In combination, these theses show that the default outcome of advanced AI (AGI and ASI) is existential catastrophe, and hence the importance of ensuring that the value systems of advanced artificial agents are human compatible. I consider two main approaches to the value alignment problem – direct specification and value learning – and point out the flaws in each. While this project does not offer its own approach to value alignment, the central concern of AI safety, it does emphasise the necessity for AI research to undergo a perspectival shift and focus on the search for one. The AI community should, that is, be concerned foremost with AI safety rather than AI capability.
My Territory: The territory of my essay is artificial intelligence; I will be looking at the progress it has made over the past decade, as well as the controversy it has sparked.
My Object: My object is Sophia, a humanoid robot created in 2017 by Hanson Robotics Limited.
My Concepts: The main concepts I will be using in my project are: Human Being, Personhood, Personal Identity, Persistence, Self-Ownership and Recognition.
Philosophical Thinkers: The first philosopher I will be using to look at my territory is John Locke. I will be using his Essay Concerning Human Understanding (Book II), concentrating on his views on personal identity. The second is Friedrich Hegel; I will be looking at his Phenomenology of Spirit, particularly the sections on his theory of Recognition.
Main objective: I want to see whether we would ever consider granting artificial intelligence the same rights as humans; to do so, I will try to find the necessary and sufficient conditions of personhood and apply them.
I aim to talk about the possibility of consciousness arising within artificial intelligence with reference to two thinkers who have not yet been incorporated into the debate: Immanuel Kant and André Breton. In doing so I hope to uncover new ways of talking about consciousness in less anthropomorphic terms.
Kant: Kant’s transcendental idealism can be used to propose a theory of the minimum requirements for consciousness to arise in artificially intelligent machines. In addition to this, the distinction he outlines between ‘reason’ and ‘understanding’ can be seen as analogous to the Turing Test and the Chinese Room thought experiment and therefore can be used to show the qualitative difference between our human experience of consciousness and any potential consciousness that might arise within artificial intelligence.
Breton: Breton’s Surrealist thought is used as a counterpoint to Kant’s formulaic and systematic approach. The Surrealist practice of automatism raises the question of a difference between human consciousness and any potential consciousness within artificial intelligence, in that it raises the issues of intentionality and the subconscious, something which artificial intelligence may lack.
Aim: To show the importance of phenomenological investigation to the field of AI.
Philosophers: Husserl (micro-world systems in Logical Investigations), Heidegger (being-in-the-world in Being and Time), Dreyfus (problems with AI in various papers), Levinas (the importance of the Other in Totality and Infinity).
The Aim:
The aim of this project is to discuss the likelihood that machines are, by the standards of the Turing test, already intelligent, or are ever likely to be describable as intelligent. If they cannot be described as intelligent, is that a problem with the Turing test itself, or simply something that machines lack?
Territory:
The territory is the realm of artificial intelligence and computers.
CONCEPTS/KEY WORDS:
Thinking Machines: Philosophical implications of artificial intelligence, machines emulating human behaviour, the Turing Test, notions of behaviourism, dualism and materialism, free will and determinism, strong and weak AI, and intelligence.
Mechanical Thinkers: The effect of the rise of technology on human behaviour; the dehumanising effect of treating people as machines in the workplace; the modern emphasis on productivity, efficiency and systematisation; leisure time; the importance of play, playing at work, and modern-day work practices.
OBJECTIVES:
1. To investigate the philosophical implications of artificial intelligence, looking at factors beyond the mathematical workings of a thinking machine, such as notions of intelligence, behaviourism and free will.
2. To understand the philosopher Martin Heidegger’s view of the effect of technology on the world and on humans as a whole, as set out in his essay The Question Concerning Technology.
3. To look at a more modern interpretation of the effects of technology by way of Donald Norman, an expert on the human side of technology, and his book The Invisible Computer.
4. To look at ways of combating the feeling of dehumanisation in using technology, particularly in the workplace, by investigating modern-day work practices that incorporate work and play.
SOURCES: Gottfried Leibniz, Alan Turing, René Descartes, Aaron Sloman, Donald Norman, Martin Heidegger, Herbert Marcuse, the Institute for Play.
PROJECT TERRITORY/FIELD OF EXPLORATION: I will use two examples of companies that have adopted unconventional work practices in order to preserve the well-being of their employees, producing a healthier environment which promotes quality of work rather than quantity: an advertising agency called St Luke’s in London, and a number of companies in the US that have adopted ingenious ways of improving their working environments.
CHANGE: The changes I will show are the developments in the idea of a thinking machine, the rise of technology and the way it affects our lives today, and the difference in thought between Martin Heidegger and Donald Norman.
THE GAP BETWEEN HUMANS AND THINGS: An obvious separation of mind and matter is involved: the implications of modelling a machine on the brain, the difficulty for humans of working with machines that do not function as humans do, and the separation between the individual and society when progress, and society with it, no longer facilitates individuality. My project tries to bridge the gap between humans and computers by establishing a healthier attitude towards them, especially in the workplace.