Programming the Heart and the Rights of Robots- Emma F.
Until our recent discussions within the metaphysics unit, it felt a bit like I was floundering in the depths of my own metaphysical inquiry. Consolidating some of my own personal questions with others’ was refreshing and helpful in solidifying my thoughts and reframing my direction. In other ways, though, it opened up an almost overwhelming number of additional questions.
For the sake of this blog post, I will mainly focus on my first discussion, as I found it the most constructive. Our group was made up of Claire, Katie, Pourchista, and me, and all of our inquiries focused on what I would call the ‘threshold of humanness’. Claire was concerned with the dividing line between being and Being, while Katie was investigating the role of emotions in creating an ‘authentic self’. And Pourchista, like me, was interested in AI, specifically in the ethics of integrating AI into our own society. What I find interesting is that all of our topics somehow tackle the idea that there is more than one way of existing: as a functioning being (but perhaps not practicing ‘authenticity’) and as an ‘authentic’, self-aware, conscious being.
This discussion took shape as each of us gave a brief description of our topic of inquiry. We prodded one another with questions and made connections between our areas of study. There were a few stand-out conversations that have altered my own course and brought up additional questions.
The first was a conversation that the group had surrounding Claire’s topic. Claire made the distinction between three aspects of our being: the heart (emotional, feeling), the mind (intellectual, judging), and the soul (spiritual). I was prompted to think back to AI; if all of an AI’s actions are sourced in man-made programming, can we still make this three-part distinction? It seems to me that the AIs in Westworld, for example, are programmed to appear as human as possible, which would mean replicating these three aspects in the android. But this has allowed a few questions to pop up for me. Can we program the heart? Or ‘feeling’? I suppose we can program an AI to exhibit behaviours of sadness when it sees an animal being abused, for example, but I also think our ability to ‘feel’ is developed with experience. Our experience with others allows us to develop empathy, fears, anger. And these sorts of experiences are recalled with memory. Can recalling experiences through memory, for AI, improve the capacity to ‘feel’, or to exhibit human emotions? Would those emotions be more genuine, or authentic, than programmed reactions?
Secondly, Pourchista’s points on the ethics of AI were also super interesting. Pourchista believes that it is inevitable that AI will be integrated into our everyday life, and that we will eventually have to develop policies to protect AI rights. In terms of my own study of memory, I found this incredibly relevant. Should we afford AI rights to their own memories? For the androids in Westworld, are we performing a moral disservice by wiping their ‘memories’ each day and allowing them to live in ignorance of their true purpose? A tricky area to traverse, I think.
Overall, I can’t say that many of my initial questions have been answered by these discussions. But hearing the voices of my classmates has, in a way, put my own topic into a larger context; it is easy to see how each of our inquiries interacts with the others. Looking forward to Phil’s Day Off, I will be narrowing my focus towards finding things that resemble answers. My goal is to leave this project with some solid ground to walk on, and I am intent on doing so!