Free will and moral responsibility

I’m listening to the “Philosophize This!” podcast about consciousness, and (doggone it!) there’s a section on whether free will is an illusion. The host mentioned the alleged connection between free will and moral agency. I.e., if we don’t have free will, we can’t be responsible for our actions.

What came to mind was a society (perhaps in the near future) in which we have humanoid robots.

I can easily imagine a scenario where (1) we do not believe the robots have free will, but (2) we still hold them accountable for their actions. E.g., we would dismantle bad robots, or limit their sphere of action or influence, even if they thought of that as a punishment.

IOW, it doesn’t seem obvious to me that a lack of free will means we are not responsible for our actions.

7 thoughts on “Free will and moral responsibility”

  1. People have been punishing dogs for ages without attributing free will to them, but perhaps one could argue that there is a kind of fictional free agent that is thereby posited. The same could go for robots. I have known people to yell at machines when they know damned well there is no conscious entity there to apprehend the yelling. People live in a world full of fictions which are not truly delusional. Such conduct may not be rational, but that doesn’t seem to matter to most of us.

  2. If there were a robot that needed to be dealt with and it could feel pain, would it be wise to punish the robot for eternity so that it felt pain for some greater good? I can’t imagine what the greater good would be, because all the other robots have no free will and the punishment itself wouldn’t make the robot world better. In that case, the only reasonable solution would be to just obliterate the robot into non-existence… idk… unless there was pleasure experienced by the torturer in torturing the robot.

  3. I can easily imagine a scenario where (1) we do not believe the robots have free will, but (2) we still hold them accountable for their actions.

    I guess I don’t understand that, or I’m being persnickety about language. I write software. It has bugs. All software has bugs. Some you know about, some you don’t. Bug-free software is a goal, but unless you have a really small piece of software, it’s hard to achieve.

    I would never think of saying something like “holding it accountable” about my software. In reality, I, or my employer, should be the one held accountable. The software is a thing. All things break down.

    Would it be immoral to kill a robot? If humans really have no free will, why would it be immoral to kill them? Any more than a chicken? Or a robot?

    1. Think of it this way, then.

      Imagine you’re sponsoring a virtual conference in something like the metaverse (but a version that doesn’t suck). People “arrive” for the conference as avatars. You hire 25 virtual assistants to serve as help — to register people, help them meet good contacts, whatever.

      The virtual assistants are just programs, but in the metaverse they’re basically hired help.

      Some of them don’t behave the way you want. You don’t know how to fix or adjust their programming. You just have to fire them and hire some other virtual assistants.

      The question of free will is irrelevant. They’re not performing the way you want, so you hold them accountable for it.

      1. QUOTE: They’re not performing the way you want, so you hold them accountable for it.

        Who is the “them”? Wouldn’t you hold the virtual assistant’s owner/service provider and/or programmer accountable?

        1. Perhaps at a secondary level, but the first thing you would do is fire the virtual assistants.

          1. One might get rid of the virtual assistants, but it seems unclear how they can be held accountable for their performance, given that they were created (by another entity) to perform in that manner.
