Artificial Intelligence

Could we ever have ethical obligations towards artificially intelligent beings?

I.e., could machines ever be moral patients?

  • The answer to this question depends on your moral framework.

  • Utilitarianism: Can robots feel pain or pleasure?
  • Kantian Ethics: Do robots have autonomy / freedom?
  • Social Contract Theory: Could we stand to gain something if we make a deal with robots?

Pain and Pleasure

  • How do we know whether ANYTHING experiences pain/pleasure?
  1. Behavior
  2. Internal makeup

  • We could definitely design robots to exhibit pain behavior.
    • Internal monitoring system + pre-programmed evasive behaviors.
  • This probably wouldn't be proof that they really experience pain, though.

  • Robots are currently silicon-based, rather than carbon-based, and their internal makeup is very different from our own.
  • But why should it be a matter of moral concern whether something is made of carbon or silicon? (compare: skin color)
  • There is currently no reason to believe that ONLY carbon-based life can support consciousness.

  • So, there is no reason to think that robots could NOT experience pain.
  • However, we also know that a machine might easily be programmed to replicate pain-behavior while feeling nothing.
  • So, there seems to be no way to know whether a robot is really in pain or just "acting like it."
  • Principle of Caution? "If a robot really acts like it's in pain, assume that it is."

Autonomy

  • A classical assumption is that robots/machines do not have autonomy - after all, they're programmed!
  • We tell machines what to do.

  • Two problems with this:
  • Machine learning (ML) - with sophisticated ML techniques, we DON'T simply tell the machines what to do.
  • How do we know that humans aren't simply "programmed" to act certain ways?

  • With machine learning, engineers tell the machines what counts as success or failure - however, they do not tell the machine "what to do."
  • The machine "learns" how to behave by finding patterns in data, or simply by trial and error (reinforcement learning).
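
The split described in these bullets, where engineers specify only what counts as success while the machine discovers its own behavior by trial and error, can be sketched with a toy reinforcement learner (a hypothetical illustration; none of the names or numbers below come from the slides):

```python
import random

def train(reward, n_actions, episodes=2000, seed=0):
    """Trial-and-error learning: the engineer supplies only the
    reward function (what counts as success), never the policy."""
    rng = random.Random(seed)
    value = [0.0] * n_actions   # estimated value of each action
    count = [0] * n_actions
    for _ in range(episodes):
        # Mostly exploit the best-known action, sometimes explore.
        if rng.random() < 0.1:
            a = rng.randrange(n_actions)
        else:
            a = max(range(n_actions), key=lambda i: value[i])
        r = reward(a, rng)
        count[a] += 1
        value[a] += (r - value[a]) / count[a]  # running average
    return max(range(n_actions), key=lambda i: value[i])

# The engineer never says "choose action 2"; the machine finds
# it, because only action 2 reliably earns the full reward.
best = train(lambda a, rng: 1.0 if a == 2 else rng.random() * 0.5,
             n_actions=4)
```

Note that the reward function is the only thing the engineer writes here; the winning behavior is discovered, not programmed.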

  • With sophisticated ML techniques (e.g. deep neural networks), it may literally be true that the engineers do not even know how the machine has managed to solve the problem.

  • Are humans autonomous?
  • Many philosophers believe that all human thoughts and actions are determined by prior physical causes.
  • In brief, how do we know that we aren't simply "programmed" by our genetics and our environment to act in certain ways?

  • So, could a robot be autonomous / free?
  • Well, if autonomy means acting intelligently in a way that is unpredictable to its creators, then yes.
  • If autonomy means totally undetermined by physical causes or innate imperatives, then it isn't clear that humans have that property either.

  • There is no reason to think that robots could not exhibit the same kind of limited / relative autonomy that humans do.

Social Contract

  • The idea of a social contract only makes sense if both parties are free agents (a contract, by necessity, is an agreement between two free parties).
  • Therefore, social contract theory could only apply to robot-human relations if robots are capable of autonomy.
  • If robots cannot be free, then we don't need to compromise with them; we can just program them to obey us.

  • There is no reason to think that autonomy is impossible in machines, but that doesn't mean it necessarily exists, either.
  • So, it's unclear whether social contract theory is applicable to robots.

Two Principles

  • Two plausible principles when thinking about the ethics of A.I.:
  • Principle of Ontogeny Non-Discrimination:
    If two beings have the same functionality and the same conscious experience, and differ only in how they came into existence, then they have the same moral status.
  • In other words, we shouldn't discriminate against machines simply because they were invented and designed by humans.

  • Principle of Substrate Non-Discrimination:
    If two beings have the same functionality and the same conscious experience, and differ only in the substrate of their implementation, then they have the same moral status.
  • In other words, we shouldn't discriminate against machines simply because they are made of silicon, and not carbon.

  • In other words, the moral status of robots should depend on morally relevant facts (consciousness, decision-making, etc.), and not on what they're made of, or how they're made.
  • Comparable to anti-racist and anti-speciesist principles.

Weak vs. Strong AI

  • What does artificial intelligence really involve?
  • Weak AI: Domain-specific problem-solving capacities.
  • Strong AI: Domain-general problem-solving capacities.
  • "Real" intelligence seems to involve a capacity to understand, analyze, and think about ANY subject matter.
  • Strong AI, or artificial general intelligence (AGI), is what most researchers are interested in.

Turing Test

  • Philosopher / computer scientist / mathematician Alan Turing proposed a practical test to determine if a machine is intelligent.
  • Turing Test (imitation game): Let the experimenter have a conversation with someone/something and try to guess if it's a person or a machine. If the experimenter can't tell (the machine can "fool" the experimenter), then we conclude it is intelligent.

  • The basic insight is that if something is indistinguishable from human intelligence, then it must be intelligent.
  • Nevertheless, it seems pretty clear that chatbots can already, or will soon be able to, fool people into thinking they're human.

Moving the Goal Posts

  • One problem with identifying AI is that the criteria keep shifting.
  • People used to think that chess was a good criterion - only intelligent life could master chess.
  • Then, a computer ("Deep Blue") beat a world champion chess player. Now we don't consider that true AI.
  • The same thing happened with Go.
  • What do you think is the best test for true AI? 

Singularity

  • Many researchers discuss the notion of the singularity.
  • The singularity is a theoretical point in history where machines will permanently surpass human intelligence.
  • One thing we might want machines to do is design other intelligent machines.
  • Eventually, a machine will have the capacity to design machines better than human engineers can.
  • That new machine will have the capacity to design even better machines.
  • At this point, the development of machines will surpass human supervision, and there will be an explosion of artificial intelligence.
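
The runaway improvement described in these bullets can be made concrete with a toy growth model (the improvement factor and starting levels are illustrative assumptions, not figures from the slides): if each generation of machine designs a successor some fixed factor more capable, machine capability grows geometrically and must eventually pass any fixed human level.

```python
def generations_to_surpass(human_level, machine_level, improvement=1.5):
    """Toy model: each machine designs a successor `improvement`
    times as capable. Counts generations until machines exceed
    a fixed human design capability."""
    gens = 0
    while machine_level <= human_level:
        machine_level *= improvement
        gens += 1
    return gens

# Starting at 1/100th of human capability with 50% gains per
# generation, machines surpass humans in a dozen generations.
gens = generations_to_surpass(human_level=100, machine_level=1)
```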

Costs and Benefits

  • What is one way that AI could make the world a better place?
  • What is one way that AI could make the world a worse place?
  • Overall, do you think AI will be a force for good in the world, or a danger?

Simulated Reality

Are we living in a computer simulation?

  • In the future, it will be possible to simulate virtual worlds detailed enough to contain conscious agents.
  • If it's possible, then we will probably generate a number of these simulations.
  • Therefore, in the course of history, there will be more simulated worlds (many) than real worlds (just one).
  • Therefore, it is more likely that we are living in a simulated world rather than the real world.
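
The counting step behind this conclusion can be made explicit (the figure of 999 simulations is an illustrative assumption, not a claim from the slides): with one real world and N simulated worlds, and no other evidence about which you inhabit, the chance of being in the real one is 1/(N+1).

```python
from fractions import Fraction

def prob_real(n_simulations):
    """Indifference over one real world plus n simulated worlds."""
    return Fraction(1, n_simulations + 1)

# With 999 simulated worlds, the chance of being in the one real
# world is 1 in 1000.
p = prob_real(999)
```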

Would it matter to you if this were a simulated world?

By Jesse Rappaport
