superintelligent agents


karim jebari and joakim lundborg

outline


  • why does it matter?
  • what is a machine agent?
  • the dynamics of agency
  • conclusion






superintelligent ai



value alignment


  • superintelligence
  • general agency

two scenarios



  • spontaneous emergence
  • accidental emergence

minimal agency


intelligence


  • intelligence ≠ agency




agency and desire 


The generality of an agent is related to the productivity of its desires 


productivity 


a desire is productive to the extent that it can direct behaviour in different situations. 

this is often done by generating new desires relevant to the context 

example 


Paperclip AI has a very productive desire. It may seem narrow, but it can direct behaviour in a wide variety of contexts.

AlphaZero has very unproductive desires.
can specialized AI acquire productive desires?

a desire can only be acquired from a set of pre-existing desires; an AI whose desires are constrained to a specific domain cannot acquire desires relevant to other domains.

the humean model

  • belief
  • desire

  • directions of fit


    • belief: mind to world
    • desire: world to mind

learning requires reinforcement


  • the world can reinforce our beliefs, but not our desires
  • desires can only be reinforced "from within"


so spontaneous emergence is wrong


AI cannot become a general agent sui generis

objections


  • the second scenario
  • self-preservation?
  • natural selection
  • pain








conclusions


creating a general AI agent requires a concerted effort






Thank you!

Karim Jebari

jebarikarim@gmail.com
politiskfilosofi.com
twitter.com/karimjebari
