This is a quad chart containing some random notes about where AI might be headed, (dis)organized along two axes: Agency and Consciousness.

Not Agentic / Not Conscious
  • Oracle systems
  • Language Models without agent scaffolding
  • This is pretty satisfying for my personal aesthetics; it seems economically valuable, and it will take time for humans to adapt and fully integrate.
  • However attractive this option is, it doesn't seem stable: it's too easy to make an MVP agentic system.

Not Agentic / Conscious
  • Seems less likely than the agentic options. My impression is that we are further along in operationalizing agency than we are in operationalizing consciousness.

Agentic / Not Conscious
  • Plenty of commercial interest in this. E.g.
  • Seems likely that we will be somewhere in this quadrant in the next couple of years. Less certain whether we will be close to human-level agency, for example whether AI systems will be able to operate autonomously on multi-year planning horizons.

Agentic / Conscious
  • Seems to be Jürgen Schmidhuber's view.
  • From an impersonal perspective, seems more aesthetically satisfying than the agentic-but-not-conscious scenario.
  • Seems unpredictable.
  • Whether "adding consciousness" tends to improve predictability seems like an interesting question.

    Increasing Consciousness →

    Arguments for Increasing Consciousness

    Keywords:

    • Consciousness Studies
    • Consciousness Prior (Bengio)
    • Integrated Information Theory
    • Global Workspace Theory

    Increasing Agency ↓

    Arguments for why agency might increase

    Keywords:

    • AI Agents