This is a quad chart containing some random notes about where AI might be headed, (dis)organized along two axes: Agency and Consciousness.
Axes: Not Agentic / Agentic (rows) × Not Conscious / Conscious (columns).

Conscious, Not Agentic:
- Seems less likely than either agentic option. My impression is that we are further along in operationalizing agency than we are in operationalizing consciousness.

Agentic, Not Conscious:
- Plenty of commercial interest in this, e.g.:
  - LangChain
  - BabyAGI
  - McKinsey blog post
- Seems likely that we will be somewhere in this quadrant within the next couple of years. Less certain whether we will be close to human-level agency, for example whether AI systems will be able to operate autonomously on multi-year planning horizons.

Agentic and Conscious:
- Seems to be Jürgen Schmidhuber's view.
- From an impersonal perspective, seems more aesthetically satisfying than the agentic-but-not-conscious scenario.
- Seems unpredictable.
- Whether "adding consciousness" tends to improve predictability seems like an interesting question.
Increasing Consciousness →
Arguments for Increasing Consciousness
Keywords:
- Consciousness Studies
- Consciousness Prior (Bengio)
- Integrated Information Theory
- Global Workspace Theory
Increasing Agency ↓
Arguments for why agency might increase
Keywords:
- AI Agents