Details

Special Event

I am speaking at NYC AI From Scratch. This is Dan's group (Dr. Daniel Barulli, for those who don't know him). He is a friend of our group and has hosted multiple events with me, so let's return the favor and support his event.

We may be living through a major turning point in human history.

For most of our existence, intelligence and agency were biological phenomena: the products of evolution, metabolism, and lived experience. Today, we are building systems that appear intelligent, goal-directed, and increasingly autonomous. Whether or not these systems truly possess “agency,” their behavior is already reshaping how humans think, work, and relate to one another.

This session explores what happens when humanity begins creating entities that rival, or potentially exceed, our own cognitive capabilities.

Drawing on ideas from biology, cognitive science, philosophy, and AI research, we will examine the possibility of a “strange inversion”: a world in which humans create systems that begin to exhibit features once thought to be uniquely human.

What We’ll Explore

What is agency, and why is it so hard to define? We’ll unpack the distinction between extrinsic and intrinsic agency, and consider where current AI systems fit.

What does it mean to be human in an age of intelligent tools? Human cognition is not mere computation: it is embedded in culture, language, meaning-making, and shared narrative. We’ll explore what remains uniquely human and what may be more contingent than we assume.

How did we get here? From early biological regulation to language, culture, and modern AI, we’ll trace an evolutionary arc linking cognition, intelligence, and agency.

AI today and extrinsic agency: Modern AI systems can generate, plan, and persuade, yet their goals and norms remain externally imposed. We’ll discuss the operational and ethical implications of this distinction.

Persona, drift, and early signs of inversion: We’ll look at phenomena like persona formation and drift in large language models as empirical windows into identity-like behavior, raising questions about system stability, responsibility, and alignment.

From tools to agents, what comes next? Using illustrative scenarios, we’ll explore trajectories in which systems accumulate memory, preferences, and self-improvement capabilities, gradually blurring the line between tool and collaborator.

Co-existence, co-evolution, or crisis? Finally, we will step back and examine the possible futures of human-AI relations, from peaceful coexistence to deep co-evolution, and the risks of destabilizing technological change.

Format

This event will blend:

A conceptual lecture

Thought experiments and examples

Open discussion and audience Q&A

No technical background is required, only curiosity about the future of intelligence and humanity.

Who Should Attend

This session will be especially relevant to:

AI practitioners and technologists interested in the deep implications of these tools

Cognitive scientists and researchers

Philosophers and futurists

Policy thinkers

Anyone interested in the deep, sometimes uncomfortable questions AI presents

Related topics

AI and Society
Artificial Intelligence
Machine Learning
Robotics
Intellectual Discussions
Philosophy & Ethics
Systems Biology
