099 We built the gods that will bury us
What happens when we build something smarter than ourselves? Smarter, faster, maybe even more ethical - if it can figure out what ethics even are. In this episode, we're diving headfirst into the philosophical minefield of Artificial General Intelligence (AGI). This isn't just about robots and algorithms; this is about us - what AGI learns from humanity, what it means for the future, and whether we're ready for the world we're creating.
We’re tackling the big questions. If AGI learns morality by watching us, does it inherit our flaws? Could it outgrow its teachers and decide humanity is more liability than asset? And if AGI does everything we do, what’s left for us?
We'll explore Iain M. Banks's Culture series - a vision of a universe where AGI Minds run entire civilizations and humans are left to play, create, or simply drift. Is this the best-case scenario, or a future where our freedom is an illusion, handed to us by machines that know better?
This is more than a discussion about technology - it’s a conversation about what it means to be human in a world where machines don’t just follow orders; they think for themselves. Will AGI save us, replace us, or make us irrelevant?
Much love,
David